When the Social Security Act was passed in 1935, state and local government employees were excluded from Social Security. As a result, some state and local government workers who were not covered by a retirement system were left without benefits when they retired. To help these employees, in 1950, Congress added section 218 to the Social Security Act, allowing states to enter into voluntary agreements to provide Social Security coverage to certain state and local government employees. Section 218 authorizes the 50 states, Puerto Rico, and the Virgin Islands to enter into these agreements. Although under section 218 of the Act, the District of Columbia, Guam, the Commonwealth of the Northern Mariana Islands, and American Samoa are excluded from the definition of “state,” employees within these territories can have Social Security coverage under other provisions of the Act. Within a year of this amendment, about 30 states had executed section 218 agreements with the Social Security Administration. Subsequently, additional amendments to the Social Security Act changed Social Security and Medicare coverage for state and local government workers. Starting in 1991, the Social Security Act required all state and local government employees to be covered by Social Security if they were not covered by a qualifying state or local retirement system. Table 1 describes some of these amendments relating to the coverage of state and local government workers. More recently, Social Security has projected future financial shortfalls in its programs. According to Social Security’s Board of Trustees, the program’s annual surpluses of tax income over expenditures are expected to turn to cash flow deficits this year before turning positive again in 2012. In addition, all of the accumulated Treasury obligations held by the trust funds are expected to be exhausted by 2037. 
Once exhausted, annual program revenue will be sufficient to pay only about 78 percent of scheduled benefits in 2037 (gradually declining to 75 percent by 2084), according to the Social Security trustees’ 2010 intermediate assumptions. Many options have been proposed to help assure the financial stability of Social Security, among them requiring all newly hired public employees to participate in the program. Although this approach could improve Social Security’s finances at least temporarily and would simplify Social Security as it pertains to public employees, we have previously reported that such a change could also result in increased costs for the affected governments and their employees. The extent to which public employees are covered by Social Security varies greatly from state to state. For example, according to SSA data, in Vermont, 98 percent of public employees are covered, but in Ohio, only about 3 percent are covered. Figure 1 shows the variation in Social Security coverage of public employees among states, and appendix II provides the amount of covered and noncovered earnings by employees in each state. Within states, there is also variation in Social Security coverage among public employees working for the same employer. Some public employers provide a retirement system only for employees who meet certain criteria. Employees who do not meet these criteria and are ineligible for the retirement system, such as a pension system, are covered by Social Security. In other instances, public employers may choose to provide only Medicare coverage rather than both Social Security and Medicare. All states have a section 218 agreement with SSA that allows them to extend Social Security and/or Medicare coverage to designated public employees. 
With an agreement in force, SSA and the state can coordinate and ensure that granting coverage to public employees complies with applicable state and federal laws, since according to SSA and state officials, state laws can restrict certain employees who are members of other retirement plans from receiving Social Security coverage. SSA requires states to designate a state employee as a state Social Security administrator and establishes the basic roles and responsibilities for these administrators. For example, the guidance outlines that state administrators should serve as a bridge between state and local public employers and federal agencies, as well as administer and maintain the Social Security coverage agreement. If public employers within the states wish to extend Social Security coverage to their employees, their state administrator files a draft amendment to the coverage agreement—known as a modification—with their SSA regional office. After the state process is completed and the SSA regional office approves the modification, the public employer should begin withholding Social Security and Medicare taxes for the employee positions that are covered and send information on earnings to SSA. SSA is required by law to maintain accurate earnings records for all workers. SSA uses an employee’s earnings record to calculate the amount of Social Security benefits—retirement, disability, or survivor benefits—for an individual or their dependents. Covered earnings, which are posted to the earnings record, are subject to Social Security and Medicare taxes paid by employers and employees. IRS is responsible for ensuring that state and local government employers properly pay Social Security and Medicare taxes (also known as FICA taxes). Figure 2 shows the major responsibilities for these government partners. SSA has an established process for working with states to approve coverage. 
This approval process is intended to ensure that public employers follow applicable state and federal laws regarding Social Security coverage, as some state laws exclude certain types of employees from receiving Social Security coverage, according to SSA and state officials. For example, current New Hampshire law prohibits Social Security coverage for police and fire fighters, who belong to a distinct, more generous pension plan than other public employees in New Hampshire, according to state officials. To obtain Social Security coverage, public employers first contact their state Social Security administrator who files an amendment—known as a modification—to the state’s coverage agreement with SSA. Because all states already have an approved agreement with SSA, any changes to include additional public employers are modifications to the agreement. If the coverage is proposed for employees who are members of a retirement system, then a favorable vote of eligible employees is required. The SSA regional office reviews the modification to ensure that it complies with all relevant laws and procedures. If it is determined these public employees are authorized for coverage, the regional office approves the modification and transmits it back to the state. After coverage has been approved, the public employer begins withholding Social Security and Medicare taxes for the employees in covered positions. Under certain circumstances, SSA may approve retroactive coverage, which is effective prior to the date that SSA approves the modification. Figure 3 shows the modification approval process. States may file modifications to their coverage agreement on behalf of public employers under a variety of circumstances. 
For example, SSA guidance specifies that a state is to amend its agreement to (1) extend coverage to new groups of employees, (2) identify new public employers joining a public retirement system, (3) correct errors in coverage, (4) implement changes in federal or state law, and (5) in very limited circumstances, make certain exclusions to previously covered services or positions. According to our survey of state Social Security administrators, administrators in 36 states had approved a modification in the last 5 years. Of these 36 states, the most commonly cited reasons for approving a modification were to include additional coverage groups (23 states), followed by correcting coverage errors (20 states), and notifying SSA of new public employers joining a retirement system that SSA has already approved for coverage on a statewide basis (19 states). States do not always notify SSA of changes to covered public employers, which can lead to errors in the accuracy of SSA records. Under SSA guidance, state administrators are to provide notice and evidence to SSA when a public employer legally ceases to exist, or dissolves. Our survey of state administrators showed that SSA does not consistently receive information from states about dissolutions. Only 9 states reported collecting information on all dissolutions among their public employers, while 16 states reported collecting little or none of this information. For example, in one state we visited, over 100 school employees were granted retroactive coverage a decade after their school district had been formed. The new school district was formed by consolidating two school districts that had dissolved, but an amendment to the state’s coverage agreement had not been approved at the time of the consolidation to reflect the change. Also, when existing employers legally consolidate, another modification may be necessary to provide coverage for the new consolidated employer. 
While 11 states responded that they collect information on all consolidations that occur among their public employers, 14 states responded that they collect little to none of this information. Another 7 states reported that they did not know how much information they collect on dissolutions or consolidations. If states do not collect information on dissolutions or consolidations, they do not know about these changes to public employers and are unable to work with SSA to approve coverage and prevent errors. All states have a state Social Security administrator who is responsible for managing Social Security coverage for both state and local public employers, but state administrators vary in their efforts to implement SSA guidelines. SSA has established the basic roles and responsibilities for these administrators by providing guidance on administering the provisions of the state Social Security agreement (see app. IV). However, SSA’s guidance is broad and does not specify how a state administrator should fulfill these responsibilities. As a result, state administrators vary in the extent to which they meet their responsibilities. For example, while SSA’s guidance notes that state administrators are to administer and maintain the coverage agreement, the guidance does not provide detail on the types of activities that are necessary for meeting this responsibility—such as the frequency with which modifications should be reviewed to determine whether changes to public employers have occurred. For example, as noted above, both New York and Missouri were unclear on their administrative responsibility, resulting in both states being at risk for coverage errors. Additionally, SSA’s guidance notes that state administrators should advise public employers on Social Security, Medicare, and tax withholding issues; and according to our survey, only 14 states reported doing this to a very great or great extent. Likewise, only 18 states reported following SSA’s guidance on providing information to public employers on policies, procedures, and standards to a very great or great extent (see fig. 4). In the absence of more detailed SSA or other guidance on how states should manage Social Security coverage for state and local public employers, the National Conference of State Social Security Administrators (NCSSSA) in 2003 developed a list of recommended practices for use by state administrators. These recommended practices help state administrators carry out SSA’s guidance. For example, one NCSSSA practice recommends that state administrators maintain an electronic database so that they can meet the SSA guidance on maintaining physical custody of Social Security coverage agreements. 
While 37 states reported maintaining an electronic database of state and local public employers with Social Security coverage, we found that only 28 of these states’ databases include more detailed coverage information such as the date of each employer’s modifications (see table 2). Moreover, 14 states could not provide the total number of public employers with approved coverage for their employees in their state. We also found differences in the extent to which states review these databases to check for accuracy and completeness. Of the 37 states with an electronic database, 5 states reported not updating their information and 1 state did not know how often they updated their database information. Further, only 7 states reported taking all of the following steps to ensure the information was reliable: conducting routine monitoring of the data, using edit checks to identify out-of-range entries, and verifying the data for accuracy. SSA’s guidance also sets forth that state administrators are to provide certain information or advice to public employers, but falls short in denoting specific ways such outreach activities can be carried out, such as the format for distributing information and time frames for carrying out such activities. For example, one state administrator told us that he regularly attended local public employer association conferences so that he could identify new public employers and provide advice to them. However, officials in another state told us that they did not have any formal outreach practices and updated their information on new public employers when they read about them in the newspaper. As a result, the state administrator could not ensure that its list of public employers was current. 
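The data reliability steps reported by states above (routine monitoring, edit checks to identify out-of-range entries, and verifying data for accuracy) can be illustrated with a brief sketch. The record fields, valid date range, and employer names below are hypothetical, intended only to suggest how such checks might be automated; they are not drawn from SSA guidance.

```python
# Hypothetical edit checks for a state's database of covered public employers.
# Field names and the valid date range are illustrative, not SSA-prescribed.

def edit_check(record, current_year=2010):
    """Return a list of problems found in one employer record."""
    problems = []
    # Required fields must be present and non-empty.
    for field in ("employer_name", "modification_number", "modification_year"):
        if not record.get(field):
            problems.append(f"missing {field}")
    # Section 218 agreements began in 1951, so earlier (or future)
    # modification years are out-of-range entries.
    year = record.get("modification_year")
    if isinstance(year, int) and not (1951 <= year <= current_year):
        problems.append(f"modification_year {year} out of range")
    return problems

def monitor(database, current_year=2010):
    """Routine monitoring: flag every record that fails an edit check."""
    flagged = {}
    for rec in database:
        problems = edit_check(rec, current_year)
        if problems:
            flagged[rec.get("employer_name", "(unnamed)")] = problems
    return flagged

database = [
    {"employer_name": "Springfield School District",
     "modification_number": "101", "modification_year": 1962},
    {"employer_name": "Shelbyville Fire District",
     "modification_number": "", "modification_year": 2049},
]
print(monitor(database))
```

A real review would add checks against source documents (the approved modifications themselves), which is the verification step states reported performing manually.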
While SSA’s guidance is limited, NCSSSA has developed recommended practices for conducting outreach efforts to public employers, such as presenting at local association meetings, providing information via a Web site or newsletters, or pursuing other means of outreach. Such efforts can help states educate and respond to questions about coverage agreements. According to our survey, nine states reported regularly (i.e., at least annually) distributing a newsletter or providing training, while just over one-quarter of states contact public employers included in coverage agreements to update their information (see fig. 5). In contrast, 21 states reported that they do not conduct any of these outreach activities. Ten of these states have nearly universal Social Security coverage for their public employees and four states had less than half of their public employees covered by Social Security. The variation in how states implement the activities outlined in SSA guidance can also be explained in part by the training, experience, and staffing of state administrators. Some state administrators reported they were initially unfamiliar with coverage agreements and noted there was little or no transfer of knowledge to help them learn about coverage issues. Twenty-seven administrators reported receiving little or no training from their predecessor. Of those administrators who had not received training, 93 percent had never worked on Social Security coverage issues at all prior to becoming the administrator. Administrators cited several reasons for the lack of training or knowledge-sharing by predecessors, including classification of these positions (e.g., political appointees), turnover among staff, and lack of funding. To address this training gap, NCSSSA developed a training module which they recently began providing to state administrators. As of July 2010, 11 state administrators have received this training, according to NCSSSA officials we interviewed. 
Additionally, in our survey, the availability of staff with expertise in coverage agreements was identified as a great or very great challenge by 19 states. The amount of time dedicated to the position of state administrator also varied among states. Most state administrators view the role as an ancillary responsibility, and not as their primary duty. Over half of those working as state administrators reported spending 10 percent or less of their time on state administrator responsibilities. SSA relies primarily on public employers to correctly interpret their coverage and accurately report covered wages of public employees, according to SSA officials. However, some public employers do not understand that a modification to the state’s agreement with SSA is required before amending coverage under section 218 and reporting Social Security wages. For example, a small fire district in one state reported Social Security wages for more than a decade without approved coverage to do so, not realizing coverage under an agreement between SSA and the state was required. Several SSA officials told us that they also rely on IRS to review the compliance of public employers. The Social Security Act requires SSA to ensure that all workers have accurate earnings records. SSA requires employers—public and private—to use SSA’s process of wage reporting (see fig. 6) to report Social Security covered wages. In 2007, private and public employers reported nearly $5 trillion in covered wages, with public employers representing $528 billion of that amount. (See app. II on covered and estimated noncovered wages for state and local government employment in 2007.) The Form W-2 is the annual report of a worker’s wages, including wages covered for Social Security and for Medicare. SSA posts the wages to the employee’s earnings record on its Master Earnings File and provides IRS with the W-2 information so IRS can monitor accurate payment of Social Security taxes. 
SSA and IRS annually match the amounts on Form W-2 with wages that employers report to IRS on a quarterly basis. When the amounts match, no further steps are taken. When the amounts do not match, SSA and IRS have processes to reconcile the amounts, including letters to contact the employer. SSA does not have a process to ensure that public employers only report wages for covered employees and that such wages are associated with valid coverage under the state’s coverage agreement. As long as the wage amounts on the Forms W-2 and 941 match, SSA does not follow up to ensure that reported wages actually reflect public employees who are covered by their state’s agreement. SSA officials told us the agency does not compare the reported wages with coverage modifications applicable to the employer. While wage reports identify employees by their name and Social Security number, procedures and data do not exist to verify that employees are in positions that are covered by their state’s agreement. SSA regional officials told us they answer questions by public employers about whether employees are covered based on their interpretation of coverage agreements. However, SSA officials are not able to check if the public employers correctly report covered earnings. While SSA does not currently monitor the accuracy of public employee coverage, prior to 1987, SSA conducted regular oversight activities to ensure more accurate reporting. Prior to 1987, state administrators gathered Social Security payments in lieu of FICA taxes from public employers with approved coverage. States were therefore accountable for payments from public employers and employees in their state. SSA was responsible for ensuring that state and local government employers made the correct payments for the Social Security Trust Funds. Given its responsibility, SSA conducted compliance reviews and collected data on public employers, such as lists of which public employers were part of the coverage agreement. 
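The annual matching process described above (comparing the Social Security wages on Forms W-2 with the wages employers report quarterly to IRS on Form 941) can be sketched roughly as follows. The employer names and wage figures are invented for illustration, and the real reconciliation involves additional steps, such as contact letters, that are omitted here.

```python
# Illustrative sketch of the annual wage reconciliation: for each employer,
# the annual Social Security wages from Forms W-2 are compared with the sum
# of the employer's quarterly Form 941 wage reports. Data are hypothetical.

def reconcile(w2_totals, form_941_quarterlies):
    """Return employers whose W-2 total differs from their 941 total."""
    mismatches = {}
    for employer, w2_total in w2_totals.items():
        total_941 = sum(form_941_quarterlies.get(employer, []))
        if w2_total != total_941:
            # In practice, SSA and IRS would follow up with the employer,
            # including by letter, to resolve the discrepancy.
            mismatches[employer] = {"w2": w2_total, "941": total_941}
    return mismatches

w2_totals = {"City of Anytown": 4_000_000, "Anytown School District": 2_500_000}
form_941_quarterlies = {
    "City of Anytown": [1_000_000, 1_000_000, 1_000_000, 1_000_000],
    "Anytown School District": [600_000, 600_000, 600_000, 600_000],
}
print(reconcile(w2_totals, form_941_quarterlies))
```

Note that this matching only tests whether the two reported amounts agree; as the report explains, it cannot detect wages that match but were reported for employees who are not actually covered under a state's agreement.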
In 1987, a legislative change took effect requiring the IRS to collect Social Security taxes from public employers and employees directly. As a result, public employers were required to withhold Social Security taxes from their employees and pay taxes to the Treasury using the same procedures as private sector employers. SSA and the states reduced staffing, management attention, and oversight of coverage agreements. SSA also reduced its oversight of public employers, including discontinuing compliance reviews and ending certain data collection. In 1996, SSA’s Inspector General found that many public employers were at risk of not complying with their states’ coverage agreements, partly due to SSA’s reduced focus on administration after this statutory change. The Inspector General recommended that SSA pursue regular compliance reviews; develop a Memorandum of Understanding (MOU) with IRS; and study the possibility of universal coverage of public employers to eliminate the inherent complexity of their coverage. In 2002, SSA and IRS signed an MOU regarding the compliance of state and local government employers that specified each agency’s role, including IRS’s responsibility to conduct compliance reviews of public employers. Among other things, the MOU established a joint SSA-IRS committee to share information on policies, procedures, and compliance issues. SSA continues to lack basic data on the public employers for which it has approved coverage, preventing the agency from monitoring potential errors. According to Standards for Internal Control in the Federal Government, data are important for an agency to manage its operations and measure its activities. However, SSA does not track the number of public employers that are under a state’s approved coverage agreement or various activities that could expose public employers to greater risk of committing coverage errors. 
From data given to us by all 10 SSA regions, we estimated that since 1951 when coverage agreements began, SSA has approved as many as 28,798 modifications extending Social Security coverage for public employers. (See app. III for information on the number and year of the last modification approved by SSA for each state as of January 1, 2010.) However, 6 of 10 SSA regional offices no longer collect any information on which public employers have approved coverage, and SSA officials told us they have not required regional offices to update their data, partly due to resource constraints. SSA has also not provided the regional offices with guidelines for what should be collected and how. As a result, six regions currently collect no data at all, while the four regions still collecting data varied in the data formats and level of detail of the information collected. For example, based on data we reviewed from regional officials, one region had a database with details on public employers and their coverage, while another region had a list with little information other than the names of public employers and the date that SSA approved coverage. Without comprehensive and uniform data, SSA may miss opportunities to prevent or more quickly correct errors related to public employee wages. For example, if all regions tracked information such as recent approved modifications, SSA could better identify which states had less activity, and could follow up to ensure that those states and public employers were aware of the circumstances that would warrant filing a modification. In addition, SSA is unable to fully support IRS in its efforts to ensure compliance. For example, SSA does not validate IRS’s database of public employers—including covered employers—which may not always contain correct data. Moreover, the lack of current or consistently tracked data can limit the efficiency with which regions research or answer questions about a particular employer. 
For example, one SSA regional office official said that in order to identify a modification with information relevant to a particular employer, it takes up to an hour to manually search paper files for any modification made after 1987. Officials in nearly all 10 SSA regions told us their oversight efforts to ensure accurate reporting of public employers generally involve reacting to errors or questions brought to their attention. When a concern is identified, SSA regional officials respond to address the coverage of a particular employer based on specific facts and circumstances. For example, IRS conducted an audit of a public port and worked with SSA to determine whether the employer had covered employees, according to SSA officials and documents. SSA determined that the employer’s predecessor had a modification for coverage, but the new employer did not have coverage for its full-time employees. SSA assisted the state and the employer to file a modification that would retroactively grant coverage to these employees. Had SSA actively worked with the state and used data to observe trends with modifications, the state and SSA may have prevented this error or caught it sooner. SSA has also been asked to resolve errors involving public employers that are subject to a modification, but these employers and their employees have not paid Social Security taxes. If SSA was notified of the error and evidence of employees’ earnings was produced by employers or employees, SSA officials told us that the agency would correct their earnings records. IRS is authorized to collect back-taxes subject to its statute of limitations, which is generally 3 years. Unfortunately, some of the coverage errors in Missouri school districts involved public employers and employees who stopped paying Social Security taxes in the 1980s. Thus, the U.S. 
Treasury and Social Security Trust Funds effectively bear the cost of any taxes employers or employees did not pay beyond the 3-year statute of limitations, according to SSA and IRS officials. Similarly, if an error goes undetected or uncorrected, then public employees may not have Social Security earnings posted to their record. This could result in employees who should be covered by Social Security not becoming eligible or not receiving the appropriate amount of Social Security benefits in the event of retirement, disability, or survivorship. SSA officials told us that the agency does not use existing information to assess the extent to which coverage errors are occurring and the risk that these errors pose to the accuracy of public employer wage reporting. According to Standards for Internal Control in the Federal Government, risk assessment is the identification and analysis of relevant risks associated with achieving the agency’s objectives. SSA has many internal and external sources of information it could use to assess the risks of inaccurate coverage of public employees. However, SSA headquarters officials told us that SSA may not be aware of all errors or related factors that regional offices address, unless they are elevated to headquarters for assistance. SSA officials in headquarters and regional offices generally told us that SSA in recent years has not routinely shared experiences across regions, including lessons learned from coverage errors and factors that contribute to them. For example, one SSA regional office helped resolve a coverage problem that involved a consolidation of a state’s capital city and the county in which it was located. Because the public safety officers of the city were not covered while the public safety officers of the county were covered, the consolidation had the potential to change the Social Security coverage of some public safety positions. 
Under current budgetary pressures, some states are considering or pursuing similar consolidations to reduce costs; however, SSA headquarters did not share lessons learned from this example with other regions so that they could be better prepared to address similar issues in the future. SSA headquarters also does not routinely review internal legal opinions—known as coverage determinations—or modifications that SSA regional offices have approved to correct coverage errors. SSA officials told us that they have not analyzed such information in a systematic approach to identify any patterns or common issues. Also, SSA officials in 8 of 10 regions told us that IRS does not typically share the results of its enforcement activities, and IRS officials agreed. As a result, SSA is not always aware of the coverage errors that IRS finds during examinations and compliance checks. SSA hosted a conference in April 2010 with IRS and state administrators to explore options for improving how coverage agreements are administered. Based on this conference, SSA identified possible proposals to reduce the complexity of public employees’ coverage, including the potential for universal coverage. It also formed 11 committees consisting of SSA and state or IRS officials. Each week, at least one committee is supposed to meet, and quarterly conference calls are planned for all participants to discuss their progress starting in September 2010. According to SSA, two committees are of the highest priority: the committee to improve training of federal, state, and local governments, as well as the committee on policies and procedures. A list of the 11 committees and their objectives is in appendix V. Since 1987, IRS has been the primary agency responsible for ensuring that public employers are accurately paying Social Security and Medicare taxes, and its level of enforcement has increased over the years. 
According to IRS officials, IRS performed limited enforcement work during the first 10 years after it became responsible for receiving public employer Social Security taxes. In 1997, IRS started a state and local government compliance initiative to provide outreach to public employers. In fiscal year 2000, IRS created the Federal, State and Local Governments office (FSLG) to facilitate more accurate reporting and collection of Social Security and Medicare taxes by public employers, among other activities. Initially, FSLG allocated most of its time to educational activities, but in fiscal year 2004 began to focus more on enforcement activities. IRS’s enforcement program consists of compliance checks and examinations. IRS reviews selected employers each year, based partly on its workload and staff availability. A compliance check is a method of reaching out to public employers, and is intended to be educational. Compliance checks review public employer tax returns and are typically less detailed than an examination. Generally, compliance checks are performed on smaller public employers, partly to allocate IRS enforcement resources. By conducting compliance checks on smaller employers, IRS can review and educate a greater number of public employers, while still allocating staff time and resources to conduct more time-consuming examinations on larger, more complex public employers. For compliance checks, IRS completes a checklist of selected employment tax areas. Our review of the checklist found that it includes four questions about Social Security coverage agreements: (1) Does the taxpayer have an agreement? (2) Does the taxpayer have a copy of the agreement? (3) What are the number, date, and description of the modification to the agreement? (4) What categories of workers are excluded from Social Security coverage? If issues are found during the compliance check, IRS provides the employer with a discrepancy letter identifying problems to be resolved. 
We reviewed a nongeneralizable sample of 20 compliance checks completed in fiscal year 2009 that IRS identified as having issues related to Social Security coverage agreements. In 11 of these cases, the public employer was not covered under the state's Social Security coverage agreement. In 6 of the other cases, in which the state or local government employer was actually covered under the state's coverage agreement, IRS found that the employer did not have a copy of its modification, and in one of these cases, the employer did not know one was in effect. In another case, a school district that was covered under its state agreement dissolved, and then combined with another school district that also was subject to a modification. The school district being reviewed was not certain if the coverage agreement was still in effect and planned to contact the state Social Security administrator to determine if a new modification was necessary. IRS also has the authority to conduct examinations of public employers' records to determine the correct tax liability. Unlike compliance checks, examinations are in-depth, formal audits that may result in a tax assessment. Examinations review many areas, including proper Social Security withholding, fringe benefits, and public retirement systems. For each examination, the IRS examiner is supposed to obtain information about the applicable Social Security coverage agreement and determine which employees are covered. In making coverage determinations, IRS examiners must review employer records and may informally contact state administrators and SSA. Figure 7 shows the basic procedures IRS uses to determine if public employees are covered by Social Security or Medicare. Generally, examinations are performed on larger public employers, and they took an average of almost 9 months to complete in fiscal year 2009.
If errors are found, IRS can either make a tax assessment for the amount owed by the employer or, among other things, refund an overpayment. Generally, IRS does not provide information about its enforcement activities to SSA or state administrators. IRS is subject to statutory provisions that generally prevent it from disclosing taxpayer information unless there is an exception authorizing disclosure in the law. One such exception is for purposes of administering certain portions of the Social Security Act, in which case the information can be disclosed to SSA upon a written request. The MOU between IRS and SSA states that it serves as such a request, but IRS still does not generally tell SSA about its examinations and compliance checks because, according to IRS officials, many of its examiners are not aware of the MOU. According to IRS officials, state administrators do not have an exception to the disclosure requirements so the agency is prevented from providing information to them. IRS receives limited information about public employers’ Social Security coverage. Employers are generally required to submit quarterly tax returns to IRS providing information on wages and Social Security and Medicare taxes paid. According to an IRS official, IRS started to receive copies of coverage modifications from SSA around fiscal year 2000, but IRS generally does not distribute copies of the modifications to all field offices. To obtain a complete set of modifications, IRS officials in one field office told us they went to the SSA regional office and duplicated them. Although some IRS offices lack a complete set of modifications, the agency maintains a database of public employers and over half of these employers are designated as being covered under a Social Security coverage agreement. 
To increase its knowledge about state and local government employers' Social Security coverage, in 2009, IRS developed an assessment document designed to identify states with potential coverage problems. The assessment document is filled out by IRS officials and the state administrator and is intended to capture general information such as the name of the state administrator and staff, and the applicable SSA and IRS officials responsible for that state. The assessment also requests the number of modifications and whether the state maintains a list of employers covered under its coverage agreement. Ultimately, IRS plans to use the information obtained to identify states needing outreach and education. By October 2009, IRS had developed a draft document and later obtained and incorporated input from SSA and NCSSSA officials. IRS pilot-tested it in January 2010 and, according to an IRS official, started using the document in all states in July 2010. IRS officials noted that they intend to use the document as the basis for continued communication, outreach, and enforcement. In addition, from 2008 to 2010, an advisory committee to IRS developed a detailed self-evaluation document for public employers to assess their own compliance. The self-evaluation document expands on the IRS checklist used in compliance checks to include understandable information on employment tax requirements, including Social Security and Medicare taxes. IRS plans to refine and post the document on its Web site by the end of 2010 in an attempt to enhance voluntary compliance by public employers. In 2006, the Treasury Inspector General for Tax Administration (TIGTA) issued a report that reviewed IRS's FSLG workload selection process and identified issues related to tracking the effectiveness of the indicators used to select cases for review and analyzing the results of compliance checks. IRS uses 14 indicators to select cases for review from over 103,000 state and local government employers.
One indicator is used to identify issues related to Social Security coverage by computing the ratio of Social Security wages to total wages paid. Under this computation, a lower ratio of Social Security wages to total wages increases the chances that an employer is selected for review. However, a low ratio may not always indicate noncompliance with the state’s Social Security coverage agreement. For example, a Social Security coverage agreement may not include some employees and would result in a lower ratio of Social Security wages to total wages paid. TIGTA found that IRS was not systematically analyzing the effectiveness of its selection process. The TIGTA report said that, with this information, IRS could identify more productive indicators and provide baseline measures of the levels of noncompliance identified. IRS officials told us that they are currently conducting a special analysis of the indicators used for its examinations and compliance checks conducted in 2006, 2007, and 2008, and hope to complete this analysis by 2011. In 2006, TIGTA also found that IRS was not analyzing the results of completed compliance checks to identify common issues found during reviews, and our recent work found that IRS still does not routinely conduct such analysis. For compliance checks, IRS tracks the number of employers that were issued a discrepancy letter, but not the number that had issues related to Social Security coverage. In fiscal years 2007 to 2009, IRS issued discrepancy letters to over 79 percent of the public employers that had a compliance check. However, IRS does not know what percent of the employers did not comply with their state’s Social Security coverage agreement. In 2009, IRS performed a special analysis of its 2008 compliance checks to determine the issues found during the year. IRS found that 4.1 percent of all of its closed compliance checks had Social Security coverage issues. 
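The wage-ratio indicator described above amounts to a simple proportion check. The sketch below is a hypothetical illustration in Python; the field names, the 0.5 threshold, and the example figures are our assumptions for illustration, not IRS's actual selection criteria.

```python
# Hypothetical sketch of the wage-ratio selection indicator.
# Field names, threshold, and figures are illustrative assumptions.

def coverage_ratio(ss_wages, total_wages):
    """Ratio of Social Security wages to total wages paid."""
    if total_wages <= 0:
        return None  # no wages reported; a ratio cannot be computed
    return ss_wages / total_wages

def flag_for_review(employers, threshold=0.5):
    """Flag employers whose ratio falls below the threshold.

    A low ratio raises the chance of selection but is not proof of
    noncompliance: a coverage agreement may legitimately exclude some
    employees, which also lowers the ratio.
    """
    flagged = []
    for e in employers:
        r = coverage_ratio(e["ss_wages"], e["total_wages"])
        if r is not None and r < threshold:
            flagged.append((e["name"], round(r, 2)))
    return flagged

employers = [
    {"name": "City A", "ss_wages": 950_000, "total_wages": 1_000_000},
    {"name": "District B", "ss_wages": 200_000, "total_wages": 1_000_000},
]
print(flag_for_review(employers))  # only District B's low ratio is flagged
```

As the comment in `flag_for_review` notes, this kind of screen explains why TIGTA pressed IRS to measure the indicator's effectiveness: a low ratio can reflect a lawful exclusion rather than an error.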
In 2006, TIGTA concluded that by analyzing the results of its compliance checks, IRS could identify common issues and focus its work for future compliance checks. IRS is currently conducting a special analysis of the results of its compliance checks, as well as its examinations conducted in 2006, 2007, and 2008. It plans to use this information and information from other special projects to identify the most common areas of noncompliance. IRS will then provide focused outreach to state and local government employers to address these areas. This outreach could include publishing articles in the IRS newsletter or other industry journals. IRS officials told us that they anticipate completing this analysis by 2011. Table 3 provides information on the number of compliance checks completed and discrepancy letters issued in fiscal years 2007 to 2009. For examinations, FSLG tracks the number of cases that resulted in an adjustment to the employers’ taxes, but does not know if such tax adjustments are due to errors with Social Security coverage agreements. FSLG officials told us they do not yet know the prevalence of coverage problems and have not done enough audits to fully understand the extent of the problems. We requested the closed examinations for fiscal year 2009 that had issues related to Social Security coverage agreements. FSLG officials stated that due to constraints in their information system, they could not identify all of these cases and, at best, could provide a list of examinations that might indicate Social Security coverage agreement issues using the amount of wage adjustments. We selected and reviewed a sample of 10 closed examinations provided by IRS that had large wage changes. In 5 of these examinations, the public employer did not have an error related to its coverage agreement. 
In 3 of the other 5 cases in which errors were found with coverage agreements, the public employer misclassified the employees for whom it was not paying Social Security taxes. For example, some Social Security coverage agreements exclude certain categories of employees, such as student workers. In one of these cases, IRS conducted an examination of a public employer with student workers and determined that some of the employees classified as students were not actually taking classes at the time. As a result, IRS found that the employer was responsible for paying Social Security and Medicare taxes for these employees. The following table provides information on the number of completed examinations and the number of cases with errors in fiscal years 2007 through 2009. In fiscal years 2007 to 2009, over 89 percent of employers examined had tax adjustments, but the reasons for those tax adjustments are not tracked. In 2009, IRS issued a report on community colleges that provides an indication of how well some state and local government employers were following their state’s coverage agreements. The primary objective of the report was to measure the compliance level of community colleges and identify specific issues of noncompliance. IRS selected a random sample of 88 community colleges for examination. Although the community college special project results cannot be applied to all public employers, IRS found that 10 percent of the 88 employers reviewed incorrectly excluded workers who should have been covered by their state’s Social Security coverage agreements. SSA and IRS do not currently have the information needed and procedures in place to effectively and efficiently provide oversight of Social Security coverage for public employees. 
When IRS became responsible in 1987 for collecting these taxes and overseeing their accuracy, SSA ceased key monitoring activities that could help ensure that states and public employers are following the states' agreements for Social Security coverage. Ensuring the accuracy of the Social Security records for public employees is still a requirement for SSA and should be a priority for the managers of SSA and IRS. At present, SSA and IRS managers do not know the extent to which wages are reported accurately or Social Security taxes are paid in accordance with program rules. States can also play a vital role in the oversight structure of Social Security coverage for public employees, but they lack clear guidelines with specific responsibilities to ensure state participation. Absent additional management attention and a system to monitor the accuracy of public employer wage reporting, Social Security benefits and tax payments may be inaccurately reported. Without a coordinated monitoring process between SSA and IRS to make sure that public employers are complying with state coverage agreements, opportunities to identify and correct errors will be lost. Given the projected fiscal challenges of the Social Security program in the coming decades, every attempt should be made to assure that coverage is correctly applied so that employers and employees are reporting earnings and paying taxes when required to do so.

To improve SSA's management oversight of retirement benefits for public employees, we recommend that the Commissioner of Social Security, in consultation with IRS, state administrators, and public employers, develop procedures for monitoring the accuracy of Social Security earnings records. This could include (1) improving data collected on public employers, (2) identifying risk factors using existing SSA information and IRS audit findings, and (3) targeting public employers with those risk factors for follow-up reviews on an ongoing basis.
To improve the states’ administration of public employer wage reporting, we recommend that the Commissioner of Social Security, in consultation with the National Conference of State Social Security Administrators, modify SSA’s policy guidance to clarify state responsibilities governing their oversight of public employers and set clear expectations for the steps state administrators should take in implementing these responsibilities. To improve the process for identifying and correcting errors, we recommend that the Commissioner of Internal Revenue track errors found through its compliance efforts on Social Security and Medicare taxes and share results with SSA, to the extent permitted by federal law. We provided a draft of this report to the Social Security Administration and the Internal Revenue Service. In its written response, reproduced in appendix VI, SSA stated that our report fairly represented the key players involved in the administration of Social Security coverage agreements and provided a balanced representation of the issues. SSA generally agreed with all of our recommendations, but suggested that we reword our first recommendation to clarify the duties of the respective agencies. SSA also stated that IRS should collect data on employees covered under Section 218 agreements. We changed the language in the recommendation to clarify that SSA should monitor the accuracy of Social Security earnings records and highlighted that existing Social Security information as well as IRS audit findings may be useful in developing risk factors. While we believe that any monitoring effort should be coordinated with IRS and other stakeholders, our recommendation is intended for SSA to take the leadership role in such an effort. As we note in the conclusion above, SSA holds the primary responsibility of ensuring accurate Social Security records for public employees. SSA also provided technical comments that were incorporated into this report as appropriate. 
In its written response, reproduced in appendix VII, IRS stated that our report made an important contribution to the concept of ensuring compliance with coverage agreements. IRS agreed with our recommendation that it should track errors found through its compliance efforts on Social Security and Medicare taxes and stated that it has begun identifying and tracking such errors. IRS also stated that it will ensure that information applicable to these errors is shared with SSA to the extent allowable by the Internal Revenue Code. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to relevant congressional committees. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VIII. To obtain information on how the Social Security Administration (SSA) ensures accurate coverage of public employees, we interviewed SSA officials in Headquarters and in all 10 Regional Offices. We asked officials about the roles and interactions of SSA, state administrators, public employers, and the Internal Revenue Service (IRS). We asked about SSA’s data, educational outreach, and oversight, as well as how coverage errors are detected and corrected. We reviewed relevant federal laws and regulations. We also reviewed documentation from SSA, such as policies and procedures, training, Inspector General reports, the Memorandum of Understanding (MOU) between SSA and IRS, and meeting minutes since fiscal year 2004 of the joint SSA-IRS committee. 
To understand the coverage agreement process, we reviewed selected original agreements, modifications (i.e., amendments) that provide coverage to public employees, internal legal opinions known as coverage determinations, and documents on specific coverage errors such as the report of the Federal Section 218 Task Force for Missouri School Districts. To provide background information on the number of covered state and local government employees and the amount of covered earnings, we requested data from SSA on covered state and local government employment from 2007—the most recent year for which data were available. Specifically, we requested the number and percent of state and local government workers with and without Social Security coverage in each state. We also requested the amount of earnings (i.e., wages) of state and local government workers that were covered and not covered in each state. SSA’s Office of Research, Evaluation and Statistics used its 1 percent sample of Social Security numbers, which is generalizable to the universe of workers. The sample contains earnings data that employers report to SSA on Form W-2. The data do not specify the source of coverage, such as coverage agreements under section 218 or the provisions under section 210 of the Social Security Act. For the purposes of our tables, the data assume that state and local government workers do not have other, nonpublic employment. To assess the reliability of the data, we reviewed relevant documents and interviewed knowledgeable SSA officials. On the basis of this information, we determined that the data for 2007 were sufficiently reliable for the purposes of our review. To provide information on how many modifications to the coverage agreement SSA has approved by state, we requested the number and year of the most recently approved modification for each state. 
From SSA, we requested that the 10 regional offices provide the number and date of the amendment (i.e., modification) most recently approved by SSA as of January 1, 2010. From states, we requested the same information through our Web-based survey. We then compared the results and performed follow-up work where needed. We also reviewed relevant documents and interviewed knowledgeable SSA and state officials about the process to approve modifications for coverage. Based on these steps, we determined that the data we specially requested on the number and year of the last approved modification were sufficiently reliable for the purposes of our review. To understand the role of states in ensuring accurate coverage, we visited four states—California, Colorado, New Hampshire, and Rhode Island. We selected these states to provide a variety of experiences, based on the percent of covered employees, geographic dispersion, and referrals from SSA or the National Conference of State Social Security Administrators (NCSSSA) indicating how active each state administrator is. During our site visits, we interviewed the state officials who administer the state's coverage agreement with SSA. We asked about the role of the state administrator, the practices used to administer the coverage agreement, and the staffing and funding to do so. We also asked about interactions with SSA and IRS. We reviewed documents from states, such as policies and procedures, and select parts of the coverage agreement. We did not review state laws or verify information pertaining to state laws that were given to us in the course of our work. We also conducted interviews and obtained documents from officials of the NCSSSA. To obtain further information on states administering Social Security coverage agreements, we conducted a Web-based survey that was sent to state administrators in all 50 states, Puerto Rico, and the Virgin Islands.
The survey was conducted between January and February 2010 and had a response rate of 100 percent. The survey included questions about the characteristics of states’ coverage agreements, the extent to which state administrators conduct activities to manage these agreements, as well as the challenges state administrators face in administering these agreements. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took a number of steps to minimize nonsampling errors. For example, a social science survey specialist designed the questionnaire in collaboration with GAO staff with subject matter expertise. As part of survey development, we received feedback from NCSSSA. The questionnaire also underwent a peer review by a second GAO survey specialist. We also pretested the questionnaire with appropriate officials in four states—Colorado, Florida, Indiana, and Nevada—to ensure that the questions and information provided to respondents were appropriate, concise, and clearly stated. We selected pretest states based on variation in the percentage of covered public employees, geographic dispersion, and the level of state administrator involvement identified by NCSSSA officials. The pretesting took place during November and December 2009 by telephone. Since these were Web-based surveys, respondents entered their answers directly into electronic questionnaires. This eliminated the need to have data keyed into databases, thus removing an additional source of error. Finally, to further minimize errors, computer programs used to analyze the survey data were independently verified by a second GAO data analyst to ensure the accuracy of this work. 
While we did not validate specific information that administrators reported through our survey, we reviewed their responses and took steps to determine that they were complete, reasonable, and sufficiently reliable for the purposes of this report. For example, during pretesting, we took steps to ensure definitions and terms used in the survey were clear and familiar to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section were appropriate. In our review of the data, we also identified and logically fixed skip pattern errors—questions that respondents should have skipped but did not. On the basis of our checks, we believe our survey data are sufficient for the purposes of this report. To understand how IRS identifies incorrect Social Security taxes for public employees, we held interviews with IRS managers in the Federal, State and Local Governments office (FSLG), which is responsible for the tax compliance of federal, state, and local government employers, including their Social Security coverage. We asked FSLG officials about how IRS selects state and local government employers to review, performs examinations and compliance checks, corrects any errors in coverage and taxes, and interacts with SSA and states. We reviewed relevant federal laws and regulations. In addition, we reviewed relevant documents, including policies and procedures, training materials, criteria to select employers for review, the MOU between SSA and IRS, reports from special projects, and publicly available forms and publications. We obtained IRS data on enforcement activities it conducted between fiscal years 2007 and 2009, including examinations and compliance checks completed in each state, and the results of these enforcement activities. For examinations, IRS provided information about whether the examination resulted in a tax adjustment. 
For compliance checks, IRS provided information about the number of cases that resulted in a discrepancy letter. We reviewed documents and contacted knowledgeable IRS officials about the data. For the purposes of our review, we determined these data were sufficiently reliable. To understand how IRS identifies Social Security errors for public employees, we reviewed a judgmental sample of FSLG audit files for 10 examinations and 20 compliance checks of state and local government employers that were completed in fiscal year 2009. Because IRS does not track this information, we asked FSLG to provide lists of examinations and compliance checks with an indication of noncompliance for Social Security coverage. IRS officials told us that the indications of noncompliance, particularly for examinations, are imperfect. For example, IRS examiners may not consistently use the codes to denote noncompliance related to Social Security coverage agreements. Because examinations are in-depth reviews that may result in changes to reported earnings and taxes, we selected 10 of 34 examinations with the larger increases and decreases in Social Security or Medicare earnings. For compliance checks, IRS identified 20 closed compliance checks that found issues with approved Social Security coverage. We selected all of these cases for our review. We reviewed the files to gather information on how IRS detected errors, what the errors were, and how they were resolved. The review of these files is for illustrative purposes and is not generalizable to all state and local government employers. We conducted this performance audit from July 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

[Appendix II table: estimated totals of covered and noncovered earnings by state, in millions of dollars, rounded to the nearest million. "Other" includes American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands.]

Although most Social Security coverage of state and local government employees is obtained through coverage agreements, additional Social Security provisions affect the coverage of other state and local government employees. For example, section 210 of the Social Security Act extends mandatory coverage for Social Security and Medicare to state and local government employees who are not members of a qualifying retirement system, subject to certain exceptions.

[Appendix III table: number of the last approved modification, by state.] Consistent with the numbering sequences of SSA and states, the table excludes a state's original agreement. The original agreement is not counted as a modification because it is not an amendment to the agreement.

State Social Security administrators' responsibilities include the following:
- Administer and maintain the section 218 agreement that governs voluntary Social Security and Medicare coverage by public employers.
- Prepare modifications to the section 218 coverage agreement to include additional coverage groups, correct errors in other modifications, identify additional public employers that join a covered retirement system, and obtain Medicare coverage for public employees whose employment relationship with a public employer has been continuous since March 31, 1986.
- Provide SSA with notice and evidence of the legal dissolution of covered state and local public employers.
- Conduct referenda for Social Security and Medicare coverage for services performed by employees in positions under a public retirement system.
- Resolve coverage and tax questions associated with Section 218 agreements and modifications with SSA and IRS.
- Advise public employers on Social Security, Medicare, and tax withholding matters.
- Provide information to public employers as appropriate in accordance with the state's enabling legislation, policies, procedures, and standards.
- Provide advice on Section 218 optional exclusions applicable to the state and/or individual modifications, and advice on state and local laws, rules, regulations, and compliance concerns.
- Maintain physical custody of the state's Section 218 agreement, modifications, dissolutions, and intrastate agreements.

Appendix V: List of Committees Formed by SSA at Its April 2010 Conference
- Develop and implement ways to improve interagency relationships and collaboration.
- Recommend uniform procedures for the regions and state administrators.
- Research policies and recommend improvements.
- Develop ideas that will improve a centralized database.
- Suggest and develop training materials that will help new state administrators learn the position.
- Improve succession planning procedures.
- Improve training at all levels of federal, state, and local government by creating joint training sessions.
- Research to identify areas of policy or procedures that may be improved.
- Raising Awareness for State Elected Officials: develop and explore ways to strengthen agency relationships with state-elected officials.
- Review staffing issues in the regions and states and recommend solutions.
- Discuss disclosure limitations.

SSA identified these committees as the highest priorities.

Blake Ainsworth, Assistant Director; Richard Harada, Matthew Saradjian, Anjali Tekchandani, Kris Trueblood, James Bennett, Susannah Compton, Alex Galuten, Stuart Kaufman, Wayne Turowski, and Walter Vance made significant contributions to this report.

In 2007, 73 percent of state and local government employees were covered by Social Security. Unlike the private sector, where most employees are covered by Social Security, federal law generally permits each public employer to decide which employees to cover.
The Social Security Administration (SSA) is responsible for facilitating Social Security coverage for these employers through agreements with states. SSA is also responsible for maintaining accurate earnings records, while IRS is responsible for ensuring Social Security taxes are paid. Because of the need to ensure Social Security coverage is administered accurately, GAO was asked to review (1) how SSA works with states to approve Social Security coverage and ensure accurate coverage of public employees, and (2) how IRS identifies incorrect Social Security taxes for public employees. GAO reviewed procedures of federal agencies and selected states; surveyed all state administrators; and reviewed IRS case files. Although SSA approves Social Security coverage on behalf of state and local government employers, it faces challenges in ensuring accurate reporting of Social Security earnings. SSA works with states to establish and amend Social Security coverage agreements, but public employers do not always know that SSA's approval is required. For example, a small fire district in one state reported Social Security wages for more than a decade without approved coverage to do so, not realizing a coverage agreement between SSA and the state was required. While state administrators are responsible for managing the approved coverage agreements for public employers, SSA's guidance does not specify how states should go about fulfilling this responsibility, leading to variation in the extent to which states meet their responsibility. SSA lacks basic data on which public employers have approved coverage and relies on public employers to comply with coverage agreements voluntarily. SSA officials told us that the agency does not use existing information, such as lessons learned from prior coverage errors, to assess the risks that these errors pose to the accuracy of public employer wage reporting. 
IRS conducts compliance checks and examinations of public employers; however, examining Social Security coverage for employees is challenging due to limited data and the difficulties of determining whether employees are covered. To obtain needed data, one IRS field office sent its examiners to the SSA regional office to make copies of Social Security coverage agreements. Some other IRS field offices do not have copies of all their respective agreements. IRS tracks the results of its examinations to identify the number of public employers that need tax adjustments; however, IRS does not track whether the tax adjustments relate to Social Security coverage agreement errors even though this information is available during examinations. SSA could benefit from such information so that it could help public employers identify and correct errors. As a result, IRS's and SSA's ability to fully understand problems related to Social Security coverage is limited. GAO recommends that SSA work with IRS, state administrators, and public employers to improve management oversight and monitoring of public employer reporting of Social Security wages and that SSA clarify its guidance on state administrator responsibilities. GAO also recommends that IRS track errors found through compliance efforts and share results with SSA to the extent permitted by law. SSA and IRS reviewed the report and agreed with the recommendations.
Reclamation has carried out its mission to manage, develop, and protect water and related resources in 17 western states since 1902. The agency has led or provided assistance in constructing most of the large dams and water diversion structures in the West for the purpose of developing water supplies for irrigation, as well as for other purposes, including hydroelectric power generation, municipal and industrial water supplies, recreation, flood control, and fish and wildlife enhancement. Reclamation is organized into five regions, with technical and policy support provided by its central office in Denver. Each regional office oversees the water projects located within its regional boundaries (see fig. 1). The federal statutes authorizing individual water projects and the statutes generally applicable to all water projects—known collectively as reclamation law—govern Reclamation’s water projects. Reclamation law determines how the costs of constructing water projects are allocated and how repayment responsibilities are assigned among the projects’ users. Cost allocation is the process of assigning an equitable share of the total cost to each use in a multipurpose project. Under reclamation law, Reclamation allocates a share of the project’s total construction costs to each of the authorized project purposes based on the proportion of benefits each purpose receives from the project, and the costs allocated to each purpose are deemed to be reimbursable or nonreimbursable. Reimbursable costs are those that are to be repaid by certain water users, including irrigation districts, power, and municipal and industrial water suppliers. Nonreimbursable costs are those that are not repaid by water users and are instead generally borne by the federal government because certain project purposes are viewed by Congress as being national in scope, such as costs allocated to flood control and navigation, fish and wildlife enhancement, and recreation. 
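The allocation rule described above, splitting a project's construction costs across purposes in proportion to the benefits each purpose receives and then classing each share as reimbursable or nonreimbursable, can be sketched as follows. The purposes and dollar amounts are hypothetical, not drawn from any actual project.

```python
# Sketch of proportional cost allocation for a multipurpose water project.
# The allocation rule comes from the report; the project, purposes, and
# dollar amounts below are hypothetical.

def allocate_costs(total_cost, benefits, nonreimbursable_purposes):
    """Allocate total construction cost across purposes in proportion to
    benefits, then split into reimbursable and nonreimbursable shares."""
    total_benefits = sum(benefits.values())
    allocation = {purpose: total_cost * share / total_benefits
                  for purpose, share in benefits.items()}
    reimbursable = {p: c for p, c in allocation.items()
                    if p not in nonreimbursable_purposes}
    nonreimbursable = {p: c for p, c in allocation.items()
                       if p in nonreimbursable_purposes}
    return allocation, reimbursable, nonreimbursable

# Hypothetical $100 million project serving three purposes.
allocation, reimb, nonreimb = allocate_costs(
    total_cost=100_000_000,
    benefits={"irrigation": 50, "power": 30, "flood control": 20},
    nonreimbursable_purposes={"flood control"},
)
```

In this sketch, irrigation and power users would be responsible for repaying their $80 million combined share, while the $20 million allocated to flood control would be borne by the federal government.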
At the time each water project is authorized and designed, Reclamation estimates the total construction costs and allocates these costs among the project uses. Once project construction is completed, and the actual construction costs are determined, Reclamation performs a final construction cost allocation. The cost allocation serves as a basis for the repayment terms in water users’ contracts. The amount of reimbursable costs that water users are responsible for repaying is based on the type of project purpose (see fig. 2). Power and municipal and industrial users are responsible for repaying their allocated share of the construction costs, plus the interest that accrues on those costs during construction and the repayment period. For irrigation districts, however, reclamation law does not require the districts to pay interest on the construction costs allocated to irrigation, resulting in federally subsidized financing for irrigation districts responsible for repayment. In addition, irrigation districts may receive the following two types of financial assistance in repaying their allocated construction costs: Irrigation assistance. The amount of construction costs allocated to irrigation that the Secretary of the Interior determines to be above the irrigation districts’ ability to pay for a given project is repaid from other revenue sources, where available. These other revenues are primarily earned from the sale of power generated by the project (or other related projects), or from the sale of municipal and industrial water, among other revenue sources. Ability-to-pay determinations are based on Reclamation’s financial analysis of a given geographic area, and determinations generally occur before construction begins on a project. Credits. Credits can relieve part or all of irrigation districts’ repayment obligations. 
Types of credits include congressionally authorized repayment reductions, or “charge-offs,” and construction expenses determined to be nonreimbursable. Charge-offs are credits that are often enacted through legislation in response to special circumstances, such as a determination that the land is unproductive, or the settlement of Indian water rights claims. To establish an agreement between the federal government and irrigation districts on the delivery of water from a project and to collect payments, Reclamation generally enters into one of the following two types of contracts with irrigation districts: Repayment contracts: Section 9(d) of the Reclamation Project Act of 1939 authorizes permanent contracts for water delivery with repayment of construction costs allocated to irrigation to be paid in fixed dollar amounts in annual or other regular increments, over a period of up to 40 years, by the irrigation district to Reclamation. Water service contracts: Section 9(e) of the Reclamation Project Act of 1939 authorizes contracts to furnish water for irrigation purposes for up to 40-year periods. Reclamation generally enters into water service contracts with irrigation districts when construction of the water project has not been completed, final construction costs are uncertain, or the irrigation district does not want a permanent contract, among other reasons. By law, Reclamation must charge rates for water delivered under water service contracts that are at least sufficient to cover an appropriate share of fixed charges the Secretary of the Interior deems proper, taking into consideration the construction costs allocated to irrigation, as well as an appropriate share of annual operation and maintenance costs. 
A water service contract can contain a provision providing for its renewal—through negotiations between Reclamation and the irrigation district—once the contract’s term ends, or the contract may contain a provision allowing for its conversion to a repayment contract. Depending on the size of the water project, which varies substantially across projects, Reclamation may have contracts with a number of irrigation districts within that project’s service area. Irrigation districts then enter into separate agreements with landholders to provide project water. For water projects covering a smaller geographic area, Reclamation may have only one or two contracts with irrigation districts for that water project, which provide water to a small number of landholders. On the other hand, for water projects covering a larger area, Reclamation may have contracts with multiple irrigation districts servicing hundreds of landholders within a project. Reclamation collects data on water project construction costs and the status of repayment by irrigation districts, but it has not publicly reported this information since the 1980s. Reclamation’s regional offices collect repayment data annually for each water project with an outstanding construction cost repayment obligation and then compile them in Statements of Project Construction Cost and Repayment (repayment statements). These repayment statements indicate that $1.6 billion of the $6.4 billion in costs allocated to irrigation was outstanding as of the end of fiscal year 2012. It is Reclamation policy to make the repayment statements available to the public upon request, but it could better promote to the public that it prepares repayment statements annually and that these statements are available. Reclamation’s data on water project construction cost repayments indicate that, of the $6.4 billion in costs allocated to irrigation as of the end of fiscal year 2012, $1.6 billion remains outstanding.
Every fiscal year, Reclamation’s five regional offices collect repayment data and compile them in repayment statements for each water project that has construction costs with repayments outstanding. These repayment statements include data on the total construction costs for the water project; the construction costs allocated to each project purpose, including irrigation; repayment information for costs allocated to each project purpose, including the amount irrigation districts have repaid as of the end of the fiscal year; and any financial assistance granted to irrigation districts. Per Reclamation policy, it is optional to prepare repayment statements for the 54 water projects where all water users, including irrigation districts, have repaid their construction cost allocations. Reclamation prepared repayment statements for 43 of these 54 projects in fiscal year 2012, which indicate that the total construction cost for these projects was more than $963.3 million, of which at least $350.5 million was allocated to irrigation. Reclamation did not prepare repayment statements for the other 11 projects or otherwise have construction cost and repayment information readily available. The repayment statements for the 76 water projects with outstanding repayment obligations indicate that the total construction cost for these projects was more than $19.7 billion, of which $6.4 billion was allocated to irrigation (see fig. 3). According to Reclamation’s repayment statements, as of the end of fiscal year 2012, of the $6.4 billion in construction costs allocated to irrigation, the outstanding repayment obligations totaled $1.6 billion—or 25 percent—after accounting for nearly $4.8 billion in repayments made by irrigation districts, other repayments received, and financial assistance to irrigation (see table 1).
Outstanding repayment obligations varied across Reclamation’s regions, from approximately $91.7 million in the Upper Colorado region to more than $1.0 billion in the Mid-Pacific region, which accounted for 64 percent of the total outstanding construction costs allocated to irrigation. Reclamation’s repayment statements as of the end of fiscal year 2012 further show that, of the $1.6 billion outstanding repayment obligation for irrigation, irrigation districts are expected to repay approximately $1.1 billion through repayment or water service contracts. Of the remaining $490.4 million, approximately $287.4 million is expected to be recovered through other revenue sources, such as the sale of surplus project water for irrigation, and roughly $203.0 million is being repaid, pursuant to federal law, a settlement agreement, and a stipulated judgment, by a municipal corporation that operates and maintains the Central Arizona Project. Irrigation districts have repaid nearly $1.4 billion of their allocated costs, primarily through repayment or water service contracts, as of the end of fiscal year 2012, and, according to Reclamation officials, irrigation districts are generally current in their repayments. Across the 76 water projects with outstanding repayment obligations, Reclamation holds 72 repayment contracts for irrigation and 304 water service contracts for irrigation, according to agency officials. We found that, across Reclamation’s regions, the number of water projects with outstanding repayment obligations as of the end of fiscal year 2012 and the types of contracts vary, as described in table 2.
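The repayment figures above can be reconciled with a few lines of arithmetic. The dollar amounts are the rounded figures from the repayment statements as quoted in the text; the variable structure is ours, not Reclamation's, and small residuals reflect rounding in the reported totals.

```python
# Reconciliation of the fiscal year 2012 repayment figures cited above.
# Amounts are in millions of dollars, rounded as reported.

allocated_to_irrigation = 6_400          # construction costs allocated to irrigation
repaid_and_assistance = 4_800            # "nearly $4.8 billion" in repayments and assistance
outstanding = allocated_to_irrigation - repaid_and_assistance   # $1.6 billion

# How the outstanding balance is expected to be recovered:
via_district_contracts = 1_100           # repayment or water service contracts
via_other_revenues = 287.4               # e.g., sales of surplus project water
via_central_arizona = 203.0              # Central Arizona Project operator, per settlement
remainder = via_other_revenues + via_central_arizona            # the $490.4 million
```

The pieces sum to the reported $1.6 billion to within the rounding of the published totals.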
Reclamation has not publicly reported the information it collects on water project construction costs and repayment since the 1980s, and we found that Reclamation does not make it readily known to the public that it prepares repayment statements annually or that they are available. Reclamation officials said that the purpose of the repayment statements is generally for internal management use, such as when the agency is preparing for contract negotiations, or to provide information to certain power users on the amounts of irrigation assistance power may be responsible for paying. Reclamation officials told us they had considered publishing the repayment statements on the agency’s website in the mid-2000s as part of an internal management review, but they decided not to do so. Instead, in 2007, Reclamation developed an internal policy document on the preparation of repayment statements that states that such statements will be provided to any interested party upon request. This policy document is posted on the section of the agency’s website that contains program and administrative policies that apply to Reclamation’s management of its water projects. Information on the availability of the repayment statements is not otherwise posted online or made public. We interviewed staff from legislative branch agencies and several other individuals knowledgeable about Reclamation water projects who indicated that public access to the information contained in the repayment statements would be helpful. Some individuals we interviewed were not aware that Reclamation prepares repayment statements annually, or that the agency would make them available upon request. Several individuals we interviewed indicated that making the repayment statements directly accessible on the agency’s website would be helpful and, in some cases, better inform their work.
For example, a staff member from the Congressional Research Service said that to be able to respond to congressional committee requests in a timely manner, it would be helpful to have repayment information on the agency’s website, similar to information posted online by Reclamation’s Mid-Pacific regional office on its water rates (which are based in part on construction cost allocation and repayment information) for the Central Valley Project. In addition, some individuals noted that, as Reclamation considers modifying or expanding existing water storage capacity or delivery, Congress and others may want to assess information on how costs were allocated and how funding and repayment arrangements were established in the past to inform potential future funding arrangements. For instance, an environmental consultant told us that having repayment information readily accessible for water projects developed in the past would help inform decisions on future funding arrangements and other policy considerations for federal, state, and other parties considering the expansion of a water project in the Pacific Northwest. A senior Reclamation official we interviewed agreed that increasing public awareness that cost allocation and repayment information is available upon request could better position the public to obtain information that could help inform their decision making on related water project issues. In addition, the official stated that there may be additional opportunities to make the public aware of its policy beyond posting the information on the policy section of its website. According to the Office of Management and Budget’s open government directive, the federal government should publish information online about what the government is doing to promote transparency, accountability, and informed participation by the public, and federal agencies should proactively use modern technology to disseminate useful information. 
By further disseminating information to the public that cost allocation and repayment data are available through the repayment statements, Reclamation would promote transparency and potentially increase informed participation by the public. The authority for irrigation districts, or for landholders within those districts, to repay their allocated construction costs early is limited to a small number of districts across Reclamation’s water projects. Based on our analysis, early repayment affects the financial return to the federal government, and it accelerates the elimination of certain restrictions and requirements for landholders that are in place until their repayment obligations are fulfilled, among other things. Reclamation and irrigation district officials told us that early repayment may not appeal to many districts or landholders, but some districts or landholders may be incentivized to seek and exercise the authority to repay early, depending on their particular circumstances. The authority for irrigation districts, or for landholders within those districts, to repay their allocated water project construction costs early—that is, to repay outstanding repayment obligations, either through lump-sum or accelerated payments, in advance of the date specified in the districts’ contracts—is limited. Unless expressly authorized in their contracts or by statute, irrigation districts and landholders are not authorized to repay their construction cost obligations early. According to Reclamation data, of the estimated 585 irrigation districts that had repayment or water service contracts with Reclamation as of December 2013, 87 districts—or about 15 percent—had authority for the district, or for landholders within the district, to repay their construction cost obligations early.
Of those 87 irrigation districts, 69 districts exercised their authority and repaid early, or had some landholders who repaid early, as of December 2013, with early repayments totaling more than $238.9 million, according to Reclamation data. Contractual authority for early repayment is limited because only a small number of contracts that predate the Reclamation Reform Act of 1982—which prohibited new contracts after October 12, 1982, from authorizing early repayment—contain terms expressly authorizing early repayment. Reclamation data indicate that of the 87 irrigation districts with early repayment authority, 55 districts had contracts that authorized landholders to repay their outstanding construction cost obligations early; these districts are located largely in the Pacific Northwest region. Some or all landholders within 39 of those 55 irrigation districts exercised this contractual authority and made early repayments totaling approximately $18.7 million as of December 2013, according to Reclamation data (see app. IV). In addition, we identified seven statutes enacted since 2000 that authorize some irrigation districts—or, in some cases, landholders within those districts—to repay their construction cost obligations early. Specifically, we identified 32 irrigation districts that sought and received statutory authority for early repayment by the district or landholders. Our analysis of Reclamation data shows that, of those 32 irrigation districts with statutory authority, 30 districts repaid early or had some landholders within the district who repaid early, with their early repayments totaling $220.2 million as of December 2013 (see app. IV). Twenty-two irrigation districts that receive water from the Central Valley Project and received statutory authority in 2009 comprised most of those early repayments, totaling nearly $200.1 million.
Early repayment affects the financial return to the federal government and accelerates the elimination of certain restrictions and requirements for landholders that are in place until their repayment obligations are fulfilled. While only a limited number of irrigation districts and landholders have early repayment authority, Congress has considered expanding early repayment authority more broadly, such as to all irrigation districts. Reclamation documents and officials we interviewed indicated that the agency has supported, and would likely continue to support, additional authorization for early repayment, so long as the financial return to the federal government was not negatively affected, but that the unique aspects of most water projects support authorizing early repayment on a case-by-case basis. Reclamation officials and irrigation district officials told us that early repayment may not appeal to many districts or landholders, given that their repayments are otherwise due in fixed, interest-free amounts spread over many years. In addition, some noted, the districts or landholders may not be in a financial position to repay their outstanding repayment obligations on a lump-sum or accelerated basis. On the other hand, as described above, of the 87 irrigation districts that had early repayment authority, most of the districts, or at least some of the landholders within those districts, exercised such authority and repaid their obligations early. Based on our analysis, we found that early repayment more quickly eliminates certain restrictions and requirements for a landholder, which may provide an incentive for the landholder or the district to seek and exercise early repayment authority, depending on their circumstances. Specifically, we found that early repayment has various implications for the federal government, irrigation districts, and landholders, as follows.
Early repayment affects the financial return to the federal government, largely depending on whether a discount may be authorized, such as calculating the present value of the outstanding repayment obligation to determine the amount to be repaid early, and the size of that discount. If no discounts are authorized, any repayments that occur earlier than the due date specified in the contract would be worth more to the government because irrigation districts’ repayments do not bear interest. By receiving lump-sum or accelerated payments early for the outstanding repayment obligations, the government avoids the loss in value that would otherwise occur with repayments made over time. For example, if in 2014 an irrigation district were to make a lump-sum payment of $100,000 that would otherwise be due in annual installments through 2030 (e.g., about $5,882 per year for 17 years), the government would receive that money sooner. Looked at another way, if the irrigation district were to continue making annual repayments over time, rather than repay early, the value to the government of $100,000 paid in full after annual installments ending in 2030 would be approximately $74,220 in 2014 dollars. Reclamation officials told us that in most instances where irrigation districts or landholders exercised their authority to repay early, the early repayment amounts reflected their outstanding repayment obligations, and the agency did not apply any discounts. If early repayment authority provides a discount toward the outstanding repayment obligation, however, the value of the return to the government is reduced compared with repayment of the full outstanding amount. In recent years, a few statutes have granted certain irrigation districts a discount. For example, legislation enacted in 2009 required certain Central Valley Project irrigation districts to repay their outstanding repayment obligations early, at a discount of half the 20-year Treasury rate. 
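The $100,000 illustration above can be reproduced with a short present-value calculation. The report does not state the discount rate behind its $74,220 figure, so the 3.5 percent rate used here is an assumption chosen to land in roughly the same range.

```python
# Present value of level, interest-free annual repayments, as in the
# $100,000 example above. The 3.5 percent discount rate is an assumption
# (the report does not state the rate behind its $74,220 figure).

def present_value(payment, years, rate):
    """PV of a level payment received at the end of each of `years` years."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

annual = 100_000 / 17                      # about $5,882 per year, 2014 through 2030
pv = present_value(annual, years=17, rate=0.035)
# pv comes out roughly $74,000 in 2014 dollars: repayments spread over
# 17 years are worth about a quarter less to the government than a
# lump sum received today.
```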
This discount was intended to offset the irrigation districts’ borrowing costs in obtaining loans to facilitate their early repayments, according to an attorney who represented the districts. In this example, the discount may have incentivized the irrigation districts to repay early, but it also reduced the financial return to the federal government compared with early repayment without a discount. Specifically, Reclamation data indicate that, if no discount had been applied, the early return to the government would have been $236.7 million, rather than the $200.1 million that was repaid based on the discount. On the other hand, if such a discount had not been provided, fewer irrigation districts may have exercised their early repayment authority, and a larger discount would have resulted in a smaller return to the government. Based on past early repayments, some irrigation districts and landholders may be motivated to repay early without a discount, but Reclamation officials told us that they believe some kind of discount would be needed to incentivize many irrigation districts to consider early repayment, were it to be authorized. Under certain scenarios, authorizing a discount could result in early repayment ultimately being worth much less to the federal government compared with repayment of the full outstanding amount. For example, in 2012, the Congressional Budget Office analyzed proposed legislation that would have expanded early repayment authority to all irrigation districts in the Central Valley Project. According to that analysis, the proposed legislation would have permitted early repayments at levels approximating the present value, by applying the 20-year Treasury rate, of the irrigation districts’ outstanding repayment obligations.
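The mechanics of a present-value discount of this kind can be sketched as follows. The 4.1 percent Treasury rate and the 16-year remaining repayment schedule are illustrative assumptions; only the $236.7 million and $200.1 million totals come from the report.

```python
# Sketch of a present-value discount at half the 20-year Treasury rate,
# as in the 2009 Central Valley Project authority described above. The
# 4.1 percent rate and 16-year remaining schedule are assumptions chosen
# for illustration; only the $236.7M/$200.1M totals are from the report.

def discounted_lump_sum(annual_payment, years_left, treasury_rate):
    """Lump sum equal to the present value of the remaining level
    payments, discounted at half the Treasury rate."""
    r = treasury_rate / 2
    return sum(annual_payment / (1 + r) ** t for t in range(1, years_left + 1))

full_obligation = 236.7                    # $ millions, undiscounted
lump = discounted_lump_sum(annual_payment=full_obligation / 16,
                           years_left=16, treasury_rate=0.041)
# Under these assumed terms, lump comes out near the $200.1 million
# actually repaid; the gap between it and $236.7 million is the cost
# of the discount to the federal government.
```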
The Congressional Budget Office estimated that if this legislation were enacted and early repayment authority were exercised by the majority of those irrigation districts, it would result in a net loss of $176 million to the government over the long term (Congressional Budget Office, “Cost Estimate: H.R. 1837 Sacramento-San Joaquin Valley Water Reliability Act” (Washington, D.C.: Feb. 27, 2012)). Until their repayment obligations are fulfilled, landholders are subject to acreage ownership limits and are charged full-cost water rates on land irrigated in excess of the amount subject to the pricing limitations. Full-cost water rates include interest charges on the landholders’ remaining allocated portion of construction costs and can be substantially higher than the subsidized rates charged for acres under the statutory pricing limitations. For instance, officials in one irrigation district told us their full-cost water rates were roughly double the subsidized rates and, in another district, about 30 times higher. Once irrigation districts or landholders have repaid their construction cost obligations in full—whether early, or as scheduled by the terms of their contract—the landholders are no longer subject to these acreage and pricing limitations. As a result, landholders may be able to receive project water, to the extent it is available, on additional land or at a subsidized rate once they have fulfilled their repayment obligations. Reclamation officials told us that any foregone income in future years from full-cost water rates would reduce the return to the federal government associated with early repayment. According to Reclamation officials, the agency collected approximately $146.8 million from January 1988 through December 2013 in full-cost water rates from landholders who were irrigating land in excess of the amount subject to the statutory pricing limitations.
Thus, if early repayment authority were exercised by those landholders, then the loss of full-cost water rate revenue in future years would at least partially offset the return to the government from early repayments. In addition, Reclamation officials and others we interviewed stated that early repayment would allow for the possibility of larger entities receiving project water at subsidized rates on larger landholdings sooner than intended under reclamation law—one of Reclamation’s early goals in developing water projects throughout the western United States was to promote farming opportunities for small, family-owned operations. Other irrigation district officials told us, however, that even though their districts had landholders with excess acres who may be interested in early repayment, the elimination of acreage and pricing limitations would not likely serve as an incentive for the districts as a whole to repay early. For example, one official stated that her district would have to finance a loan to make early repayments on a lump-sum or accelerated basis, which did not make sense compared with making annual, interest-free repayments under the terms of the contract. Early repayment also eliminates annual reporting requirements for landholders earlier than if repayment was made by the due date specified in the contract. Until their construction cost obligations are repaid, landholders are subject to annual reporting requirements to ensure landholders’ compliance with acreage and pricing limitations. According to irrigation district officials and landholders we interviewed, completing these reports can be difficult and time-consuming for landholders and for districts, which must complete a form for Reclamation summarizing the reports submitted by landholders. 
For example, one landholder in Oregon said that it repaid its construction cost obligation early, after receiving statutory authority to do so in 2005, in part to eliminate the need to submit the annual reports. On the other hand, some irrigation district officials told us that while the reporting requirements were burdensome, eliminating the reporting requirements would not be a sufficient reason for the district to repay early, if granted the authority, without other incentives. Early repayment potentially provides irrigation districts with a greater assurance of receiving available project water on a permanent basis. The right to water is generally determined by state law—which varies by state and can be complex—so repayment and water service contracts do not provide a right to water under state law. Under federal reclamation law, however, these contracts give irrigation districts assurance of a specified amount of water from the project’s available water supply, which becomes permanent upon completion of repayment of the construction costs allocated to the districts. Securing a permanent right to project water in a geographic area where water supply is uncertain was a key motivation in the Central Valley Project irrigation districts’ desire to convert their contracts and repay their construction costs early, according to an attorney who represented those districts in pursuing and receiving such authority in 2009. In addition, for irrigation districts that receive and exercise authority to convert their water service contracts to repayment contracts and repay early, the need for Reclamation and the districts to renegotiate water service contracts when they expire is eliminated, according to agency officials.
Reclamation officials and the attorney representing the Central Valley Project irrigation districts told us that renegotiating the terms of water service contracts can be time-consuming and unpredictable for landholders and their agricultural businesses and, therefore, repayment contracts may be preferable over water service contracts. On the other hand, the agency’s flexibility for responding to water shortages, drought, and climate change-related issues could be limited as a result of fixing the amount of water an irrigation district receives under a repayment contract in perpetuity, according to a statement made by Reclamation’s Commissioner in 2011. With population, agricultural production, and development in the West projected to continue to increase, Reclamation may be called upon to modify or expand existing capacity for water storage or delivery. In considering potential new work and affiliated funding arrangements, Congress, as well as water users and the public, may benefit from evaluating information on past water projects. In particular, Congress and others may want to assess information on how costs were allocated and how funding and repayment arrangements were established among various water users in the past. Reclamation compiles such information in the repayment statements it prepares annually for each water project with outstanding repayment obligations. However, Reclamation does not make it readily known to the public that this information is available upon request. By further disseminating information to the public that construction cost and repayment data are available, Reclamation may increase interested parties’ opportunities to obtain cost and repayment information, and Reclamation would promote transparency and potentially increase informed participation by the public. 
This, in turn, could further enable Congress, water users, and the public to assess past funding arrangements and enhance their ability to make informed decisions for funding potential new work, such as to expand water storage capacity. Consistent with Reclamation’s policy to make construction cost repayment statements available to the public upon request, and to promote transparency and increase informed participation by Congress, water users, and the public, the Secretary of the Interior should direct Reclamation to better promote to the public that annual statements of project construction cost and repayment are available. We provided a draft of this report to the Department of the Interior for review and comment. On August 13, 2014, the department’s audit liaison indicated in an e-mail that the department concurred with the recommendation and did not have any other comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
This appendix provides information on the scope of our work and the methodology used to examine (1) the extent to which Reclamation collects and reports information on water project construction costs and the status of repayment by irrigation districts and (2) the extent to which irrigation districts can repay their allocated water project construction costs early and the implications of early repayment. In conducting our work, we reviewed the Reclamation Act of 1902, the Reclamation Project Act of 1939, the Reclamation Reform Act of 1982, and other relevant laws. We reviewed Reclamation policies and directives and other Reclamation documents on water project construction cost allocation, repayment, and early repayment of construction costs. We also reviewed our July 1996 report on the status of construction cost allocations and repayments. In addition, we conducted interviews with knowledgeable Reclamation officials at the agency’s central office in Denver, Colorado, and all five regional offices (Great Plains, Lower Colorado, Mid-Pacific, Pacific Northwest, and Upper Colorado) about issues related to the status of repayment and early repayment. For our interviews with officials from each of the regional offices, we developed a standard list of open-ended questions to obtain information and documentation on the water project construction cost allocation and repayment information the offices maintain, the use and availability of this information to the public, and the opportunities for and potential implications of early repayment, among other things. To determine the extent to which Reclamation collects and reports information on water project construction costs and the status of repayment by irrigation districts, we analyzed Reclamation’s Statements of Project Construction Cost and Repayment (repayment statements) for fiscal year 2012, the most current data available at the time of our review. 
Reclamation provided repayment statements for 76 projects with outstanding repayment obligations with irrigation districts and for 43 of the 54 projects with irrigation as a purpose that no longer had outstanding obligations with irrigation districts. (Reclamation policy calls for repayment statements to be prepared annually for all water projects with construction cost repayments outstanding. This policy does not apply to water projects where all water users, including irrigation districts, have repaid their construction cost allocations; per Reclamation policy, preparing repayment statements for these projects is optional.) The data contained in repayment statements are generally tied to audited accounting records. The repayment statements are prepared annually by the regional offices for each water project that has construction costs allocated to one or more water users with an outstanding repayment obligation. The repayment statements contain information on total costs for the water project, including construction costs incurred as of the end of the fiscal year, estimated future construction costs, and other costs that Reclamation includes in its repayment analysis for construction costs, such as capitalized operation and maintenance costs; the allocation of construction costs among project purposes, including irrigation; and the status of repayment for costs allocated to each project purpose, including repayment realized, anticipated future repayment, and any financial assistance granted to irrigation districts, such as credits, which relieve water users from a portion of their allocated repayment obligations. To analyze and interpret the data contained in the repayment statements, we relied, in part, on the relevant financial standards section on repayment statements in the Reclamation Manual, which provides guidance on the content and format for repayment statements. 
When the data in a repayment statement included estimated future construction costs, we subtracted these estimated costs from the projects’ total costs because such costs have not yet been, and in some cases may never be, incurred. To assess the reliability of Reclamation repayment data, we took steps such as reviewing the guidance for developing repayment statements in the Reclamation Manual; interviewing Reclamation officials from all five regional offices who were involved in preparing the repayment statements, as well as officials from Reclamation’s central finance office in Denver; identifying the sources of data included in the repayment statements and the agency’s review process; and following up with Reclamation officials to obtain clarifying information in instances where we identified discrepancies in the data. On the basis of these steps, we found the repayment statements to be sufficiently reliable for the purposes of this report. We reviewed information from each of the five regional offices on the number of repayment and water service contracts in their respective regions where irrigation districts were making repayments on their allocated construction cost obligations, as of July 2014, for the Lower Colorado, Mid-Pacific, Pacific Northwest, and Upper Colorado regions and, as of November 2013, for the Great Plains region. To assess the reliability of the data provided by the regions concerning the number of contracts of each type, we asked Reclamation officials a standard set of questions concerning the reliability of the data and reviewed corresponding documentation, and we found the data sufficiently reliable for the purposes of our report. 
We also reviewed Reclamation’s policies and practices on making cost allocation and repayment information—specifically, its repayment statements—available to the public, as well as the Office of Management and Budget’s open government directive and associated documentation related to ensuring the transparency of government information to the public. To examine the extent to which irrigation districts can repay their allocated water project construction costs early, and the implications of early repayment, we reviewed applicable laws, policies, and other relevant documents. We also collected data from Reclamation’s regional offices on irrigation districts that have contractual or statutory authority to repay early, districts that have exercised such authority, and the dates and amounts of early repayments through December 2013. To assess the reliability of the data provided by Reclamation concerning early repayment, we asked Reclamation officials a standard set of questions concerning the reliability of the data and reviewed corresponding documentation, and we found the data sufficiently reliable for the purposes of our report. In addition, we conducted legal research to identify statutes that provide irrigation districts with the authority to repay their construction cost obligations early. To help identify the implications of early repayment, we reviewed the Reclamation Reform Act of 1982 and other laws and regulations that establish acreage and pricing limitations and reporting requirements for landholders until their repayment obligations are fulfilled. We also reviewed testimonies and a statement for the record by Reclamation on draft legislation that would have authorized early repayment for additional irrigation districts, and we reviewed Congressional Budget Office cost estimates of various bills since 2005 that proposed expanding early repayment authority to certain irrigation districts. 
For both objectives, we conducted interviews with officials from a nonprobability sample of eight irrigation districts and two landholders from five water projects located in California, Nebraska, Oregon, and Wyoming to collect information on the repayment of construction costs and related issues. We selected these irrigation districts and landholders using criteria such as the type of contracts the districts held with Reclamation (repayment or water service contracts), their status of repayment, and whether or not the districts had early repayment authority. We also interviewed a nonprobability sample of nine individuals knowledgeable about Reclamation water projects on the status of repayments, early repayment authority, or both. Using the “snowball sampling” technique, we identified these individuals by asking those we had previously interviewed for referrals to others knowledgeable about Reclamation water projects and their repayment. Specifically, we interviewed staff from the Congressional Research Service and Congressional Budget Office, attorneys who have represented irrigation districts pursuing enactment of legislation authorizing early repayment, an attorney who has represented environmental organizations in litigation concerning Reclamation water projects, an environmental consultant, former congressional staff, and officials from the Family Farm Alliance and Taxpayers for Common Sense. We conducted this performance audit from June 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The following two tables provide information on construction cost allocations by project purpose (table 3) and repayment status of construction costs allocated to irrigation (table 4) for 54 Bureau of Reclamation water projects for which irrigation districts have fulfilled their repayment obligations, as of the end of fiscal year 2012. The following two tables provide information on construction cost allocations by project purpose (table 5) and repayment status of construction costs allocated to irrigation (table 6) for 76 Bureau of Reclamation water projects with ongoing repayments by irrigation districts, as of the end of fiscal year 2012. The following two tables provide information on irrigation districts with contractual authority for landholders to repay their outstanding construction cost obligations early, and early repayments made (table 7) and irrigation districts with statutory authority for the districts or landholders to repay their outstanding construction cost obligations early, and early repayments made (table 8), as of December 2013. In addition to the individual named above, Alyssa M. Hundrup (Assistant Director), Josey Ballenger, Marya Link, and Jeanette Soares made key contributions to this report. Stephen Brown, Cindy Gilbert, Paul Kinney, and Alison O’Neill also provided assistance.

Since 1902, Reclamation has financed and built water projects to provide water for irrigation and various other uses in 17 western states. The costs to construct the water projects including irrigation as a project purpose—a combined total of more than $20 billion—were primarily financed by the federal government, but irrigation districts and other water users that receive project water are obligated to repay the government for their allocated share of construction costs. Reclamation typically enters into multiyear contracts with irrigation districts that establish water delivery and repayment of their share of construction costs over time. 
GAO was asked to provide information on the status of irrigation repayments. This report examines (1) the extent to which Reclamation collects and reports information on construction costs and the status of repayment and (2) the extent to which irrigation districts can repay early and the implications of early repayment. GAO reviewed laws and policies and fiscal year 2012 construction cost repayment and early repayment data, and it interviewed Reclamation officials and nonprobability samples of eight irrigation districts and nine individuals knowledgeable about water projects. The Department of the Interior's Bureau of Reclamation collects information on water project construction costs and the status of repayment by irrigation districts—entities that have entered into contracts with the agency to receive project water for irrigation purposes—but has not publicly reported repayment information since the 1980s. Reclamation's data on water project construction cost repayments indicate that, of the $6.4 billion in costs allocated to irrigation as of the end of fiscal year 2012, $1.6 billion remains outstanding. The remaining $4.8 billion has been repaid by irrigation districts or through other revenue sources or will be provided in financial assistance to the districts. Reclamation's policy is to make the statements it prepares annually on repayment available to the public upon request, but the agency does not make it readily known to the public that it prepares these statements or that they are available. GAO interviewed individuals knowledgeable about Reclamation water projects who indicated that this information would be useful for their work, such as in considering funding arrangements for the expansion of water projects; some individuals were not aware that Reclamation prepares repayment statements annually, or that the agency would make them available upon request. 
By more widely disseminating information to the public that construction cost and repayment data are available, Reclamation may increase interested parties' opportunities to obtain cost and repayment information. This, in turn, could further enable Congress, water users, and the public to assess past funding arrangements and enhance their ability to make informed decisions for funding potential new work, such as to expand water storage capacity. The authority for irrigation districts—or for landholders who own or lease land for agricultural purposes within those districts—to repay their allocated share of construction costs early is limited to a small number of districts, and its use has various financial and other implications. Early repayment authority allows irrigation districts or landholders to repay their total outstanding repayment obligations in advance of the date specified in the districts' contracts. As of December 2013, 87 irrigation districts—representing about 15 percent of all districts with contracts—had authority for the district or its landholders to repay early. Of those authorized, 69 irrigation districts either repaid early, or had some landholders who repaid early, with those payments totaling more than $238.9 million. GAO found that early repayment's effect on the financial return to the federal government largely depends on whether a discount may be authorized, such as calculating the present value of the outstanding repayment obligation to determine the amount to be repaid early, and the size of that discount. If no discounts are authorized, any early repayments that occur would be worth more to the government because the repayments do not bear interest. In addition, early repayment accelerates the elimination of certain restrictions and requirements for landholders that are in place until their repayment obligation is fulfilled. 
For example, once landholders have fully repaid their construction cost obligations, they are no longer subject to acreage limits on the amount of land they can own or lease for agricultural purposes and irrigate with project water and may be able to receive project water on additional land. GAO recommends that Reclamation better promote to the public that information on water projects' construction costs and repayment status is available. The Department of the Interior concurred with the recommendation.
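The present-value discount mechanic that GAO describes can be illustrated with a short calculation. The Python sketch below uses purely hypothetical figures (the $1 million balance, 10-year schedule, and 5 percent discount rate are assumptions for illustration, not values from the report) to show why a discounted lump sum falls below the nominal outstanding balance when, as with these irrigation obligations, the installments bear no interest.

```python
def present_value(remaining_payments, rate):
    """Discount a schedule of future repayment installments to today.

    remaining_payments: list of (years_from_now, amount) tuples
    rate: annual discount rate (e.g., 0.05 for 5 percent)
    """
    return sum(amount / (1 + rate) ** years for years, amount in remaining_payments)

# Hypothetical example: $1,000,000 outstanding, due in 10 equal
# interest-free installments of $100,000 over the next 10 years.
schedule = [(year, 100_000) for year in range(1, 11)]

nominal = sum(amount for _, amount in schedule)
discounted = present_value(schedule, 0.05)

# Because the installments bear no interest, discounting shrinks the
# early-repayment lump sum below the nominal balance; the size of that
# gap is the "discount" that drives the financial return to the government.
print(f"nominal outstanding:    ${nominal:,.0f}")
print(f"present-value lump sum: ${discounted:,.2f}")
```

Under these assumed figures the lump sum is roughly $772,000 against a $1 million nominal balance, which is why, as the report notes, early repayments made with no discount authorized are worth more to the government than the same obligations repaid on schedule.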
US-VISIT is a large, complex governmentwide program intended to collect, maintain, and share information on certain foreign nationals who enter and exit the United States; identify foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detect fraudulent travel documents, verify visitor identity, and determine visitor admissibility through the use of biometrics (digital fingerprints and a digital photograph); and facilitate information sharing and coordination within the immigration and border management community. The US-VISIT Program Office has responsibility for managing the acquisition, deployment, operation, and sustainment of US-VISIT and has been delivering US-VISIT capability incrementally based, in part, on statutory deadlines for implementing specific portions of US-VISIT. For example, the statutory deadline for implementing US-VISIT at the 50 busiest land POEs was December 31, 2004, and at the remaining POEs, December 31, 2005. From fiscal year 2003 through fiscal year 2007, total funding for the US-VISIT program has been about $1.7 billion. According to program officials, as of January 31, 2007, almost $1.3 billion has been obligated to acquire, develop, deploy, enhance, operate, and maintain US-VISIT entry capabilities, and to test and evaluate exit capability options. Since 2003, DHS has planned to deliver US-VISIT capability in four increments: Increment 1 (air and sea entry and exit), Increment 2 (air, sea, and land entry and exit), Increment 3 (land entry), and Increment 4, which is to define, design, build, and implement more strategic program capability, and which program officials stated will consist of a series of incremental releases or mission capability enhancements that will support business outcomes. 
In Increments 1 through 3, the program has built interfaces among existing (“legacy”) systems, enhanced the capabilities of these systems, and deployed these capabilities to air, sea, and land POEs. The capabilities that DHS currently has regarding the first three increments have been largely acquired and implemented through existing system contracts and task orders. In reports on US-VISIT over the last several years, we have identified numerous challenges that DHS faces in delivering program capabilities and benefits on time and within budget. In September 2003, we reported that the US-VISIT program is a risky endeavor, both because of the type of program it is (large, complex, and potentially costly) and because of the way that it was being managed. We reported, for example, that the program’s acquisition management process had not been established, and that US-VISIT lacked a governance structure. In March 2004, we testified that DHS faces a major challenge maintaining border security while still welcoming visitors. Preventing the entry of persons who pose a threat to the United States cannot be guaranteed, and the missed entry of just one can have severe consequences. Also, US-VISIT is to achieve the important law enforcement goal of identifying those who overstay or otherwise violate the terms of their visas. Complicating the achievement of these security and law enforcement goals are other key US-VISIT goals: facilitating trade and travel through POEs and providing for enforcement of U.S. privacy laws and regulations. Subsequently, in May 2004, we reported that DHS had not employed the kind of rigorous and disciplined management controls typically associated with successful programs. Moreover, in February 2006, we reported that while DHS had taken steps to implement most of the recommendations from our 2003 and 2004 reports, progress in critical areas had been slow. 
As of February 2006, of 18 recommendations we made since 2003, only 2 had been fully implemented, 11 had been partially implemented, and 5 were in the process of being implemented, although the extent to which they would be fully carried out was not yet known. In addition, in June 2006, we reported that US-VISIT contract and financial management needed to be strengthened; in December 2006, we reported that the US-VISIT program faced strategic, operational, and technological challenges at land ports of entry; and in February 2007, we reported that planned expenditures for the US-VISIT program needed to be adequately defined and justified. Currently, US-VISIT’s scope includes the pre-entry, entry, status, and exit of hundreds of millions of foreign national travelers who enter and leave the United States at over 300 air, sea, and land POEs. Most land border crossers—including U.S. citizens, lawful permanent residents, and most Canadian and Mexican citizens—are, by regulation or statute, not required to enroll into US-VISIT. In fiscal year 2004, for example, U.S. citizens and lawful permanent residents constituted about 57 percent of land border crossers; Canadian and Mexican citizens constituted about 41 percent; and less than 2 percent were US-VISIT enrollees. Figure 1 shows the number and percentage of persons processed under US-VISIT as a percentage of all border crossings at land, air, and sea POEs in fiscal year 2004. Foreign nationals subject to US-VISIT who intend to enter the country encounter different inspection processes at different types of POEs depending on their mode of travel. Those who intend to enter the United States at an air or sea POE are to be processed, for purposes of US-VISIT, in the primary inspection area upon arrival. Generally, these visitors are subject to prescreening, before they arrive, via passenger manifests, which are forwarded to CBP by commercial air or sea carrier in advance of arrival. 
By contrast, foreign nationals intending to enter the United States at land POEs are generally not subject to prescreening because they arrive in private vehicles or on foot and there is no manifest to record their pending arrival. Thus, when foreign nationals subject to US-VISIT arrive at a land POE in vehicles, they initially enter the primary inspection area where CBP officers, often located in booths, are to visually inspect travel documents and query the visitors about such matters as their place of birth and proposed destination. Visitors arriving as pedestrians enter an equivalent primary inspection area, generally inside a CBP building. If the CBP officer believes a more detailed inspection is needed or if the visitors are required to be processed under US-VISIT, the visitors are to be referred to the secondary inspection area—an area away from the primary inspection area—which is generally inside a facility. The secondary inspection area inside the facility generally contains office space, waiting areas, and space to process visitors, including US-VISIT enrollees. Equipment used for US-VISIT processing includes a computer, printer, digital camera, and a two-fingerprint scanner. Visitors covered by US-VISIT who are determined to be admissible are issued an I-94 arrival/departure form, which, among other things, records their date of arrival and the date their authorized period of admission expires. The US-VISIT program office has largely met its expectations relative to a biometric entry capability. For example, on January 5, 2004, it deployed and began operating most aspects of its planned biometric entry capability at 115 airports and 14 seaports for selected foreign nationals, including those from visa waiver countries; as of December 2006, the program office had deployed and began operating this entry capability in the secondary inspection areas of 154 of 170 land POEs. 
According to program officials, 14 of the remaining 16 POEs have no operational need to deploy US-VISIT because visitors who are required to be processed through US-VISIT are, by regulation, not authorized to enter into the United States at these locations. The other two POEs do not have entry capability deployed because they do not have the necessary transmission lines to operate US-VISIT; CBP officers at those sites have continued to process visitors manually. CBP officials told us that US-VISIT’s entry capability has generally enhanced their ability to process visitors subject to US-VISIT by providing assurance that visitors’ identities can be confirmed through biometric identifiers and by automating the paperwork associated with processing I-94 arrival/departure forms. To the department’s credit, the development and deployment of this entry capability was largely in accordance with legislative time lines and has occurred during a period of considerable organizational change, starting with the creation of DHS from 23 separate agencies in early 2003, followed by the establishment of a US-VISIT program office shortly thereafter—which was only about 5 months before the program had to meet its first legislative milestone. Compounding these program challenges was the fact that the systems that were to be used in building and deploying a biometric entry capability were managed and operated by a number of the separate agencies that had been merged to form the new department, each of which was governed by different policies, procedures, and standards. Moreover, DHS reports that US-VISIT entry capabilities have produced results. According to US-VISIT's Consolidated Weekly Summary Report, as of December 28, 2006, there have been more than 5,400 biometric hits in primary entry, resulting in more than 1,300 people having adverse actions, such as denial of entry, taken against them. 
According to the report, about 4,100 of these hits occurred at air and sea ports of entry and over 1,300 at land ports of entry. Further, the report indicates that more than 1,800 biometric hits have been referred to DHS's immigration enforcement unit, resulting in 293 arrests. We did not verify the information in the consolidated report. Another potential consequence, although difficult to demonstrate, is the deterrent effect of having an operational entry capability. Although deterrence is not an expressly stated goal of the program, officials have cited it as a potential byproduct of having a publicized capability at the border to screen entry on the basis of identity verification and matching against watch lists of known and suspected terrorists. Accordingly, the deterrent potential of the knowledge that unwanted entry may be thwarted and the perpetrators caught is arguably a layer of security that should not be overlooked. Despite these results, US-VISIT’s entry capability at land POEs has not been without operational and system performance problems. During recent visits to land POEs, we identified some space constraints and other capacity issues. For example, at the Nogales-Morley Gate POE in Arizona, where up to 6,000 visitors are processed daily (and up to 10,000 on holidays), equipment was installed but not used because of CBP concerns about its ability to carry out the US-VISIT process in a constrained space while thousands of other people not subject to US-VISIT are processed through the facility daily. Thus, visitors who are to be processed into US-VISIT from Morley Gate are directed to return to Mexico (a few feet away) and to walk approximately 100 yards to the Nogales-DeConcini POE facility, which has the capability to handle secondary inspections of this kind. 
Going forward, DHS plans to introduce changes and enhancements to US-VISIT at land POEs intended to further bolster CBP’s ability to verify the identity of individuals entering the country, including a transition from digitally scanning 2 fingerprints to scanning 10. While such changes are intended to further enhance border security, deploying them may have an impact on aging and space-constrained land POE facilities because they could increase inspection times and adversely affect POE operations. Our site visits, interviews with US-VISIT and CBP officials, and the work of others suggest that both before and after US-VISIT entry capability was installed at land POEs, these facilities faced a number of challenges—operational and physical—including space constraints complicated by the logistics of processing high volumes of visitors and associated traffic congestion. Moreover, our work over the past 3 years showed that the US-VISIT program office had not taken necessary steps to help ensure that US-VISIT entry capability operates as intended. For example, in February 2006 we reported that the approach taken by the US-VISIT Program Office to evaluate the impact of US-VISIT on land POE facilities focused on changes in I-94 processing time at 5 POEs and did not examine other operational factors, such as US-VISIT’s impact on physical facilities or work force requirements. As a result, program officials did not always have the information they needed to anticipate problems that occurred, such as problems processing high volumes of visitors in space-constrained facilities. In addition, we found that management controls did not always alert US-VISIT and CBP to operational problems. 
Our standards for internal controls in the federal government state that it is important for agencies to have controls in place to help ensure that policies and procedures are applied and that managers be made aware of problems so that they can be addressed and resolved in a timely fashion. CBP officials at 12 of 21 land POE sites we visited told us about US-VISIT-related computer slowdowns and freezes that adversely affected visitor processing and inspection times, and at 9 of the 12 sites, computer processing problems were not always reported to CBP’s computer help desk, as required by CBP guidelines. Although various controls are in place to alert US-VISIT and CBP officials to problems as they occur, these controls did not alert officials to all problems, given that they had been unaware of the problems we identified before we brought them to their attention. These computer processing problems have the potential to not only inconvenience travelers because of the increased time needed to complete the inspection process, but to compromise security, particularly if CBP officers are unable to perform biometric checks—one of the critical reasons US-VISIT was installed at POEs. Our internal control standards also call for agencies to establish performance measures throughout the organization so that actual performance can be compared to expected results. While the US-VISIT Program Office established performance measures for fiscal years 2005 and 2006 intended to gauge performance of various aspects of US-VISIT at air, sea, and land POEs in the aggregate, performance measures specifically for land POEs had not been developed. It is important to do so, given that there are significant operational and facility differences among these different types of POEs. 
Additional performance measures that consider operational and facility differences at land POEs would put US-VISIT program officials in a better position to identify problems, trends, and areas needing improvements. DHS has devoted considerable time and resources toward establishing an operational exit capability. Over the last 4 years, it has committed over $160 million to pilot test and evaluate an exit solution at 12 air, 2 sea, and 5 land POEs. Despite this considerable investment of time and resources, the US-VISIT program still does not have either an operational exit capability or a viable exit solution to deploy to all air, sea, and land POEs. Although US-VISIT is pilot testing a biometric exit capability for air and sea POEs, it is not currently available at all ports. In January 2004, devices for collecting biometric data were deployed to one airport and one seaport on a pilot basis. Subsequently, this pilot was expanded to 12 airports and 2 seaports. The pilot tested several exit alternatives, including an enhanced kiosk (a self-service device that captures a digital photograph and fingerprint, and prints out an encoded receipt), a mobile device (a hand-held device operated by a workstation attendant that captures a digital photograph and fingerprint), and a validator (a hand-held device operated by a workstation attendant that captures a digital photograph and fingerprint and then matches the captured photograph and fingerprint to the ones originally captured via the kiosk and encoded in the receipt). Each alternative required the traveler to comply with inspection processes. The pilot was completed in May 2005, and established the technical feasibility of a biometric exit solution. However, it identified issues that limited the operational effectiveness of the solution, such as the lack of traveler compliance with the processes. The fiscal year 2006 expenditure plan allocated $33.5 million to continue the exit pilots for air and sea POEs.
According to program officials, US-VISIT is now developing a plan for deploying a comprehensive, affordable exit solution. However, no time frame has been established for approving or implementing this plan. Meanwhile, US-VISIT plans to conduct a second pilot phase at air and sea POEs that will involve multiple operational scenarios intended to compel greater traveler compliance, such as repositioning the kiosks, integrating biometric exit into airport check-in processes, integrating biometric exit into existing airline processes, integrating biometric exit into Transportation Security Administration screening checkpoints, and enhancing the use of Immigration and Customs Enforcement programs intended for enforcement, such as screening of targeted flights at selected airports. Various factors have prevented US-VISIT from implementing a biometric exit capability at land POEs. Federal laws require the creation of a US-VISIT exit capability using biometric verification methods to ensure that the identity of visitors leaving the country can be matched biometrically against their entry records. However, according to officials at the US-VISIT Program Office and CBP and US-VISIT program documentation, interrelated logistical, technological, and infrastructure constraints have precluded DHS from achieving this mandate, and there are cost factors related to the feasibility of implementing such a solution. The major constraint to performing biometric verification upon exit at this time, in the US-VISIT Program Office's view, is that the only proven technology available would necessitate mirroring the processes currently in use for US-VISIT at entry.
A mirror image system for exit would, like one for entry, require CBP officers at land POEs to examine the travel documents of those leaving the country, take fingerprints, compare visitors’ facial features to photographs, and, if questions about identity arise, direct the departing visitor to secondary inspection for additional questioning. These steps would be carried out for exiting pedestrians as well as for persons exiting in vehicles. The US-VISIT Program Office concluded in January 2005 that the mirror-imaging solution was “an infeasible alternative for numerous reasons, including but not limited to, the additional staffing demands, new infrastructure requirements, and potential trade and commerce impacts.” US-VISIT officials told us that they anticipated that a biometric exit process mirroring that used for entry could result in delays at land POEs with heavy daily volumes of visitors. And they stated that in order to implement a mirror image biometric exit capability, additional lanes for exiting vehicles and additional inspection booths and staff would be needed, though they had not determined precisely how many. According to these officials, it is unclear how new traffic lanes and new facilities could be built at land POEs where space constraints already exist, such as those in congested urban areas. (For example, San Ysidro, California, currently has 24 entry lanes, each with its own staffed booth and 6 unstaffed exit lanes. Thus, if full biometric exit capability were implemented using a mirror image approach, San Ysidro’s current capacity of 6 exit lanes would have to be expanded to 24 exit lanes.) As shown in figure 3, based on observations during our site visit to the San Ysidro POE, the facility is surrounded by dense urban infrastructure, leaving little, if any, room to expand in place. 
Some of the 24 entry lanes for vehicle traffic heading northward from Mexico into the United States appear in the bottom left portion of the photograph, where vehicles are shown waiting to approach primary inspection at the facility; the 6 exit lanes (traffic toward Mexico), which do not have fixed inspection facilities, are at the upper left. Other POE facilities are similarly space-constrained. At the POE at Nogales-DeConcini, Arizona, for example, we observed that the facility is bordered by railroad tracks, a parking lot, and industrial or commercial buildings. In addition, CBP has identified space constraints at some rural POEs. For example, the Thousand Islands Bridge POE at Alexandria Bay, New York, is situated in what POE officials described as a "geological bowl," with tall rock outcroppings potentially hindering the ability to expand facilities at the current location. Officials told us that in order to accommodate existing and anticipated traffic volume upon entry, they are in the early stages of planning to build an entirely new POE on a hill about a half-mile south of the present facility. CBP officials at the Blaine-Peace Arch POE in Washington state said that CBP also is considering whether to relocate and expand the POE facility, within the next 5 to 10 years, to better handle existing and projected traffic volume. According to US-VISIT program officials, none of the plans for any expanded, renovated, or relocated POE include a mirror image addition of exit lanes or facilities comparable to those existing for entry. In 2003, the US-VISIT Program Office estimated that it would cost approximately $3 billion to implement US-VISIT entry and exit capability at land POEs where US-VISIT was likely to be installed and that such an effort would have a major impact on facility infrastructure at land POEs. We did not assess the reliability of the 2003 estimate.
The cost estimate did not separately break out costs for entry and exit construction, but did factor in the cost of building additional exit vehicle lanes and booths, as well as buildings and other infrastructure, that would be required to mirror at exit the capabilities required for entry processing. US-VISIT program officials told us that they provided this estimate to congressional staff during a briefing, but that the reaction to this projected cost was negative and that they therefore did not move ahead with this option. No subsequent cost estimate updates had been prepared, and DHS's annual budget requests have not included funds to build the infrastructure that would be associated with the required facilities. US-VISIT officials stated that they believe that technological advances over the next 5 to 10 years will make it possible to utilize alternative technologies that provide biometric verification of persons exiting the country without major changes to facility infrastructure and without requiring those exiting to stop and/or exit their vehicles, thereby precluding traffic backup, congestion, and resulting delays. US-VISIT's report assessing biometric alternatives noted that although limitations in technology currently preclude the use of biometric identification because visitors would have to be stopped, the use of the as-yet-undeveloped biometric verification technology supports the long-term vision of the US-VISIT program. However, no such technology or device currently exists that would not have a major impact on facilities. The prospects for its development, manufacture, deployment, and reliable utilization are currently uncertain or unknown, although a prototype device that would permit a fingerprint to be read remotely without requiring the visitor to come to a full stop is under development.
While logistical, technical, and cost constraints may prevent implementation of a biometrically based exit technology for US-VISIT at this time, it is important to note that there currently is no legislatively mandated date for implementation of such a solution. The Intelligence Reform and Terrorism Prevention Act of 2004 requires US-VISIT to collect biometric exit data from all individuals who are required to provide biometric entry data. The act did not, however, set a deadline for this collection. Although US-VISIT had set a December 2007 deadline for implementing exit capability at the 50 busiest land POEs, US-VISIT has since determined that implementing exit capability by this date is no longer feasible, and a new date for doing so has not been set. US-VISIT has tested nonbiometric technology to record travelers' departure, but testing showed numerous performance and reliability problems. Because there is at present no biometric technology that can be used to verify a traveler's exit from the country at land POEs without also making major and costly changes to POE infrastructure and facilities, US-VISIT tested radio frequency identification (RFID) technology as a nonbiometric means of recording visitors as they exit. RFID technology can be used to electronically identify and gather information contained on a tag—in this case, a unique identifying number embedded in a tag on a visitor's arrival/departure form—which an electronic reader at the POE is intended to detect. While RFID technology required few facility and infrastructure changes, US-VISIT's testing and analysis at five land POEs at the northern and southern borders identified numerous performance and reliability problems, such as the failure of RFID readers to detect a majority of travelers' tags during testing.
For example, according to US-VISIT, at the Blaine-Pacific Highway test site, of 166 vehicles tested during a 1-week period, RFID readers correctly identified 14 percent—a sizable departure from the target read rate of 70 percent. Another problem that arose was that of cross-reads, in which multiple RFID readers installed on poles or structures over roads, called gantries, picked up information from the same visitor, regardless of whether the individual was entering or exiting in a vehicle or on foot. Thus, cross-reads resulted in inaccurate record keeping. Even if RFID deficiencies were to be fully addressed and deadlines set, questions remain. For example, the RFID solution did not meet the congressional requirement for a biometric exit capability because the technology that had been tested cannot meet a key goal of US-VISIT—ensuring that visitors who enter the country are the same ones who leave. By design, an RFID tag embedded in an I-94 arrival/departure form cannot provide the biometric identity-matching capability that is envisioned as part of a comprehensive entry/exit border security system using biometric identifiers for tracking overstays and others entering, exiting, and re-entering the country. Specifically, the RFID tag in the I-94 form cannot be physically tied to an individual. This situation means that while a document may be detected as leaving the country, the person to whom it was issued at time of entry may be somewhere else. DHS was to have reported to Congress by June 2005 on how the agency intended to fully implement a biometric entry/exit program. In February 2007, US-VISIT officials told us that this plan had been forwarded to the Office of Management and Budget (OMB) for review.
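The two RFID failure modes described above, missed reads and cross-reads, can be illustrated with a small sketch. The tag IDs, reader names, and read log below are invented for illustration; only the 166-vehicle sample size and the 70 percent target read rate come from the test results cited above.

```python
from collections import Counter

# Hypothetical read log from an exit test: (tag_id, reader_id) pairs.
reads = [
    ("I94-001", "exit-gantry-1"),
    ("I94-002", "exit-gantry-1"),
    ("I94-002", "entry-gantry-3"),  # cross-read: also picked up by an entry reader
    ("I94-005", "exit-gantry-2"),
]
vehicles_tested = 166
target_read_rate = 0.70

# Read rate: share of tested vehicles whose tag was detected at all.
unique_tags_read = len({tag for tag, _ in reads})
read_rate = unique_tags_read / vehicles_tested
print(f"read rate {read_rate:.1%} vs target {target_read_rate:.0%}")

# Cross-reads: tags detected by more than one reader, which produced
# inaccurate entry/exit records in the pilot.
tag_counts = Counter(tag for tag, _ in reads)
cross_read_tags = [tag for tag, n in tag_counts.items() if n > 1]
print("cross-read tags:", cross_read_tags)
```

The sketch shows why both metrics matter independently: a reader can detect too few tags (missed reads) while other readers detect the same tag too often (cross-reads), and either failure corrupts the entry/exit record.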
According to statute, this plan is to include, among other things, a description of the manner in which the US-VISIT program meets the goals of a comprehensive entry and exit screening system—including both biometric entry and exit—and fulfills statutory obligations imposed on the program by several laws enacted between 1996 and 2002. Until such a plan is finalized and issued, DHS is not able to articulate how entry/exit concepts will fit together—including any interim nonbiometric solutions—and neither DHS nor Congress is positioned to prioritize and allocate resources for a US-VISIT exit capability or plan for the program's future. Our work and other best practice research have shown that applying disciplined and rigorous management practices improves the likelihood of delivering expected capabilities on time and within budget. Such practices and processes include determining how the program fits within the larger context of an agency's strategic plans and related operational and technology environments, whether the program will produce benefits in excess of costs over its useful life, and whether program impacts and options are being fully identified, considered, and addressed. To further ensure that programs are managed effectively, it is important that they be executed in accordance with acquisition and financial management requirements and best practices, and that progress against program commitments is defined and measured so that program officials can be held accountable for results. Over the last several years, we have reported on fundamental limitations in DHS's efforts to define and justify the program's future direction and to cost-effectively manage the delivery of promised capabilities on time and within budget. To a large degree, what is operating and what is not operating today, and what future program changes are underway and yet to be defined, are affected by these limitations.
DHS needs to address these challenges going forward, and the recommendations that we made are aimed at encouraging this. Until these recommendations are fully implemented, the program will be at greater risk of not optimally meeting mission needs and falling short of meeting expectations. As we previously reported, agency programs need to properly fit within a common strategic context or frame of reference governing key aspects of program operations (such as who is to perform what functions, when and where they are to be performed, what information is to be used to perform them, and what rules and standards will govern the use of technology to support them). Without a clear operational context to guide and constrain both US-VISIT and other border security and immigration enforcement initiatives, DHS risks investing in programs and systems that are duplicative, are not interoperable, and do not optimize enterprisewide mission operations and produce intended outcomes. For almost 4 years, DHS has continued to pursue US-VISIT (both in terms of deploying interfaces between and enhancements to existing systems and in defining a longer-term, strategic US-VISIT solution) without producing the program’s operational context. In September 2003, we reported that DHS had not defined key aspects of the larger homeland security environment in which US-VISIT would need to operate. In the absence of a DHS-wide operational and technological context, program officials were making assumptions about certain policy and standards decisions that had not been made, such as whether official travel documents would be required for all persons who enter and exit the country—including U.S. and Canadian citizens—and how many fingerprints would be collected for biometric comparisons. We further reported that if the program office’s assumptions and decisions turned out to be inconsistent with subsequent policy or standards decisions, it would require US-VISIT rework. 
According to the program's Chief Strategist, an immigration and border management strategic plan was drafted in March 2005 to show how US-VISIT is aligned with DHS's organizational mission and to define an overall vision for immigration and border management. According to this official, the vision provides for an immigration and border management enterprise that unifies multiple departmental and external stakeholders around common objectives, strategies, processes, and infrastructures. As of February 2007, about 2 years later, we were told that this strategic plan had not yet been approved, although the program's Acting Director stated that the plan is currently with OMB and should be provided to the House and Senate Appropriations Subcommittees on Homeland Security by March 2007. However, at the same time, US-VISIT has not taken steps to ensure that the direction it is taking is both operationally and technologically aligned with DHS's enterprise architecture (EA). As the report that we issued this week states, the DHS Enterprise Architecture Board, which is the DHS entity that determines EA compliance, has not reviewed US-VISIT's architecture compliance for more than 2 years. However, since August 2004, both US-VISIT and the EA have changed. For example, additional functionality has been added, such as the interoperability of US-VISIT's Automated Biometric Identification System (IDENT) and the Department of Justice's Integrated Automated Fingerprint Identification System (IAFIS), and the expansion of IDENT to collect 10 rather than 2 fingerprints. Also, two versions of the DHS EA have been issued since August 2004. While the strategic plan has not been approved or disseminated, the program office has developed a strategic vision and blueprint and begun to implement it. According to program officials, this future vision is to be delivered through a number of planned mission capability enhancements.
Of these, the first enhancement is underway and is to provide several new capabilities, including what the program refers to as "Unique Identity," which is to include the migration from 2-fingerprint to 10-fingerprint collection at program enrollment. It is also to make US-VISIT's IDENT system and the Department of Justice's IAFIS system interoperable. US-VISIT officials currently plan to complete Unique Identity in several phases and have it fully operational by December 2009, although these plans have not yet been approved by DHS. At the same time, DHS has launched other major border security programs without adequately defining their relationships to US-VISIT and to each other. For example, the Intelligence Reform and Terrorism Prevention Act of 2004 directs DHS and the Department of State to develop and implement a plan, no later than June 2009, that requires U.S. citizens and foreign nationals of Canada, Bermuda, and Mexico to present a passport or other document or combination of documents deemed sufficient to show identity and citizenship to enter the United States (this is currently not a requirement for these individuals entering the United States via sea and land POEs from most countries within the western hemisphere). This effort, known as the Western Hemisphere Travel Initiative, was first announced in 2005. In May 2006, we reported that DHS and the Department of State had taken some steps to carry out the initiative, but they had a long way to go to implement their proposed plans. Among other things, key decisions had yet to be made about what documents other than a passport would be acceptable when U.S. citizens and citizens of Canada enter or return to the United States. Further, while DHS and the Department of State had proposed an alternative form of passport, called a PASS card, that would rely on RFID technology to help DHS process U.S.
citizens re-entering the country, DHS had not made decisions involving a broad set of considerations that include (1) utilizing security features to protect personal information, (2) ensuring that proper equipment and facilities are in place to facilitate crossings at land borders, and (3) enhancing compatibility with other border crossing technology already in use. DHS has also initiated another border security program, known as the Secure Border Initiative (SBI)—a multi-year, multi-billion-dollar program to secure the borders and reduce illegal immigration by installing state-of-the-art surveillance technologies along the border, increasing border security personnel, and ensuring information access to DHS personnel at and between POEs. Under SBI and its component, called SBInet, DHS plans to integrate personnel, infrastructures, technologies, and rapid response capability into a comprehensive border protection capability. DHS reports that, among other things, SBInet is to encompass both the northern and southern land borders, including the Great Lakes, under a unified border control strategy whereby CBP is to focus on the interdiction of cross-border violations between and at the land POEs, funneling traffic to the land POEs. As part of SBI, DHS also plans to focus on interior enforcement—disrupting and dismantling cross-border crime into the interior of the United States while locating and removing aliens who are present in the United States in violation of law. However, it is unclear how SBInet will be linked, if at all, to US-VISIT so that the two can share technology, infrastructure, and data. Clearly defining the dependencies among US-VISIT and programs like the Western Hemisphere Travel Initiative and SBI is important because there is commonality among their strategic goals and operational environments. For example, both US-VISIT and SBI share the goal of securing the POEs. Moreover, there is overlap in the data that each is to produce and use.
For example, both US-VISIT and the Western Hemisphere Travel Initiative will require identification data for travelers at POEs. Despite these dependencies, DHS has yet to define these relationships or how they will be managed. Further, according to a March 6, 2006 memo from the DHS Joint Requirements Council, the US-VISIT strategic plan did not provide evidence of sufficient coordination between the program and the other entities involved in border security and immigration efforts. The council’s recommendation was that the strategic plan not be approved until greater coordination between US-VISIT and other components was addressed. According to the Acting Program Director, a number of efforts are underway to coordinate with other entities, such as with CBP on RFID, with the Coast Guard on development of a mobile biometric reader, and with State on standards for document readers. Without a clear, complete, transparent, and understood definition of how related programs and initiatives are to interact, US-VISIT and other border security and immigration enforcement programs run the risk of being defined and implemented in a way that does not optimize DHS-wide performance and results. The decision to invest in any system or capability should be based on reliable analyses of return on investment. That is, an agency should have reasonable assurance that a proposed program will produce mission value commensurate with expected costs and risks. According to OMB guidance, individual increments of major systems should be individually supported by analyses of benefits, cost, and risk. Thus far, DHS has yet to develop an adequate basis for knowing whether its incrementally deployed US-VISIT capabilities represent a good return on investment, particularly in light of shortfalls in DHS’s assessments of the program’s operational impacts, including costs of proposed capabilities. 
Without this knowledge, DHS will not know until after the fact whether it is investing wisely or pursuing cost-effective and affordable solutions. US-VISIT had not assessed the costs and benefits of its early increments. For example, we reported in September 2003 that it had not assessed the costs and benefits of Increment 1. Again, in February 2005, we reported that although the program office developed a cost-benefit analysis for its land entry capability, it had not justified the investment because the treatment of both benefits and costs was unclear and insufficient. Further, we reported that the cost estimates on which the cost-benefit analysis was based were of questionable reliability because effective cost-estimating practices were not followed. Most recently, in February 2006, we reported again that the program office had not justified its investment in its air and sea exit capability. For example, we reported that while the cost-benefit analysis explained why the investment was needed, and considered at least two alternatives to the status quo, which is consistent with OMB guidance for cost-benefit analyses, it did not include a complete uncertainty analysis for the three exit alternatives evaluated. Specifically, it did not include a sensitivity analysis for the three alternatives, which is a major part of an uncertainty analysis. A complete analysis of uncertainty is important because it provides decision makers with a perspective on the potential variability of the cost and benefit estimates should the facts, circumstances, and assumptions change. Further, the cost estimate upon which the analysis was based did not meet key criteria for reliable cost estimating. For example, it did not include a detailed work breakdown structure, which serves to organize and define the work to be performed so that associated costs can be identified and estimated.
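To illustrate what a sensitivity analysis adds to a cost-benefit analysis, the sketch below varies one assumption at a time and shows how the net benefit estimate swings. All figures (benefits, costs, horizon, discount rate, and the perturbation ranges) are hypothetical and chosen only to show the mechanics, not US-VISIT estimates.

```python
# One-way sensitivity analysis for a discounted cost-benefit estimate.
# All dollar figures are hypothetical (millions of dollars).

def net_benefit(annual_benefit, annual_cost, years, discount_rate):
    """Discounted benefits minus costs over the analysis horizon."""
    return sum(
        (annual_benefit - annual_cost) / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

base = {"annual_benefit": 12.0, "annual_cost": 9.0,
        "years": 10, "discount_rate": 0.07}
baseline = net_benefit(**base)
print(f"baseline net benefit: ${baseline:.1f}M")

# Perturb one assumption at a time and record the swing in the estimate.
for param, low, high in [("annual_benefit", 8.0, 16.0),
                         ("discount_rate", 0.03, 0.10)]:
    lo = net_benefit(**{**base, param: low})
    hi = net_benefit(**{**base, param: high})
    print(f"{param}: net benefit ranges from ${lo:.1f}M to ${hi:.1f}M")
```

Note that under these illustrative numbers the investment looks positive at baseline but turns negative if the benefit assumption proves optimistic, which is exactly the kind of variability a decision maker needs to see before committing funds.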
Further, as we state in our February 2007 report, DHS has devoted considerable time and resources toward establishing an operational exit capability at land, air, and sea POEs. For example, over the last 4 years, DHS has committed over $160 million to evaluate and operate exit pilots at selected air, sea, and land POEs. Notwithstanding this considerable investment of time and resources, the US-VISIT program still does not have either an operational exit capability or a viable exit solution to deploy to all air, sea, and land POEs. Moreover, US-VISIT exit pilot reports have raised concerns and identified limitations. For example, as we previously stated, the land exit pilots experienced several performance problems, such as the failure of RFID readers to detect a majority of travelers' tags during testing and cross-reads, in which multiple RFID readers installed on poles or structures over roads, called gantries, picked up information from the same visitor. Notwithstanding these results, we reported in February 2007 that the program office planned to invest another $33.5 million to continue its air and sea exit pilots. However, neither the fiscal year 2006 expenditure plan nor other exit-related program documentation adequately defined what these efforts entail or what they will accomplish. In particular, the plan and other exit-related documentation merely state that $33.5 million will be used to continue air and sea exit pilots while a comprehensive exit solution is developed. They do not adequately describe measurable outcomes (benefits and results) from the pilot efforts, or related cost, schedule, and capability commitments that will be met. Further, the plan does not recognize the challenges revealed by the prior exit efforts, nor does it show how proposed exit investments address these challenges.
In addition, the plan allocates more funding for continuing the air and sea exit pilots ($33.5 million) than the prior year's plan said would be needed to fully deploy an operational air and sea exit solution ($32 million). According to program officials, the air and sea exit pilots are being continued to maintain a presence intended to provide a deterrent effect at exit locations, and to gather additional data that could help support planning for a comprehensive exit solution. Moreover, US-VISIT reported in August 2006 that it planned to spend an additional $21.5 million to continue its land exit demonstration project. However, we reported in February 2007 that these plans lacked adequate justification in light of the problems we discussed earlier in this statement. Accordingly, program officials told us that they intend to terminate the land exit project until a comprehensive exit strategy can be developed. They have also stated that a small portion of the $21.5 million is to be used to close out the demonstration project and have requested that the remainder of the money be reprogrammed to support Unique Identity. Knowing how planned US-VISIT capabilities will impact POE operations is critical to US-VISIT investment decision makers. In May 2004, we reported that the program had not assessed how deploying entry capabilities at land POEs would impact the workforce and facilities. We questioned the validity of the program's assumptions and plans concerning workforce and facilities, since the program lacked a basis for determining whether its assumptions were correct and thus whether its plans were adequate. Subsequently, the program office evaluated the operational performance of the land entry capability with the stated purpose of determining the effectiveness of its performance at the 50 busiest land POEs.
For this evaluation, the program office established a baseline for comparing the average time it takes to issue and process entry/exit forms at 3 of these 50 POEs, and then conducted two evaluations of the processing times at the three POEs, one after the entry capability was deployed as a pilot, and another 3 months later, after the entry capability was deployed to all 50 POEs. The evaluation results showed that the average processing times decreased for all three sites. Program officials concluded that these results supported their workforce and facility investment assumptions that no additional staff was required to support deployment of the entry capability and that minimal modifications were required at the facilities. However, the scope of the evaluations was not sufficient to satisfy the evaluations' stated purpose of assessing the full impact of the entry capability. For example, the selection of the three sites, according to program officials, was based on a number of factors, including whether the sites already had sufficient staff to support the pilot. Selecting sites based on this factor is problematic because it presupposes that all POEs have the staff needed to support the land entry capability. In addition, evaluation conditions were not always held constant: specifically, fewer workstations were used to process travelers in establishing the baseline processing times at two of the POEs than were used during the pilot evaluations. Moreover, CBP officials from a land port of entry that was not an evaluation site (San Ysidro) told us that US-VISIT deployment had not reduced but actually lengthened processing times. (San Ysidro processes the highest volume of travelers of all land POEs.) Although these officials did not provide specific data to support their statement, their perception nevertheless raises questions about the potential impact of land entry capabilities on the 47 sites that were not evaluated.
Exacerbating this situation is the fact that DHS plans to introduce changes and enhancements to US-VISIT at land POEs to verify the identity of individuals entering the country, including a transition from digitally scanning 2 fingerprints to 10. While such changes are intended to further enhance border security, deploying them may have an impact on aging and spatially constrained land POE facilities because they could increase inspection times and adversely affect POE operations. Moreover, the increase from 2 to 10 fingerprints can affect the capacity of the systems and communications networks because of the larger data sets being processed and transmitted (10 vs. 2 fingerprints). This need for increased capacity will in turn affect program costs. The impact of planned exit capabilities at air and sea POEs has also not been adequately analyzed, and is thus not available to inform investment decisions. In February 2005, we reported that the program office had not adequately planned for evaluating its exit pilot at air and sea POEs because the pilot’s evaluation scope and timeline were compressed. As a result, the US-VISIT program office extended the pilot from 5 to 14 POEs (12 airports and 2 seaports). Notwithstanding the expanded scope of the pilot, the exit alternatives were not sufficiently evaluated. Specifically, the program office evaluated these alternatives against three criteria, including compliance with the exit process. According to the exit evaluation plan report, the average compliance rate across all three alternatives was only 24 percent. The evaluation report cited several reasons for the low compliance rate, including that compliance during the pilot was voluntary.
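The scale of that capacity impact can be sketched with back-of-the-envelope arithmetic; the per-fingerprint payload size below is a hypothetical placeholder for illustration, not a figure from this statement:

```python
# Illustrative sketch only: the per-fingerprint size is an assumed
# placeholder, not a figure reported by DHS or GAO.
ASSUMED_KB_PER_PRINT = 10  # hypothetical compressed image size, in KB

def transaction_kb(num_prints, kb_per_print=ASSUMED_KB_PER_PRINT):
    """Biometric payload captured and transmitted per traveler, in KB."""
    return num_prints * kb_per_print

old, new = transaction_kb(2), transaction_kb(10)
print(f"Payload grows {new / old:.0f}x, from {old} KB to {new} KB")
```

Whatever the actual per-print size, the fivefold growth in prints drives a roughly fivefold growth in the biometric payload each POE workstation must capture, process, and transmit, which is the source of the capacity pressure noted above.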
As a result, the evaluation report concluded that national deployment of the exit solution will not meet the desired compliance rate unless the scope of the exit process is expanded to incorporate an enforcement mechanism, such as not allowing persons to reenter the United States if they do not comply with the exit process or not allowing persons to board a carrier until they are processed by an airline or the Transportation Security Administration. As of February 2006, program officials had not conducted any formal evaluation of enforcement mechanisms or their possible effect on compliance and cost, and according to the Acting Program Director, they do not plan to do so. Program management is an important and integral aspect of any system acquisition program. The importance of program management, however, does not by itself justify any level of investment in such activities. Rather, investments in program management capabilities should be viewed the same as investments in any program capability, meaning the scope, nature, size, and value of the investment should be disclosed and justified in relation to the size and significance of the acquisition activities being performed. As our February 2007 report states, US-VISIT’s planned investment in program management-related activities has risen steadily over the last 4 years, while planned investment in development of new program capabilities has declined. Figure 3 shows the breakdown of planned expenditures for US-VISIT fiscal year 2002 through 2006 expenditure plans. Specifically, the fiscal year 2003 expenditure plan provided $30 million for program management and operations and about $325 million for new development efforts, whereas the fiscal year 2006 plan provided $126 million for program management-related functions—an increase of $96 million—and $93 million for new development. 
This means that the fiscal year 2006 plan proposed expending $33 million more for program management and operations than for new development. The increase in planned program management-related expenditures is more pronounced if it is viewed as a percentage of planned development expenditures. Figure 4 shows planned US-VISIT expenditures for program management and operations as a percentage of development for fiscal years 2002 through 2006. Specifically, planned program management-related expenditures represented about 9 percent of planned development in fiscal year 2003, but represented about 135 percent of fiscal year 2006 development, meaning that the fiscal year 2006 expenditure plan proposed spending about $1.35 on program management-related activities for each dollar spent on developing new US-VISIT capability. Moreover, the fiscal year 2006 expenditure plan did not explain the reasons for this recent growth or otherwise justify the sizeable proposed investment in program management and operations on the basis of measurable and expected value. Further, the plan did not adequately describe the range of planned program management and operations activities. Program officials told us that the DHS Acting Undersecretary for Management raised similar concerns about the large amount of program management and operations funding in the expenditure plan. In January 2007, DHS submitted a revised expenditure plan to the House and Senate Appropriations Subcommittees on Homeland Security, at the committee’s direction, to address their concerns. The revised plan allocates some program management funds to individual increments and to two new categories--program services and data integrity and biometric support, and program and project support contractor services. However, the revised plan still shows a relatively sizeable portion of proposed funding going toward program management-related activities.
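The ratios above follow directly from the figures in the two expenditure plans; a quick sketch of the arithmetic:

```python
# Planned US-VISIT expenditures cited above, in millions of dollars.
fy2003_mgmt, fy2003_dev = 30, 325    # FY2003 plan
fy2006_mgmt, fy2006_dev = 126, 93    # FY2006 plan

# Program management and operations as a share of new development.
ratio_2003 = fy2003_mgmt / fy2003_dev   # about 9 percent
ratio_2006 = fy2006_mgmt / fy2006_dev   # about 135 percent

print(f"FY2003: ${ratio_2003:.2f} on management per development dollar")
print(f"FY2006: ${ratio_2006:.2f} on management per development dollar")
print(f"Management growth since FY2003: ${fy2006_mgmt - fy2003_mgmt} million")
```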
Managing major programs like US-VISIT requires applying discipline and rigor when acquiring and accounting for systems and services. Our work and other best practice research have shown that applying such rigorous management practices improves the likelihood of delivering expected capabilities on time and within budget. In other words, the quality of IT systems and services is largely governed by the quality of the management processes involved in acquiring and managing them. Some of these processes and practices are embodied in the Software Engineering Institute’s (SEI) Capability Maturity Models®, which define, among other things, acquisition process management controls that, if implemented effectively, can greatly increase the chances of acquiring systems that provide promised capabilities on time and within budget. Other practices are captured in OMB guidance, which establishes policies for planning, budgeting, acquisition, and management of federal capital assets. Over the last several years, we have made numerous recommendations aimed at strengthening US-VISIT program management controls relative to acquisition management, including for example configuration management, security and privacy management, earned value management (EVM), and contract tracking and oversight. The program office has taken steps to lay the foundation for establishing several of these controls. For example, the program adopted the SEI Capability Maturity Model Integration (CMMI®) to guide its efforts to employ effective acquisition management practices, and approved an acquisition management process improvement plan dated May 16, 2005. The goal, as stated in the plan, was to conduct an independent CMMI assessment in October 2006 to affirm that requisite process controls were in place and operating. In September 2005, the program office completed an initial assessment of 13 key acquisition process areas that revealed a number of weaknesses.
To begin addressing these weaknesses, the program office narrowed the scope of the process improvement activities from 13 to 6 (project planning, project monitoring and control, requirements development and management, configuration management, product and process quality assurance, and risk management) of the CMMI process areas and revised its process improvement plan in April 2006 to reflect these changes. In May 2006, the program conducted a second internal assessment of the six key process areas, and according to the results of this assessment, improvements were made, but weaknesses remained in all six process areas. For example, a number of key acquisition management documents were not adequately prepared and processes were not sufficiently defined, including those related to systems development, budget and finance, facilities, and strategic planning (e.g., product work flow among organizational units was unclear and not documented); and roles, responsibilities, work products, expectations, resources, and accountability of external stakeholder organizations were not well-defined. Notwithstanding these weaknesses, program officials told us that their self-assessments show that they have made incremental progress in implementing the 113 practices associated with the six key processes. (See figure 5 for US-VISIT’s progress in implementing these practices.) However, they also recently decided to postpone indefinitely the planned October 2006 independent appraisal. Instead, the program intends to perform quarterly internal assessments until the results show that they can pass an independent appraisal. Further, the program has not committed to a revised target date for having an external appraisal. The acquisition management weaknesses in the six key process areas are exacerbated by weaknesses in other areas. For example, we recently reported that the US-VISIT contract tracking and oversight process suffers from a number of weaknesses.
Specifically, we reported that the program had not effectively overseen US-VISIT-related contract work performed on its behalf by other DHS and non-DHS agencies, and these agencies did not always establish and implement the full range of controls associated with effective management of contractor activities. Further, the program office and other agencies did not implement effective financial controls. In particular, the program office and other agencies managing US-VISIT–related work were unable to reliably report the scope of contracting expenditures. In addition, some agencies improperly paid and accounted for related invoices, including making a duplicate payment and making payments for non-US-VISIT services from funds designated for US- VISIT. Fully and effectively implementing the key acquisition management and related controls discussed above takes considerable time. However, considerable time has elapsed since we first recommended establishing these controls; they are not yet operational, and it is unclear when they will be. Therefore, it is important that these improvement efforts stay on track. Until these capabilities are in place, the program risks not meeting its stated goals and commitments. US-VISIT has not yet implemented other key management practices, such as developing and implementing a security plan and employing an EVM system to help manage and control program cost and schedule. As we previously reported, the program’s 2004 security plan generally satisfied OMB and the National Institute of Standards and Technology security guidance. Further, the fiscal year 2006 expenditure plan states that all of the US-VISIT component systems have been certified and accredited and given full authority to operate. However, the 2004 security plan preceded the US-VISIT risk assessment, which was not completed until December 2005, and the security plan was not updated to reflect this risk assessment.
According to program officials, they intend to develop a security strategy by the end of 2006 that reflects the risk assessment. We have ongoing work for the Senate Committee on Homeland Security and Governmental Affairs to review the information security controls associated with computer systems and networks supporting the US-VISIT program. Regarding EVM, the program is currently relying on the prime contractor’s EVM system to manage the prime contractor’s progress against cost and schedule goals. According to the fiscal year 2006 expenditure plan, the program office has assessed the prime contractor’s EVM system against relevant standards. However, in reality, this EVM system was self-certified by the prime contractor in December 2003 as meeting established standards. OMB requires that agencies verify contractor self-certifications. The program office has yet to do this, although program officials told us that they plan to retain the services of another contractor to perform this validation. This needs to be done quickly. Our review of the integrated baseline review, which agencies are required by OMB to complete to ensure that the EVM program baseline is accurate, showed that it did not address key baseline considerations, such as cost and schedule risks. Moreover, other US-VISIT contractors have not been required to use EVM, although program officials told us that this was to change effective October 1, 2006. To ensure that programs manage their performance effectively, it is important that they define and measure progress against program commitments and hold themselves accountable for results. Measurements of the operational performance, progress, and results are important to reasonably ensure that problems and shortfalls can be addressed and resolved in a timely fashion and so that responsible parties can be held accountable. 
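For context on what an EVM system such as the prime contractor's measures, the standard earned value variance formulas can be sketched as follows (the dollar figures here are illustrative only, not program data):

```python
def evm_metrics(planned_value, earned_value, actual_cost):
    """Standard earned value management metrics.

    CV > 0 means under budget; SV > 0 means ahead of schedule;
    CPI and SPI above 1.0 indicate favorable performance.
    """
    cost_variance = earned_value - actual_cost
    schedule_variance = earned_value - planned_value
    cpi = earned_value / actual_cost      # cost performance index
    spi = earned_value / planned_value    # schedule performance index
    return cost_variance, schedule_variance, cpi, spi

# Illustrative numbers: $100M of work planned, $80M earned, $90M spent.
cv, sv, cpi, spi = evm_metrics(100, 80, 90)
print(f"CV={cv}, SV={sv}, CPI={cpi:.2f}, SPI={spi:.2f}")
```

An accurate program baseline (the planned value) is what makes these metrics meaningful, which is why an integrated baseline review that omits cost and schedule risks undercuts the value of the EVM data.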
More specifically, to permit meaningful program oversight, it is important that expenditure plans describe how well DHS is progressing against the commitments made in prior expenditure plans. However, US-VISIT’s expenditure plan for fiscal year 2006 (the fifth expenditure plan) continued a longstanding pattern of not describing progress against commitments made in previous plans. For example, according to the fiscal year 2005 expenditure plan, the prime contractor was to begin integrating the long-term Increment 4 strategy into the interim US-VISIT system’s environment and the overall DHS enterprise architecture, and US-VISIT and the prime contractor were to work with the stakeholder community to identify opportunities for delivery of long-term capabilities under Increment 4. However, the fiscal year 2006 plan does not discuss progress or accomplishments relative to these commitments. Additionally, the expenditure plan committed to begin deploying the most effective exit alternative for capturing biometrics at air and sea POEs during fiscal year 2005. In contrast, the 2006 expenditure plan states that the exit pilots will continue throughout fiscal year 2006 and does not address whether the fiscal year 2005 deployment schedule commitment was met. Also, the fiscal year 2006 expenditure plan did not address all performance measures cited in the fiscal year 2005 plan. Specifically, the 2005 plan included 11 measures. In contrast, the 2006 plan listed 7 measures, 4 of which are similar, but not identical to, some of the 11 measures in the 2005 plan. This means that several of the 2005 plan’s measures are not addressed in the 2006 plan. Moreover, even in cases of similar performance measures, the fiscal year 2006 plan does not adequately describe progress in meeting commitments.
For example, the fiscal year 2005 expenditure plan cited a performance measurement of “Pre-entry watch list hits on biometrically enabled visa applications.” The fiscal year 2006 plan cites the performance measure of “Number of biometric watch list hits for visa applicants processed at consular offices.” According to the latter plan, in fiscal year 2005 there were 897 such hits; however, neither plan cites a performance target against which to gauge progress, assuming that the two performance measures mean the same thing. Without such measurements, program performance and accountability can suffer. Developing and deploying complex technology that records the entry and exit of millions of visitors to the United States, verifies their identities to mitigate the likelihood that terrorists or criminals can enter or exit at will, and tracks persons who remain in the country longer than authorized is a worthy goal in our nation’s effort to enhance border security in a post-9/11 era. But doing so also poses significant challenges; foremost among them is striking a reasonable balance between US-VISIT’s goals of providing security to U.S. citizens and visitors while facilitating legitimate trade and travel. DHS has made considerable progress making the entry portion of the US- VISIT program at air, sea and land POEs operational, but our work raised questions whether DHS has adequately assessed how US-VISIT has affected operations at land POEs. Because US-VISIT will likely continue to have an impact on land POE facilities as it evolves—especially as new technology and equipment are introduced—it is important for US-VISIT and CBP officials to have sufficient management controls for identifying and reporting potential computer and other operational problems that could affect the ability of US-VISIT entry capability to operate as intended. 
With respect to DHS’s effort to create an exit verification capability, developing and deploying this capability at land POEs has posed a set of challenges that are distinct from those associated with entry. US-VISIT has not determined whether it can achieve, in a realistic time frame, or at an acceptable cost, the legislatively mandated capability to record the exit of travelers at land POEs using biometric technology. Apart from acquiring new facilities and infrastructure at an estimated cost of billions of dollars, US-VISIT officials have acknowledged that no technology now exists to reliably record travelers’ exit from the country, and to ensure that the person leaving the country is the same person who entered, without requiring that person to stop upon exit—potentially imposing a substantial burden on travelers and commerce. US-VISIT officials stated that they believe a biometrically based solution that does not require those exiting the country to stop for processing, that minimizes the need for major facility changes, and that can be used to definitively match a visitor’s entry and exit will be available in 5 to 10 years. In the interim, it remains unclear how DHS plans to proceed. According to statute, DHS was required to report more than a year ago on its plans for developing a comprehensive biometric entry and exit system, but DHS has yet to finalize this road map for Congress. Until DHS finalizes such a plan, neither Congress nor DHS is likely to have sufficient information as a basis for decisions about various factors relevant to the success of US-VISIT, ranging from funding needed for any land POE facility modifications in support of the installation of exit technology to the trade-offs associated with ensuring traveler convenience while providing verification of travelers’ departure consistent with US- VISIT’s national security and law enforcement goals. 
Fundamental questions about the program’s future direction and fit within the larger homeland security context as well as its return on investment remain unanswered. Moreover, the program is overdue in establishing the means to ensure that it is pursuing the right US-VISIT solution, and that it is managing it the right way. The longer the program proceeds without these, the greater the risk that the program will not optimally support mission operations and will fall short of commitments. Measuring and disclosing the extent to which these commitments are being met are also essential to holding the department accountable. We look forward to continuing to work constructively with the US-VISIT program to better ensure the program’s success. This concludes my prepared testimony. I would be happy to respond to any questions that Members of the Committee may have. For further information about this testimony, please contact me at (202) 512-8777 or [email protected], or Randolph Hite, Director, at (202) 512-3439 or [email protected]. Other major contributors to this testimony include John Mortin, Assistant Director; Deborah Davis, Assistant Director; Amy Bernstein; Frances Cook; Odi Cuero; David Hinchman; James Houtz; Richard Hung; Sandra Kerr; Amanda Miller; Freda Paintsil; James R. Russell; Sushmita Srikanth; and Jonathan Tumin. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony summarizes GAO's work on the Department of Homeland Security's (DHS) efforts to implement the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program at air, sea, and land ports of entry (POE).
US-VISIT is designed to collect, maintain, and share data on selected foreign nationals entering and exiting the United States at air, sea, and land POEs. These data, including biometric identifiers like digital fingerprints, are to be used to screen persons against watch lists, verify identities, and record arrival and departure. This testimony addresses DHS's efforts to (1) implement US-VISIT entry capability, (2) implement US-VISIT exit capability, and (3) resolve longstanding management challenges that could impair DHS's ability to effectively implement the US-VISIT program. GAO analyzed DHS and US-VISIT documents, interviewed program officials, and visited 21 land POEs with varied traffic levels on both borders. DHS is operating US-VISIT entry capabilities at most POEs and has begun work to move from 2- to 10-fingerprint biometric capabilities and expand electronic information sharing with stakeholders. Of particular note, a US-VISIT biometric-based entry screening capability is operating at 115 airports, 14 seaports, and 154 land POEs. While US-VISIT has improved DHS's ability to process visitors and verify identities upon entry, we found that management controls in place to identify and evaluate computer and other operational problems at land POEs were insufficient and inconsistently administered. Although US-VISIT has conducted various exit demonstration projects at a small number of POEs, a biometric exit capability is not currently available. According to program officials, this is due to a number of factors. For example, at this time the only proven technology available for biometric land exit verification would necessitate mirroring the processes currently in use for entry at these POEs, which would create costly staffing demands and infrastructure requirements, and introduce potential trade, commerce, and environmental impacts.
Further, a pilot project to examine an alternative technology at land POEs did not produce a viable solution. By statute, DHS was to have reported to Congress by June 2005 on how it intended to fully implement a comprehensive, biometric entry/exit program, but DHS had not yet reported how it intended to do so, or use nonbiometric solutions. DHS continues to face longstanding US-VISIT management challenges and future uncertainties. For example, DHS had not articulated how US-VISIT is to strategically fit with other land border security initiatives and mandates and could not ensure that these programs work in harmony to meet mission goals and operate cost effectively. DHS had drafted a strategic plan defining an overall immigration and border management strategy but, as of February 2007, the plan was under review by the Office of Management and Budget. Further, critical acquisition management processes need to be established and followed to ensure that program capabilities and expected mission outcomes are delivered on time and within budget. These processes include effective project planning, requirements management, contract tracking and oversight, test management, and financial management. Until these issues are addressed, the risk of US-VISIT continuing to fall short of expectations is increased.
IT can enrich people’s lives and improve organizational performance. For example, during the last two decades, the Internet has matured from being a means for academics and scientists to communicate with each other to being a national resource where citizens can interact with their government in many ways, including receiving services and supplying and obtaining information. While investments in IT have the potential to improve lives and organizations, some federally funded IT projects can—and have— become risky, costly, unproductive mistakes. As part of a comprehensive effort to increase the operational efficiency of federal technology assets and deliver greater value to the American taxpayer, federal agencies are shifting to the deployment of cloud services. Cloud computing takes advantage of several broad evolutionary trends in IT, including the use of virtualization. According to NIST, cloud computing is a means “for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.” NIST also states that an application should possess five essential characteristics to be considered cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Essentially, cloud computing applications are network-based and scalable on demand. According to OMB, cloud computing brings a wide range of benefits: Economical: cloud computing is a pay-as-you-go approach to IT, in which a low initial investment is required to begin, and additional investment is needed only as system use increases. Flexible: IT departments that anticipate fluctuations in user demand no longer need to scramble for additional hardware and software. With cloud computing, they can add or subtract capacity quickly and easily.
Fast: cloud computing eliminates long procurement and certification processes, while providing a near-limitless selection of services. According to NIST, cloud computing offers three service models: Infrastructure as a service—the service provider delivers and manages the basic computing infrastructure of servers, software, storage, and network equipment upon which a platform (i.e., operating system and programming tools and services) to develop and execute applications can be developed by the consumer. Platform as a service—the service provider delivers and manages the underlying infrastructure (i.e., servers, software, storage, and network equipment), as well as the platform (i.e., operating system, and programming tools and services) upon which the consumer can create applications using programming tools supported by the service provider or other sources. Software as a service—the service provider delivers one or more applications and the computational resources and underlying infrastructure to run them for use on demand as a turnkey service. As can be seen in figure 1 below, each service model offers unique functionality, with consumer control of the environment decreasing from infrastructure to platform to software. NIST has also defined four deployment models for providing cloud services: private, community, public, and hybrid. In a private cloud, the service is set up specifically for one organization, although there may be multiple customers within that organization and the cloud may exist on or off the customer’s premises. In a community cloud, the service is set up for organizations with similar requirements. The cloud may be managed by the organizations or a third party and may exist on or off the organization’s premises. A public cloud is available to the general public and is owned and operated by the service provider. 
A hybrid cloud is a composite of two or more of the above deployment models (private, community, or public) that are bound together by standardized or proprietary technology that enables data and application portability. According to federal guidance, these deployment models determine the number of consumers (tenancy), and the nature of other consumers’ data that may be present in a cloud environment. A public cloud should not allow a consumer to know or control other consumers of a cloud service provider’s environment. However, a private cloud can allow for ultimate control in selecting who has access to a cloud environment. Community clouds and hybrid clouds allow for a mixed degree of control and knowledge of other consumers. Additionally, the cost for cloud services typically increases as control over other consumers and knowledge of these consumers increase. According to OMB, the federal government needs to shift from building custom systems to adopting cloud technologies and shared solutions, which will improve the government’s operational efficiencies and result in substantial cost savings. To achieve these benefits, OMB required agencies to immediately shift to a “Cloud First” policy and increase their use of available cloud and shared services whenever a secure, reliable, and cost-effective cloud solution exists. In order to accelerate the adoption of cloud computing solutions across the government, OMB made cloud computing an integral part of its 25 Point Implementation Plan to Reform Federal Information Technology Management. 
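The NIST taxonomy described above can be summarized in a small data model; this sketch and its names are my own framing of the publications cited in the text, not code from NIST:

```python
from enum import Enum

class ServiceModel(Enum):
    """NIST service models; consumer control decreases top to bottom."""
    IAAS = "infrastructure as a service"
    PAAS = "platform as a service"
    SAAS = "software as a service"

class DeploymentModel(Enum):
    """NIST deployment models, which determine tenancy."""
    PRIVATE = "set up for one organization"
    COMMUNITY = "organizations with similar requirements"
    PUBLIC = "available to the general public"
    HYBRID = "composite of two or more models"

# Per the guidance summarized above: control over (and knowledge of)
# other consumers is greatest in a private cloud and absent in a public
# one, and cost typically rises with that control.
CONSUMER_CONTROL = {
    DeploymentModel.PRIVATE: "ultimate control over who has access",
    DeploymentModel.COMMUNITY: "mixed degree of control and knowledge",
    DeploymentModel.HYBRID: "mixed degree of control and knowledge",
    DeploymentModel.PUBLIC: "no knowledge or control of other consumers",
}
```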
The plan specified six major goals: strengthen program management, streamline governance and improve accountability, increase engagement with industry, align the acquisition process with the technology cycle, align the budget process with the technology cycle, and apply “light technology” and shared solutions. To achieve these goals, the plan outlines 25 action items, such as completing plans to consolidate 800 data centers by 2015 and developing a governmentwide strategy to hasten the adoption of cloud computing. To accelerate the shift to cloud computing, OMB required agencies to identify, plan, and fully migrate three services to a cloud solution by June 2012. In February 2011, OMB issued the Federal Cloud Computing Strategy, as called for in its 25-Point Plan. The strategy provides definitions of cloud computing; benefits of cloud computing, such as accelerating data center consolidations; a decision framework for migrating services to a cloud environment; case studies to support agencies’ migration to cloud computing; and roles and responsibilities for federal agencies. For example, the strategy states that NIST’s role is to lead and collaborate with federal, state, and local government agency CIOs, private sector experts, and international bodies to identify and prioritize cloud computing standards and guidance. Further, the strategy notes that an estimated $20 billion of the federal government’s $80 billion in annual IT spending is a potential target for migration to cloud computing solutions. In a December 2011 memo, OMB established the Federal Risk and Authorization Management Program (FedRAMP), a governmentwide program to provide joint authorizations and continuous security monitoring services for all federal agencies. Among other things, the memo required the General Services Administration’s (GSA) FedRAMP program management office to publish a concept of operations, which was completed in February 2012.
The concept of operations states that FedRAMP is to: ensure that cloud-based services have adequate information security; eliminate the duplication of effort and reduce risk management costs; and enable rapid and cost-effective procurement of information systems/services for federal agencies. Further, the FedRAMP program is to assess and grant cloud service providers provisional authorization to provide cloud services governmentwide. Agencies can leverage the provisional authorization to minimize certification and accreditation processes. FedRAMP reached initial operational capabilities in June 2012 and is to be fully operational in fiscal year 2014. Consistent with OMB’s Cloud Computing Strategy, NIST has issued several key publications related to standards and security. For example: NIST Special Publication (SP) 500-291, NIST Cloud Computing Standards Roadmap identifies current standards, standards gaps, and standardization priorities. For example, it describes the status of cloud computing standards for interoperability, portability, and security. NIST SP 500-292, NIST Cloud Computing Reference Architecture presents the NIST Cloud Computing Reference Architecture and Taxonomy to communicate the components and offerings of cloud computing. The architecture is presented in two parts: (1) a complete overview of roles; and (2) the necessary components for managing and providing cloud services, such as service deployment, service orchestration, cloud service management, and security and privacy. NIST SP 800-144, Guidelines on Security and Privacy in Public Cloud Computing provides an overview of public cloud computing and the security and privacy considerations involved. Specifically, the document describes the threats, technology risks, and safeguards surrounding public cloud environments, and their treatment. NIST SP 800-145, The NIST Definition of Cloud Computing defines cloud computing in terms of essential characteristics, service models, and deployment models.
NIST is also working on the Cloud Computing Technology Roadmap (SP 500-293), which is to describe cloud computing security challenges and high-priority gaps for which new or revised standards, guidance, and technology need to be developed. According to NIST officials, NIST plans to publish the roadmap by the end of 2012. In February 2012, the CIO Council and the Chief Acquisition Officers Council issued guidance for acquiring IT in a cloud environment. The guidance identifies 10 key areas unique to federal agencies’ procurement of cloud services that require improved collaboration and alignment during the contracting process. The 10 areas are: Selecting a cloud service—choosing the appropriate cloud service and deployment model. Cloud service provider and end-user agreements—terms of service, and service provider and end-user agreements need to be fully integrated into cloud contracts. Service-level agreements—agreements need to define performance with clear terms and definitions, demonstrate how performance is being measured, and identify what enforcement mechanisms are in place to ensure the conditions are met. Roles and responsibilities—cloud service provider, agency, and integrator roles and responsibilities should be clearly defined. Standards—NIST’s cloud reference architecture should be used for cloud procurements. Security—requirements for the service provider to maintain the security and integrity of the agency data must be clearly defined. Privacy—privacy risks and responsibilities need to be addressed in the contract between federal agencies and service providers. E-discovery—service providers need to be aware of the need to locate, preserve, collect, process, review, and produce electronically stored information in the event of civil litigation or investigation. Freedom of Information Act (FOIA)—all relevant data must be available for appropriate handling under the act. 
E-records—agencies need to ensure that service providers understand the federal agencies’ obligations under the Federal Records Act. More recently, in May 2012, OMB issued its shared services strategy, as called for in its 25-Point Plan. According to OMB, this strategy is to help federal agencies (1) improve return on investment across the agency’s IT portfolio, (2) close productivity gaps by implementing integrated governance processes and innovative IT service solutions, and (3) increase communications with stakeholders to ensure transparency, accountability, and collaboration in the full life cycle of IT shared services. To facilitate these improvements, the strategy provides definitions, concepts, and critical success factors to be considered when implementing IT shared services; an implementation strategy; and a federal governance structure to support federal agencies’ shared services development and implementation efforts. In May 2010, we reported on the efforts of multiple agencies to ensure the security of governmentwide cloud computing. We noted that while OMB, GSA, and NIST had initiated efforts to ensure secure cloud computing, significant work remained to be completed. For example, OMB had not yet finished a cloud computing strategy; GSA had begun a procurement for expanding cloud computing services, but had not yet developed specific plans for establishing a shared information security assessment and authorization process; and NIST had not yet issued cloud-specific security guidance. We made several recommendations to address these issues. Specifically, we recommended that OMB establish milestones to complete a strategy for federal cloud computing and ensure it addressed information security challenges. These include having a process to assess vendor compliance with government information security requirements and the division of information security responsibilities between the customer and vendor. 
OMB subsequently published a strategy in February 2011 that addressed the importance of information security when using cloud computing, but did not fully address several key challenges confronting agencies, such as the appropriate use of attestation standards for control assessments of cloud computing service providers, and the division of information security-related responsibilities between customer and provider. We also recommended that GSA consider security in its procurement for cloud services, including consideration of a shared assessment and authorization process. GSA has since developed its FedRAMP program, an assessment and authorization process for systems shared among federal agencies. Finally, we recommended that NIST issue guidance specific to cloud computing security. As noted previously, NIST has since issued multiple publications that address such guidance. More recently, in October 2011, we testified that 22 of 24 major federal agencies reported that they were either concerned or very concerned about the potential information security risks associated with cloud computing. These risks include being dependent on the security practices and assurances of vendors and the sharing of computing resources. We stated that these risks may vary based on the cloud deployment model. Private clouds, whereby the service is set up specifically for one organization, may have a lower threat exposure than public clouds, whereby the service is available to any paying customer. Evaluating this risk requires an examination of the specific security controls in place for the cloud’s implementation. We also reported that the Federal CIO Council had established a cloud computing Executive Steering Committee to promote the use of cloud computing in the federal government, with technical and administrative support provided by GSA’s cloud computing program management office, but had not finalized key processes or guidance. 
A subgroup of this committee had worked with its members to define interagency security requirements for cloud systems and services and related information security controls. Additionally, in April 2012, we reported that more needed to be done to implement OMB’s 25-Point Plan and measure its results. Among other things, we reported that of the 10 key action items that we reviewed, 3 had been completed and 7 had been partially completed by December 2011. In particular, OMB and agencies’ cloud-related efforts only partially addressed requirements. Specifically, agencies’ plans were missing key elements, such as a discussion of needed resources, migration schedules, or plans for retiring legacy systems. As a result, we recommended, among other things, that the Secretaries of Homeland Security, Veterans Affairs, and the Attorney General direct their respective CIOs to complete elements missing from the agencies’ plans for migrating services to a cloud computing environment. In comments on a draft of this report, each of the agencies generally agreed with our recommendations. OMB requires federal agencies to immediately shift to a “Cloud First” policy by implementing cloud-based solutions whenever a secure, reliable, and cost-effective cloud option exists. To accelerate the shift, OMB required agencies to identify three IT services to be migrated to a cloud solution and develop a plan for each by February 2011, migrate one of the services to a cloud-based solution by December 2011, and migrate the remaining services by June 2012. According to OMB’s 25-Point Plan, migrating these services was intended to build capabilities and momentum in the federal agencies, and to act as a catalyst for agencies to migrate additional services to cloud-based solutions in order to improve the government’s operational efficiency and to reduce operating costs. Each of the seven agencies we reviewed has made progress implementing OMB’s “Cloud First” policy. 
Each agency has incorporated cloud computing requirements into its policies and processes. For example, the Department of State (State) incorporated into its plan a review of its IT investment portfolio to identify candidates for cloud solutions. Similarly, the Department of Agriculture (USDA) identified cloud computing as a high-priority initiative and adopted the “Cloud First” policy of migrating existing, or offering new, IT services to a cloud-based environment. The agency is also developing and deploying an infrastructure to offer cloud-based services to other government departments and agencies. Each agency identified at least three services by February 2011 to implement in a cloud environment and reported that it had implemented at least one cloud service by December 2011. Agencies selected the services based on a mix of criteria, including (1) services that had already been implemented in a cloud environment or were in the process of being migrated, (2) risk to mission functionality, and (3) maturity of the cloud solutions. In selecting the services, most agencies chose existing services, while others developed and implemented new services. Specifically, of the 21 services selected, 13 were migrations of existing functionality and 8 were new services. The most commonly identified services were e-mail, website hosting, and collaboration services. Further, five agencies reported implementing more than one cloud service by December 2011, with four agencies reporting that they had implemented cloud-based services prior to December 2010, which was when OMB issued its 25-Point Plan. In addition, two of the seven agencies do not plan to meet OMB’s deadline to implement three cloud solutions by June 2012. Specifically, USDA plans to complete its Document Management and Correspondence Tracking system in September 2012 and the Small Business Administration (SBA) plans to complete one of its services in August 2012 and another in December 2012. 
While DHS does not plan to implement four of its services until after June 2012, officials reported that it implemented four services by December 2011 and two services by June 2012. See figure 2 for the cloud-based services by agency and service type; and reported planned and implementation dates. While each agency submitted plans to OMB for its selected services, all but 1 of the 20 plans submitted to OMB were missing one or more key required elements. In its 25-Point Plan, OMB required agencies to prepare a plan for implementing each cloud-based service and retiring the associated legacy system. According to OMB, each plan is to contain, among other things, estimated costs of the service, major milestones, and performance goals. However, only 1 plan fully met the key elements as required. For example, of the 20 plans, 7 did not include estimated costs, 5 did not include major milestones, and 11 did not include performance goals. Further, none of the 14 projects migrating existing services included plans to retire the associated legacy systems. See table 1 for our assessment of key elements of the agencies’ plans. While agencies did not include all of these elements in the plans provided to OMB, three agencies later reported that they had estimated costs for five of the seven services. According to agency officials, information was missing because it was not available at the time the plans were submitted to OMB or it was deemed not to be relevant. While developing milestones for services already implemented would appear to add little value, it remains important that agencies develop cost estimates, performance goals, and plans to retire associated legacy systems. Doing so would enable agencies to measure performance and determine whether the cloud-based solution is cost effective, and ensure that savings generated from retiring systems are realized. Additionally, each of the agencies identified opportunities for future cloud implementations. 
For example, GSA officials stated that GSA is considering migrating its storage and help desk services to the cloud, while State officials stated that the agency is considering moving its development environment to a cloud solution. Further, USDA is currently offering a portfolio of cloud services to other agencies through its National Information Technology Center, which, according to USDA officials, is working to provide competitive and scalable services to federal agencies. As agencies implement these and other cloud-based solutions, identifying key information—cost estimates, milestones, performance goals, and legacy system retirement plans—will also be essential in determining whether their activities constitute a positive return on investment, and therefore, whether the benefits of their activities will be fully realized. In transitioning to cloud-based solutions, officials in the agencies we reviewed stated that they encountered challenges that may impede their ability to realize the full benefits of cloud-based solutions: Meeting federal security requirements: Cloud vendors may not be familiar with security requirements that are unique to government agencies, such as continuous monitoring and maintaining an inventory of systems. For example, State officials described their ability to monitor their systems in real time, which they said cloud service providers were unable to match. Treasury officials also explained that the Federal Information Security Management Act’s requirement of maintaining a physical inventory is challenging in a cloud environment because the agency does not have insight into the provider’s infrastructure and assets. Obtaining guidance: Existing federal guidance for using cloud services may be insufficient or incomplete. 
Agencies cited a number of areas where additional guidance is needed, such as purchasing commodity IT and assessing Federal Information Security Management Act security levels. For example, an HHS official noted that the 25-Point Plan required agencies to move to cloud-based solutions before guidance on how to implement it was available. As a result, some HHS operating divisions were reluctant to move to a cloud environment. In addition, Treasury officials noted confusion over NIST definitions of the cloud deployment models, but noted that recent NIST guidance has been more stable. Acquiring knowledge and expertise: Agencies may not have the necessary tools or resources, such as expertise among staff, to implement cloud solutions. DHS officials explained that delivering cloud services without direct knowledge of the technologies has been difficult. Similarly, an HHS official stated that teaching their staff an entirely new set of processes and tools—such as monitoring performance in a cloud environment—has been a challenge. Certifying and accrediting vendors: Agencies may not have a mechanism for certifying that vendors meet standards for security, in part because the Federal Risk and Authorization Management Program (FedRAMP) had not yet reached initial operational capabilities. For example, GSA officials stated that the process to certify Google to meet government standards for their migration to cloud-based e-mail was a challenge. They explained that, contrary to traditional computing solutions, agencies must certify an entire cloud vendor’s infrastructure. In Google’s case, it took GSA more than a year to certify more than 200 Google employees and the entire organization’s infrastructure (including hundreds of thousands of servers) before GSA could use Google’s service. 
Ensuring data portability and interoperability: To preserve their ability to change vendors in the future, agencies may attempt to avoid platforms or technologies that “lock” customers into a particular product. For example, a Treasury official explained that it is challenging to separate from a vendor, in part due to a lack of visibility into the vendor’s infrastructure and data. Overcoming cultural barriers: Agency culture may act as an obstacle to implementing cloud solutions. For example, a State official explained that public leaks of sensitive information have put the agency on a more risk-averse footing, which makes it more reluctant to migrate to a cloud solution. Procuring services on a consumption (on-demand) basis: Because of the on-demand, scalable nature of cloud services, it can be difficult to define specific quantities and costs. These uncertainties make contracting and budgeting difficult due to the fluctuating costs associated with scalable and incremental cloud service procurements. For example, HHS officials explained that it is difficult to budget for a service that could consume several months of budget in a few days of heavy use. Recently issued federal guidance and initiatives recognize many of these challenges. For example, OMB’s Federal Cloud Computing Strategy recognizes the challenge of data portability and interoperability and notes that agencies should consider the availability of technical standards for cloud interfaces that reduce the risk of vendor lock-in. Similarly, several NIST publications—such as their Guidelines on Security and Privacy in Public Cloud Computing and Cloud Computing Reference Architecture— address portability, interoperability, and security standards, and NIST plans to issue additional guidance on cloud computing security, among other things. 
In addition, the FedRAMP program is to create processes for security authorizations and allow agencies to leverage security authorizations on a governmentwide basis in an effort to streamline the certification and accreditation processes. Selected agencies have made progress implementing OMB’s “Cloud First” policy. In particular, agencies have incorporated cloud solutions into their IT and investment management policies and processes, and implemented one or more services in a cloud environment by December 2011. Two agencies do not plan to meet OMB’s requirement to fully implement three services to a cloud environment by June 2012, but plan to do so by year end. Further, agencies’ plans for implementing these services were often missing key information, such as performance goals or legacy system retirement plans. Without complete information, agencies are not in a position to know whether the implementation of the selected services was cost-effective and whether the cost savings generated from retiring legacy systems were realized. Going forward, as agencies implement additional cloud-based solutions, it is important that, at a minimum, they develop estimated costs, milestones, performance goals, and plans for retiring relevant legacy systems. Until agencies’ cloud implementations are sufficiently planned and relevant systems are retired, the benefits of federal efforts to implement cloud solutions—improved operational efficiencies and reduced costs associated with retiring legacy systems— may be delayed or not fully realized. Additionally, agencies are facing a series of challenges as they implement cloud solutions. Recent guidance and initiatives may help to mitigate the impact of these challenges. Further, these initiatives may help agencies assess their readiness to implement cloud-based solutions and guide their implementation. 
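One of the challenges discussed earlier, procuring services on a consumption (on-demand) basis, can be illustrated numerically. The sketch below uses entirely hypothetical unit costs and usage figures, not any agency’s actual data, to show how a brief usage spike can exhaust a fixed budget early in the period:

```python
# Toy illustration of why on-demand pricing complicates budgeting:
# a short burst of heavy use can consume a fixed monthly budget early.
def days_until_budget_exhausted(daily_units, unit_cost, monthly_budget):
    """Return the 1-based day on which cumulative cost first exceeds
    the budget, or None if the budget covers the whole period."""
    total = 0.0
    for day, units in enumerate(daily_units, start=1):
        total += units * unit_cost
        if total > monthly_budget:
            return day
    return None

# 30 days of baseline use (10 units/day) with a 3-day spike of 400 units.
usage = [10] * 30
usage[14:17] = [400, 400, 400]
print(days_until_budget_exhausted(usage, unit_cost=2.0, monthly_budget=1000.0))
# prints 15
```

Under steady baseline use the month would cost $600, well within the $1,000 budget; with the spike, the same budget is exhausted on day 15, mirroring the HHS example of a service consuming months of budget in a few days of heavy use.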
To help ensure the success of agencies’ implementation of cloud-based solutions, we are recommending that the Secretaries of Agriculture, Health and Human Services, Homeland Security, State, and the Treasury; and the Administrators of the General Services Administration and Small Business Administration direct their respective CIOs to take the following two actions: establish estimated costs, performance goals, and plans to retire associated legacy systems for each cloud-based service discussed in this report, as applicable; and develop, at a minimum, estimated costs, milestones, performance goals, and plans for retiring legacy systems, as applicable, for planned additional cloud-based services. We received comments on a draft of this report from all seven departments and agencies in our review, as well as from OMB and NIST. The Departments of Agriculture, Homeland Security, and Treasury, and the GSA agreed with our recommendations; the Department of State agreed with our second recommendation and disagreed with our first recommendation; and HHS and SBA did not agree or disagree with our recommendations. Each agency’s comments are discussed in more detail below. In written comments, USDA’s Acting CIO stated that the department concurred with the content of the report and had no comments. USDA’s written comments are provided in appendix III. In written comments, the Director of DHS’s GAO-OIG Liaison Office concurred with our recommendations and described ongoing and planned actions to address them. DHS’s written comments are provided in appendix IV. The department also provided technical comments, which we have incorporated in the report as appropriate. In comments provided via e-mail, Treasury’s Deputy Assistant Secretary for Information Systems stated that the department agreed with the report and had no comments. In written comments, GSA’s Acting Administrator agreed with our findings and recommendations, and stated that GSA will take action as appropriate. 
GSA’s written comments are provided in appendix V. In written comments, State’s Chief Financial Officer concurred with our recommendation to develop cost estimates, milestones, performance goals, and plans for retiring legacy systems for its planned cloud-based services. The department stated that it has established an annual requirement for all programs and initiatives to conduct an alternative analysis for retiring legacy systems and using cloud-based services, if feasible. The analysis includes the development of estimated costs, milestones, performance goals, and legacy system retirement plans. The department disagreed with our recommendation to establish cost estimates, performance goals, and plans to retire associated legacy systems for each of the department’s cloud-based services discussed in this report, noting that these services did not have associated legacy systems to be retired. In a clarifying conversation, the Division Chief, Bureau of Information Resource Management, explained that one of the two migrated services ran on a virtual machine that hosts many other programs, and the other service transitioned from internally-managed software to a cloud-based service, neither of which required the retirement of an existing system. We acknowledge that a retirement plan may not be applicable for these two services; however, our recommendation is not focused solely on the need for legacy retirement plans, but also identifies the need to establish cost estimates and performance goals for each cloud-based service discussed in this report. As stated in this report, State did not establish performance goals for its electronic library service. Performance goals help to set priorities and drive progress toward key outcomes, thus enabling agencies to measure performance and determine whether the acquired cloud-based service is performing as intended and achieving the desired outcome. 
Therefore, we believe that the recommendation is applicable and relevant to the department. State’s written comments are provided in appendix VI. In comments provided via e-mail, HHS’s Office of the Assistant Secretary for Legislation stated that the department did not have any general or technical comments on the report. In comments provided via e-mail, SBA’s Office of Congressional and Legislative Affairs stated that the agency had no comments on the draft report and that SBA would work to implement the recommendations. OMB and NIST provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to interested congressional committees; the Secretaries of Agriculture, Commerce, Health and Human Services, Homeland Security, State, and the Treasury; the Administrators of the General Services Administration and Small Business Administration; the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Our objectives were to (1) assess the progress selected agencies have made in implementing the federal “Cloud First” policy and (2) identify challenges selected agencies are facing as they implement the policy. To address our first objective, we first categorized agencies by the size of their information technology (IT) budget: large (more than $3 billion), medium ($1-3 billion), and small (less than $1 billion), as reported in the Office of Management and Budget’s (OMB) fiscal year 2011 Exhibit 53. 
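For illustration, the budget-size categorization described above reduces to a simple bucketing rule. The thresholds are taken from this appendix; the function itself is our sketch, not part of the methodology’s tooling:

```python
def budget_category(it_budget_billions: float) -> str:
    """Bucket an agency by annual IT budget, per the report's thresholds:
    large (more than $3 billion), medium ($1-3 billion), small (less than $1 billion)."""
    if it_budget_billions > 3:
        return "large"
    if it_budget_billions >= 1:
        return "medium"
    return "small"
```

For example, `budget_category(5.8)` returns "large", while `budget_category(0.5)` returns "small".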
We then selected agencies from each budget category to include (1) a mix of services (e.g., e-mail, collaboration, and website hosting) that agencies had proposed moving to the cloud and (2) agencies that were cited by OMB as having successfully implemented a cloud solution. Seven agencies were selected: the Departments of Agriculture (USDA), Health and Human Services (HHS), Homeland Security (DHS), State, and the Treasury; and the General Services Administration (GSA) and the Small Business Administration (SBA). We analyzed documentation from the selected agencies, including project plans and progress reports, which described the actions agencies have taken to migrate services to a cloud solution. We also compared agencies’ migration plans to OMB’s associated guidance to determine any variances. We interviewed officials responsible for implementing the cloud solutions to determine how the services were selected and migrated. Finally, we interviewed officials from the National Institute of Standards and Technology (NIST) and OMB to understand cloud computing standards, requirements, and guidance for federal agencies. To address our second objective, we interviewed officials from each of the selected agencies and asked them to describe challenges associated with their implementation of cloud solutions. Because of the open-ended nature of our discussions with agency officials, we conducted a content analysis of the information we received in order to identify and categorize common challenges. To do so, two team analysts independently reviewed and drafted a series of challenge statements based upon each agency’s records. They then worked together to resolve any discrepancies, choosing to report on challenges that were identified by two or more agencies. These common challenges were presented in the report. 
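The reporting rule described above, keeping only challenges identified by two or more agencies, amounts to a frequency filter over the coded statements. A minimal sketch, using hypothetical agency names and challenge labels:

```python
from collections import Counter

def common_challenges(challenges_by_agency, min_agencies=2):
    """Return, sorted, the challenges cited by at least `min_agencies` agencies."""
    counts = Counter()
    for challenges in challenges_by_agency.values():
        counts.update(set(challenges))  # count each agency at most once
    return sorted(c for c, n in counts.items() if n >= min_agencies)

# Hypothetical coded challenge statements from three agencies:
coded = {
    "Agency A": ["security requirements", "guidance"],
    "Agency B": ["security requirements", "vendor certification"],
    "Agency C": ["guidance", "security requirements"],
}
print(common_challenges(coded))  # prints ['guidance', 'security requirements']
```

Here "vendor certification" is dropped because only one agency cited it, matching the methodology’s choice to report only challenges raised independently by two or more agencies.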
Finally, we compared the challenges to OMB’s Federal Cloud Computing Strategy and the Chief Information Officers Council’s and Chief Acquisition Officers Council’s cloud computing guidance to determine the extent to which they were addressed. We conducted this performance audit from October 2011 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information on the services the selected federal agencies chose to migrate to cloud solutions. Specifically, this appendix includes a brief description of the three cloud services, as well as the service model, deployment model, and Federal Information Security Management Act of 2002 (FISMA) security level. Data Center Services: DHS is implementing a private cloud within two of its data centers to enhance sharing sensitive information across the department. The private cloud encompasses multiple services (DHS committed to OMB to move eight data center services to a cloud environment) to improve collaboration and information sharing within the department. Employment Verification: This is a free service that workers can use to confirm employment eligibility in the United States. This service is to provide a mechanism by which DHS can validate identity, and control secure access to employment information. Website Hosting: This service is intended to host DHS public-facing websites and offer an enterprise content delivery capability with 100 percent availability and provide a web content management capability to manage the content across all of the public-facing websites that reside within the public cloud offering. 
Correspondence Tracking: This service is to allow the Federal Acquisition Service to track communication with Congress from when correspondence is received to the final processing of a response. GSA’s E-mail and Collaboration Solution: This cloud solution is to replace GSA’s legacy e-mail system, and is to provide faster upgrades and improved customer service to approximately 17,000 users. In addition, this service is a critical component of GSA’s mobile technology strategy. IT Power Management Services: This new functionality manages the power settings for more than 17,000 GSA workstations and the Office of the Chief Information Officer’s infrastructure servers. GSA estimates that it will reduce the carbon footprint by over 4.8 million carbon pounds a year by turning off computers every evening. Medwatch+: This service is to provide a web portal for reporting public safety information, as well as information by drug and biological product. This effort is comprised of three private cloud services: Safety Reporting, Device Adverse Reporting, and Drugs and Biologics Adverse Reporting. The first two services are already in cloud environments and the third is being migrated. Grants Solutions: This suite of services is available to federal agencies and grantee/applicant organizations. These services cover 14 grant award processes for federal agencies and grantee/applicant organizations through GrantSolutions.gov. Audit Resolution Tracking Management System: This “proof-of- concept” was designed to replace the Administration for Children and Families’ legacy Audit Resolution system. This service linked audit reports with the appropriate grantees and was expected to reduce hosting costs. Collaboration Services (Management and Technical Assistance Line of Business): This service is intended to encourage small business owners and small business lenders to take advantage of SBA programs, services, and loan options. 
Human Resources (Performance Management): This new service is to provide tools for training and performance management while reducing annual infrastructure costs. LAN/WAN, Offsite Vaulting: This is to provide online backup and recovery capabilities; and electronic vaulting for records retention. Electronic Library: This is to provide domestic and overseas agency staff with direct access to information in over 50 databases. The cloud solution is to add additional functionality including online, self-service resource check-in, check-out, and other library requests; regionalized and issue-driven electronic information portals; and an integrated electronic catalog with other online libraries. Program Management: This service is to provide program managers of the Nonproliferation and Disarmament Fund access to agency data from any location. Website Hosting: This is to provide access to keyword-searchable and downloadable government documents, unclassified publications, and databases regarding the history of State, diplomacy, and foreign relations. Business Process Management: This is to automate the Bureau of Engraving and Printing’s processes for manufacturing, financial management, acquisition, and supply chains. Document Management and Freedom of Information Act Case Management: This service is to provide the agency capabilities such as electronic capture, store, search/analyze, share, and document management. Website Hosting: This service is to provide a flexible, scalable architecture for the department’s main website and four additional websites. Collaboration Services (USDA Connect): This service is to increase interagency interaction, productivity, and efficiency by providing tools such as Profiles, Wikis, Blogs, Communities, Activities, Files, and Bookmarks for over 107,000 USDA users. 
Document Management and Correspondence Tracking: This is to eliminate redundancy and increase efficiency by consolidating over 20 systems into a single cloud-based customer relationship management environment to organize customer information and track correspondence throughout the agency.
E-mail: This is to provide e-mail service for over 120,000 inboxes and enhanced agencywide collaboration through e-mail, instant messaging, web conferencing, and a global address list.
In addition to the individual named above, the following staff also made key contributions to the report: Deborah Davis (assistant director), Shannin O’Neill (assistant director), Nancy Glover, Sandra Kerr, Emily Longcore, Andrew Stavisky, and Kevin Walsh. | As part of a comprehensive effort to increase the operational efficiency of federal technology assets, federal agencies are shifting how they deploy IT services. OMB issued a Cloud First policy in December 2010 that requires federal agencies to implement cloud-based solutions whenever a secure, reliable, and cost-effective cloud option exists; and to migrate three technology services to a cloud solution by June 2012. Cloud computing provides on-demand access to a shared pool of computing resources; can be provisioned on a scalable basis; and reportedly has the potential to deliver services faster, more efficiently, and at a lower cost than custom-developed systems. GAO was asked to (1) assess the progress selected agencies have made in implementing this policy and (2) identify challenges they are facing in implementing the policy. To do so, GAO (1) selected seven agencies, analyzed agency documentation, and interviewed agency and OMB officials; and (2) identified, assessed, and categorized common challenges. The selected federal agencies have made progress implementing the Office of Management and Budget’s (OMB) Cloud First policy.
Consistent with this policy, each of the seven agencies incorporated cloud computing requirements into their policies and processes. For example, one agency had incorporated a review of its information technology (IT) investment portfolio to identify candidates for a cloud solution into its IT plan. Further, each of the seven agencies met the OMB deadlines to identify three cloud implementations by February 2011 and to implement at least one service by December 2011. However, two agencies do not plan to meet OMB’s deadline to implement three services by June 2012, but plan to do so by calendar year end, ranging from August to December. Each of the seven agencies has also identified opportunities for future cloud implementations, such as moving storage and help desk services to a cloud environment. While each of the seven agencies submitted plans to OMB for implementing the cloud solutions, all but one of the plans were missing key required elements. For example, 7 of the 20 plans did not include estimated costs and none of the plans for services that were to migrate existing functionality to a cloud-based service included plans for retiring or repurposing the associated legacy systems. According to agency officials, this was largely because the information was not available at the time the plans were developed. Until agencies’ cloud implementations are sufficiently planned and relevant systems are retired, the benefits of federal efforts to implement cloud solutions—improved operational efficiencies and reduced costs—may be delayed or not fully realized. GAO identified seven common challenges associated with the implementation of OMB’s Cloud First policy.
Common Challenges to Cloud Computing
1. Meeting federal security requirements
2. Obtaining guidance
3. Acquiring knowledge and expertise
4. Certifying and accrediting vendors
5. Ensuring data portability and interoperability
6. Overcoming cultural barriers
7. Procuring services on a consumption (on-demand) basis
Recently issued federal guidance and initiatives recognize many of these challenges, such as the National Institute of Standards and Technology standards and guidance, and the General Services Administration’s program to assist federal agencies in certifying and accrediting potential cloud service providers. GAO is making recommendations to seven agencies to develop key planning information, such as estimated costs and legacy IT systems retirement plans for existing and planned services. The agencies generally agreed with GAO’s recommendations. State disagreed with one recommendation, noting that legacy retirement plans were not applicable to its existing cloud services. GAO maintains that the recommendation is applicable for reasons discussed in this report. |
Leading organizations engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future organizational capacity. As part of this approach, these organizations identify, develop, and select successors who are the right people, with the right skills, at the right time for leadership and other key positions. We identified specific succession planning and management practices that agencies in Australia, Canada, New Zealand, and the United Kingdom are implementing that reflect this broader focus on building organizational capacity. Collectively, these agencies’ succession planning and management initiatives demonstrated the following six practices. 1. Receive Active Support of Top Leadership. Effective succession planning and management initiatives have the support and commitment of their organizations’ top leadership. In other governments and agencies, to demonstrate its support of succession planning and management, top leadership actively participates in the initiatives. For example, each year the Secretary of the Cabinet, Ontario Public Service’s (OPS) top civil servant, convenes and actively participates in a 2-day succession planning and management retreat with the heads of every government ministry. At this retreat, they discuss the anticipated leadership needs across the government as well as the individual status of about 200 high-potential executives who may be able to meet those needs over the next year or two. Top leadership also demonstrates its support of succession planning and management when it regularly uses these programs to develop, place, and promote individuals. The Royal Canadian Mounted Police’s (RCMP) senior executive committee regularly uses the agency’s succession planning and management programs when making decisions to develop, place, and promote its top 500-600 employees, both officers and civilians. 
The RCMP’s executive committee, consisting of the agency’s chief executive, the chief human capital officer, and six other top officials, meets quarterly to discuss the organization’s succession needs and to make the specific decisions concerning individual staff necessary to address those needs. Lastly, top leaders demonstrate support by ensuring that their agency’s succession planning and management initiatives receive sufficient funding and staff resources to operate effectively and are maintained over time. Such commitment is critical since these initiatives can be expensive because of the emphasis they place on participant development. For example, a senior human capital manager told us that the Chief Executive of the Family Court of Australia (FCA) pledged to earmark funds when he established a multiyear succession planning and management program in 2002 despite predictions of possible budget cuts facing FCA. Similarly, at Statistics Canada—the Canadian federal government’s central statistics agency—the Chief Statistician of Canada has set aside a percentage, in this case over 3 percent, of the total agency budget for training and development, thus making resources available for the operation of the agency’s four leadership and management development programs. According to a human capital official, this strong support has enabled the level of funding to remain fairly consistent over the past 10 years. 2. Link to Strategic Planning. Leading organizations use succession planning and management as a strategic planning tool that focuses on current and future needs and develops pools of high-potential staff in order to meet the organization’s mission over the long term. Succession planning and management initiatives focus on long-term goals, are closely integrated with their strategic plans, and provide a broader perspective. For example, Statistics Canada considers the human capital required to achieve its strategic goals and objectives.
During the 2001 strategic planning process, the agency’s planning committees received projections showing that a majority of the senior executives then in place would retire by 2010, and the number of qualified assistant directors in the executive development pool was insufficient to replace them. In response, the agency increased the size of the pool and introduced a development program of training, rotation, and mentoring to expedite the development of those already in the pool. For RCMP, succession planning and management is an integral part of the agency’s multiyear human capital plan and directly supports its strategic needs. It also provides the RCMP Commissioner and his executive committee with an organizationwide picture of current and developing leadership capacity across the organization’s many functional and geographic lines. To achieve this, RCMP constructed a “succession room”—a dedicated room with a graphic representation of current and potential job positions for the organization’s top 500-600 employees covering its walls—where the Commissioner and his top executives meet at least four times a year to discuss succession planning and management for the entire organization. 3. Identify Talent from Multiple Organizational Levels, Early in Careers, or with Critical Skills. Effective succession planning and management initiatives identify high-performing employees from multiple levels in the organization and still early in their careers. RCMP has three separate development programs that identify and develop high-potential employees at several organizational levels. For example, beginning at entry level, the Full Potential Program reaches as far down as the front-line constable and identifies and develops individuals, both civilians and officers, who demonstrate the potential to take on a future management role. 
For more experienced staff, RCMP’s Officer Candidate Development Program identifies and prepares individuals for increased leadership and managerial responsibilities and to successfully compete for admission to the officer candidate pool. Finally, RCMP’s Senior Executive Development Process helps to identify successors for the organization’s senior executive corps by selecting and developing promising officers for potential promotion to the senior executive levels. The United Kingdom’s Fast Stream program targets high-potential individuals early in their civil service careers as well as recent college graduates. The program places participants in a series of jobs designed to provide experiences, each of which is linked to strengthening specific competencies required for admission to the Senior Civil Service. According to a senior program official, program participants are typically promoted quickly, attaining midlevel management in an average of 3.5 years, and the Senior Civil Service in about 7 years after that. In addition, leading organizations use succession planning and management to identify and develop knowledge and skills that are critical in the workplace. For example, Transport Canada estimated that 69 percent of its safety and security regulatory employees, including inspectors, would be eligible for retirement by 2008. Faced with the urgent need to capture and pass on the inspectors’ expertise, judgment, and insights before they retire, the agency embarked on a major knowledge management initiative in 1999 as part of its succession planning and management activities. To assist this knowledge transfer effort, Transport Canada encouraged these inspectors to use human capital flexibilities including preretirement transitional leave, which allows employees to substantially reduce their workweek without reducing pension and benefits payments.
The Treasury Board of Canada Secretariat, a federal central management agency, found that besides providing easy access to highly specialized knowledge, this initiative ensures a smooth transition of knowledge from incumbents to successors. 4. Emphasize Developmental Assignments in Addition to Formal Training. Leading succession planning and management initiatives emphasize developmental or “stretch” assignments for high-potential employees in addition to more formal training components. These developmental assignments place staff in new roles or unfamiliar job environments in order to strengthen skills and competencies and broaden their experience. For example, in Canada’s Accelerated Executive Development Program (AEXDP), developmental assignments form the cornerstone of efforts to prepare senior executives for top leadership roles in the public service. These assignments help enhance executive competencies by having participants perform work in areas that are unfamiliar or challenging to them in any of a large number of agencies throughout the Canadian Public Service. For example, a participant with a background in policy could develop his or her managerial competencies through an assignment to manage a direct service delivery program in a different agency. One challenge sometimes encountered with developmental assignments in general is that executives and managers resist letting their high-potential staff leave their current positions to move to another organization. Agencies in other countries have developed several approaches to respond to this challenge. For example, once individuals are accepted into Canada’s AEXDP, they are employees of, and paid by, the Public Service Commission, a central agency. 
Officials affiliated with AEXDP told us that not having to pay participants’ salaries makes executives more willing to allow talented staff to leave for developmental assignments and fosters a governmentwide, rather than an agency-specific, culture among the AEXDP participants. 5. Address Specific Human Capital Challenges, Such as Diversity, Leadership Capacity, and Retention. Leading organizations stay alert to human capital challenges and respond accordingly. Government agencies around the world, including in the United States, are facing challenges in the demographic makeup and diversity of their senior executives. Achieve a More Diverse Workforce. Leading organizations recognize that diversity can be an organizational strength that contributes to achieving results. For example, the United Kingdom’s Cabinet Office created Pathways, a 2-year program that identifies and develops senior managers from ethnic minorities who have the potential to reach the Senior Civil Service within 3 to 5 years. This program is intended to achieve a governmentwide goal to double (from 1.6 percent to 3.2 percent) the representation of ethnic minorities in the Senior Civil Service by 2005. Pathways provides executive coaching, skills training, and the chance for participants to demonstrate their potential and talent through a variety of developmental activities such as projects and short-term work placements. Maintain Leadership Capacity. Both at home and abroad, a large percentage of senior executives will be eligible to retire over the next several years. Canada is using AEXDP to address impending retirements of assistant deputy ministers—one of the most senior executive-level positions in its civil service. As of February 2003, for example, 76 percent of this group are over 50, and approximately 75 percent are eligible to retire between now and 2008. 
A recent independent evaluation of AEXDP by an outside consulting firm found the program to be successful and concluded that AEXDP participants are promoted in greater numbers than, and at a significantly accelerated rate over, their nonprogram counterparts. Increase Retention of High-Potential Staff. Canada’s Office of the Auditor General (OAG) uses succession planning and management to provide an incentive for high-potential employees to stay with the organization and thus preserve future leadership capacity. Specifically, OAG identified increased retention rates of talented employees as one of the goals of the succession planning and management program it established in 2000. Over the program’s first 18 months, annualized turnover in OAG’s high-potential pool was 6.3 percent compared to 10.5 percent officewide. An OAG official told us that the retention of members of this high-potential pool was key to OAG’s efforts to develop future leaders. 6. Facilitate Broader Transformation Efforts. Effective succession planning and management initiatives provide a potentially powerful tool for fostering broader governmentwide or agencywide transformation by selecting and developing leaders and managers who support and champion change. For example, in 1999, the United Kingdom launched a wide-ranging reform program known as Modernising Government, which focused on improving the quality, coordination, and accessibility of the services government offered to its citizens. Beginning in 2000, the United Kingdom’s Cabinet Office started a process, which continues today, of restructuring the content of its leadership and management development programs to reflect this new emphasis on service delivery.
For example, the Top Management Programme supports senior executives in developing behavior and skills for effective and responsive service delivery, and provides the opportunity to discuss and receive expert guidance in topics, tools, and issues associated with the delivery and reform agenda. These programs typically focus on specific areas that have traditionally not been emphasized for executives, such as partnerships with the private sector and risk assessment and management. Preparing future leaders who could help the organization successfully adapt to recent changes in how it delivers services is one of the objectives of the FCA’s Leadership, Excellence, Achievement, Progression program. Specifically, over the last few years FCA has placed an increased emphasis on the needs of external stakeholders. This new emphasis is reflected in the leadership capabilities FCA uses when selecting and developing program participants. The program provides participants with a combination of developmental assignments and formal training opportunities that place an emphasis on areas such as project and people management, leadership, and effective change management. | Leading public organizations here and abroad recognize that a more strategic approach to human capital management is essential for change initiatives that are intended to transform their cultures. To that end, organizations are looking for ways to identify and develop the leaders, managers, and workforce necessary to face the array of challenges that will confront government in the 21st century. The Subcommittee on Civil Service and Agency Organization, House Committee on Government Reform, requested GAO to identify how agencies in four countries--Australia, Canada, New Zealand, and the United Kingdom--are adopting a more strategic approach to managing the succession of senior executives and other public sector employees with critical skills. 
As part of a reexamination of what the federal government should do, how it should do it, and in some cases, who should be doing it, it is important for federal agencies to focus not just on the present but also on future trends and challenges. Succession planning and management can help an organization become what it needs to be, rather than simply to recreate the existing organization. Leading organizations go beyond a succession planning approach that focuses on simply replacing individuals and engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future organizational capacity. As part of this broad approach, these organizations identify, develop, and select successors who are the right people, with the right skills, at the right time for leadership and other key positions. Governmental agencies around the world anticipate the need for leaders and other key employees with the necessary competencies to successfully meet the complex challenges of the 21st century. To this end, the experiences of agencies in Australia, Canada, New Zealand, and the United Kingdom can provide insights to federal agencies, many of which have yet to adopt succession planning and management initiatives that adequately prepare them for the future. |
The base of the federal corporate income tax includes net income from business operations (receipts, minus the costs of purchased goods, labor, interest, and other expenses). It also includes net income that corporations earn in the form of interest, dividends, rent, royalties, and realized capital gains. The statutory rate of tax on net corporate income ranges from 15 to 35 percent, depending on the amount of income earned. The United States taxes the worldwide income of domestic corporations, regardless of where the income is earned, with a foreign tax credit for certain taxes paid to other countries. The timing of the tax liability depends on several factors. For example, income earned not by the domestic corporation, but by a foreign subsidiary is generally not taxed until a distribution—such as a dividend—is made to the U.S. corporation. At about $242 billion, corporate income taxes are far smaller than the $845 billion in social insurance taxes and $1.1 trillion in individual income taxes that the Office of Management and Budget (OMB) estimates were paid in fiscal year 2012 to fund the federal government. Figure 1 shows the relative distribution of federal taxes. Figures 1 and 2 show the trend in corporate income tax revenues since 1950. According to tax experts, corporate income tax revenues fell from the 1960s to the early 1980s for several reasons. For example, corporate income became a smaller share of gross domestic product (GDP) during these years, partly due to the fact that corporate debt, and therefore deductible interest payments, increased relative to corporate equity, reducing the tax base. In addition, tax expenditures, such as more generous depreciation rules, also grew over that period. Since the early 1980s, the corporate income tax has accounted for about 6 to 15 percent of federal revenue. Consequently, although not the largest, it remains an important source of federal revenue.
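The relative sizes quoted above can be turned into rough shares with simple arithmetic. The sketch below is illustrative and uses only the three OMB figures named in the text, so the shares are of those three sources alone, not of total federal receipts (which also include excise taxes, customs duties, and other revenue).

```python
# FY2012 OMB estimates quoted above, in billions of dollars.
# Other receipts are excluded, so these are shares of the three named
# categories only, not of total federal revenue.
receipts = {
    "corporate income taxes": 242,
    "social insurance taxes": 845,
    "individual income taxes": 1100,
}

total = sum(receipts.values())  # 2,187
shares = {name: round(100 * amount / total, 1) for name, amount in receipts.items()}

# The corporate share of these three sources works out to about 11 percent,
# consistent with the 6 to 15 percent range of federal revenue cited above.
```

Even on this deliberately narrow base, the corporate income tax is the smallest of the three sources, which is the point figure 1 makes.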
Relative to GDP, the corporate income tax has ranged from a little over 1 percent to just under 2.7 percent during those same years, as shown in figure 2. The Congressional Budget Office (CBO) recently projected that despite the recent uptick, corporate income tax revenue for the next 10 years as a percentage of GDP is expected to stay within this same range. Businesses operating as publicly traded corporations in the United States are required to report the income they earn and the expenses (including taxes) they incur each year according to two separate standards. First, they must produce financial statements in accordance with generally accepted accounting principles (GAAP), based on standards established by the Financial Accounting Standards Board. Income and expense items reported in these statements are commonly known as book items. Second, in general, domestic corporations, including publicly traded corporations, must file corporate income tax returns on which they report income, expenses, and tax liabilities according to rules set out in the Internal Revenue Code (IRC) and associated Department of Treasury regulations. The measurement of business net income is inherently difficult and some components of both tax and book net income are estimates subject to some imprecision. (Net income equals total income minus expenses.) One important source of imprecision is the difficulty of measuring costs associated with the use of capital assets. Both book and tax depreciation rules allocate capital costs over the expected useful lives of different types of assets. The actual useful life of specific assets may differ from the expected lives used for purposes of either book or tax depreciation. In financial statements, income tax expense includes the estimated future tax effects attributable to temporary differences between book and tax income.
Prior to 2004, corporations were required to reconcile their book net income with tax net income on Schedule M-1 of their income tax returns by comparing the book and tax return amounts of a limited number of income and expense items. Concern over the growing difference observed between pretax book net income and tax net income and the lack of detail available from the Schedule M-1 on the sources of these differences led to the development of the more extensive reporting on book-tax differences that is now required on Schedule M-3. One important concern with Schedule M-1 arose from the fact that GAAP governing which components of large multinational corporate groups need to be included in financial statements differ from tax rules that specify which of those components need to be included in consolidated tax returns. Consequently, the financial statement data that taxpayers reported on their M-1s could relate to a much different business entity from the one covered by the tax return. A Schedule M-3 filer is now required to report the worldwide income of the entity represented in its financial statements and then follow a well-defined series of steps—subtracting out income and losses of foreign and U.S. entities that are included in the financial statements but not in consolidated tax returns; adding in the income and losses of entities that are included in consolidated tax returns but not in financial statements; and making other adjustments to arrive at the book income of tax-includible entities. The Schedule M-3 also requires filers to report many more specific income and expense items according to both financial statement and tax rules than the M-1 required. The items causing the largest book-tax differences are identified later in this report. (See app. II for a copy of the Schedule M-3.) Effective tax rates on corporate income can be defined in a variety of ways, each of which provides insights into a different issue.
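The Schedule M-3 Part I reconciliation steps described above reduce to simple arithmetic. The amounts and variable names in the sketch below are hypothetical, chosen only to illustrate the sequence of steps; they are not Schedule M-3 line captions or figures from the report.

```python
# Hypothetical amounts, in millions of dollars.
worldwide_book_income = 1_000       # net income per the consolidated financial statement

# Step 1: subtract income (or losses) of foreign and U.S. entities that are
# in the financial statement but not in the consolidated tax return.
nonincludible_entity_income = 300

# Step 2: add income (or losses) of entities that are in the consolidated
# tax return but not in the financial statement.
includible_only_income = 50

# Step 3: make other adjustments to arrive at the book income of the
# tax-includible entities.
other_adjustments = -20

book_income_of_tax_group = (
    worldwide_book_income
    - nonincludible_entity_income
    + includible_only_income
    + other_adjustments
)
```

Under these assumed amounts the book income of the tax-includible entities is 730, and it is this figure, not the worldwide amount, that the filer then reconciles to tax net income on the rest of the schedule.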
These rates fall into two broad categories—average rates and marginal rates. An average corporate effective tax rate, which is the focus of this report, is generally computed as the ratio of taxes paid or tax liabilities accrued in a given year over the net income the corporation earned that year; it is a good summary of the corporation’s overall tax burden on income earned during that particular period. “Burden” in this context refers to what the corporation remits to the Treasury, also called statutory burden. However, statutory burden may differ from economic burden, which measures the loss of after-tax income due to a tax. The economic burden of some or all of the taxes on a corporation may be shifted to the firm’s customers or workers, as well as to other firms and other workers. Any remaining burden is borne by the corporation’s shareholders or other owners of capital. A marginal effective tax rate focuses on the tax burden associated with a specific investment (usually over the full life of that investment) and is a better measure of the effects that taxes have on incentives to invest. Effective rates differ from statutory tax rates in that they attempt to measure taxes paid as a proportion of economic income, while statutory rates indicate the amount of tax liability (before any credits) relative to taxable income, which is defined by tax law and reflects tax benefits and subsidies built into the law. The statutory tax rate of 35 percent applying to most large U.S. corporations is sometimes referred to as the “headline rate,” because it is the rate most familiar to the public. Until recently, data constraints have inhibited comparisons of effective tax rate estimates based on the alternative reporting systems. Access to tax return data is tightly restricted by law; consequently, most researchers who have estimated average effective tax rates for U.S. 
corporations have used either firm-level or aggregated data compiled from corporate financial statements for their measures of both tax liability and income. Even those with access to tax data could not easily determine how effective tax rates based on financial statements would differ from those based on actual tax returns because, as noted above, the scope of the business entity represented in a corporation’s financial statement can be quite different from that covered by its consolidated federal tax return. Researchers with access to data from Schedule M-3 and other parts of corporate income tax returns will now be able to directly compare effective tax rates based on the different data sources for a consistent population of large corporate income taxpayers, as we do in the following section. The two essential components of a methodology for estimating an average effective tax rate are the measure of tax liabilities to be used as the numerator of the rate and the measure of income to be used as the denominator. A common measure of tax liability used in estimates based on financial statement data has been the current tax expense—either federal only or worldwide (which comprises federal, foreign, and U.S. state and local income taxes); however, some studies have used the total tax expense, and others have used cash taxes paid during the year. Corporations that filed Schedules M-3 for tax year 2010 reported a total of $185 billion in current U.S. federal income tax expense and $225 billion in total federal income tax expense, compared to the total of $187 billion in actual tax paid after credits that they reported owing IRS for that year. The data from IRS do not include a measure of cash taxes paid. The typical measure of income for effective tax rate estimates based on financial statements has been some variant of pretax net book income.
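The distinction drawn above between statutory and effective rates can be seen even before any credits or book-tax differences enter: graduated rates apply bracket by bracket, so the average rate on taxable income sits below the top bracket rate. The sketch below uses an illustrative graduated schedule; the brackets are hypothetical and are not the actual Internal Revenue Code schedule, which also contains surtax phase-out ranges.

```python
# Illustrative graduated rate schedule: (upper bound of bracket, rate).
# Hypothetical brackets for demonstration, not the actual IRC schedule.
BRACKETS = [(50_000, 0.15), (75_000, 0.25), (10_000_000, 0.34), (float("inf"), 0.35)]

def tax_before_credits(taxable_income: float) -> float:
    """Tax computed bracket by bracket under the schedule above."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income > lower:
            # Only the slice of income falling inside this bracket is
            # taxed at this bracket's rate.
            tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

income = 1_000_000.0
tax = tax_before_credits(income)
average_rate = tax / income  # below the 0.34 rate that applies at the margin
```

Bracket averaging on its own produces only a small gap; the much larger gaps reported in this section arise from differences between taxable income and book income, not from the rate schedule.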
Figure 3 shows the value of this book income measure for corporations that filed Schedules M-3 for tax year 2010 and shows the separate values for profitable and unprofitable filers. Profitable filers had aggregate pretax net book income of $1.4 trillion while unprofitable filers had losses totaling $315 billion, resulting in total net book income of $1.1 trillion for the full population. As these numbers suggest, average effective tax rates can vary significantly depending on the population of corporations covered by the estimate. The inclusion of unprofitable firms, which pay little if any actual tax, can result in relatively high estimates because the losses of unprofitable corporations greatly reduce the denominator of the effective rate. Such estimates do not accurately represent the tax rate on the profitable corporations that actually pay the tax. Some prior studies have excluded unprofitable corporations; others have not. Figure 3 also shows the value of two income measures defined by tax rules for the same population of taxpayers. The first measure, income (loss) before net operating loss deductions and special deductions, is the tax return measure to which Schedule M-3 filers are required to reconcile their net book income (we refer to this measure as net tax income). It represents total income minus all deductions, except for losses carried over from other tax years and the special deductions relating to intercorporate dividends. The positive values of this measure for profitable filers, negative values for unprofitable filers, and net value for all filers are all of a lower magnitude relative to book net income. The second measure shown in figure 3 is taxable income, which equals net tax income minus losses carried over from other years and special deductions. Taxable income is higher than tax net income for the full population of Schedule M-3 filers, even after the additional deductions, because it is defined to be no less than zero.
As a result, unprofitable filers report taxable income of zero, and their current-year losses do not offset the positive income of profitable filers in the aggregate. For the profitable subpopulation, taxable income is lower than net tax income. For tax year 2010, profitable Schedule M-3 filers actually paid U.S. federal income taxes amounting to 12.6 percent of the worldwide income that they reported in their financial statements (for those entities included in their tax returns). This tax rate is slightly lower than the 13.1 percent rate based on the current federal tax expenses that they reported in those financial statements; it is significantly lower than the 21 percent effective rate based on actual taxes and taxable income, which itself is well below the top statutory rate of 35 percent. This low effective tax rate cannot be explained by income taxes paid to other countries. Even when foreign, state, and local corporate income taxes are included in the numerator, for tax year 2010, profitable Schedule M-3 filers actually paid income taxes amounting to 16.9 percent of their reported worldwide income. All of the effective tax rates based on book income for profitable filers are lower than the equivalent measures computed for all Schedule M-3 filers, shown on the right side of figure 4, because the inclusion of losses reduces the aggregate income for all Schedule M-3 filers. This difference was particularly large for tax year 2009 because the aggregate losses of unprofitable filers were considerably larger in that year than in 2010. Aggregate book losses were even larger for tax year 2008; however, because these losses more than offset the income of profitable corporations, resulting in an overall net loss, we could not compute meaningful average effective tax rates based on book income for all corporations for that year.
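The zero floor on taxable income, noted earlier, can be illustrated with hypothetical filers (the dollar amounts below are invented, not drawn from the data): because each filer's taxable income is computed separately and floored at zero, one filer's current-year loss never offsets another's positive income, so aggregate taxable income can exceed aggregate net tax income even though it is lower for every profitable filer.

```python
# Hypothetical filers' net tax income and carryover/special deductions ($millions).
net_tax_income = [500, 300, -200]   # third filer has a current-year loss
deductions     = [50, 40, 0]        # NOL carryovers and special deductions

# Taxable income is floored at zero filer by filer.
taxable_income = [max(0, nti - ded)
                  for nti, ded in zip(net_tax_income, deductions)]

print(sum(net_tax_income))   # aggregate net tax income: 600
print(taxable_income)        # [450, 260, 0]
print(sum(taxable_income))   # aggregate taxable income: 710, above 600
```

For the two profitable filers alone, taxable income (710) is below their net tax income (800), mirroring the pattern described for the profitable subpopulation.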
With access to only aggregated data, we were not able to provide any information on the distribution of effective rates across individual filers; however, past work we have done suggests that there could be significant variation in effective rates across taxpayers. For example, we have previously reported differences in effective tax rates for different types of corporations, such as U.S.-controlled corporations and foreign-controlled corporations. Past empirical studies comparing average effective tax rates across countries have focused on worldwide taxes (which add foreign and state and local income taxes to federal income taxes in the numerator). Our estimates for these worldwide rates were 2 to 6 percentage points higher than the U.S. federal rates we present above, but the relationships between the different measures (total, current, and actual) within each year remained similar. (See fig. 5.) It is difficult to make close comparisons between our results and estimates from prior studies based on financial statement data because most of the latter estimates are averaged over multiple years for which we have no data. (See fig. 9 in app. I.) Our estimated rates for the full population of filers for tax year 2010 are generally lower than the estimates presented in earlier studies, while our estimated rates for other years are generally higher. As noted above, it can be difficult to compare financial statements with tax returns because entities included under each type of reporting can differ. IRS developed Schedule M-3 Part I to help delineate book-tax differences related to consolidation and to standardize the definition of the financial, or book, income of the tax consolidated group. As shown in figure 6, for tax year 2010 Schedule M-3 filers reported that they earned a total of $1.3 trillion from U.S. and foreign entities that were included in their consolidated financial statements but not in their consolidated tax returns (which, therefore, had to be subtracted out on the Schedule M-3).
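The subtraction just described is one piece of a simple sign pattern in the Part I reconciliation. The sketch below uses invented numbers and a simplified argument list of our own; the actual schedule contains additional line items, such as adjustments for differences between financial statement periods and tax years.

```python
def book_income_of_tax_group(fs_net_income,
                             income_of_excluded_entities,
                             losses_of_excluded_entities,
                             net_income_of_tax_only_entities,
                             net_intercompany_adjustments):
    """Simplified Schedule M-3 Part I reconciliation: start from consolidated
    financial-statement net income and keep only the entities that appear in
    the consolidated tax return. Excluded entities' income is subtracted;
    their losses are also removed, which raises the result."""
    return (fs_net_income
            - income_of_excluded_entities
            + losses_of_excluded_entities
            + net_income_of_tax_only_entities
            + net_intercompany_adjustments)

# Invented example ($millions): a group whose financial statements include a
# profitable excluded foreign subsidiary and a loss-making excluded entity.
print(book_income_of_tax_group(1000, 300, 120, 5, -25))  # 800
```

Note that removing an excluded entity's loss (the 120 above) increases the tax group's book income, which is why the $420 billion in excluded-entity losses reported for 2010 raised, rather than lowered, the reconciled amount.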
They also reported $420 billion in losses from such entities. (These losses also had to be subtracted out, meaning that net income increased by $420 billion.) In contrast, they reported less than $10 billion in either income or losses from entities that are included in their tax returns but not in their financial statements. These corporations also reported $762 billion in positive adjustments and $20 billion in negative adjustments relating to transactions between excluded and included entities. The corporations must also report several other types of adjustments, such as for any difference between the time period covered by their financial statements and the period covered by their tax years, that they make in order to arrive at a final amount that represents the net book income or loss of all of their entities that are included in their tax returns. For tax year 2010, this population of Schedule M-3 filers reported a total of $1.1 trillion in net book income for entities included in their tax returns and a total of $300 billion in losses for such entities. Schedule M-3 Parts II and III report book-tax differences related to income and expenses, respectively, for the tax consolidated group only. The largest category of differences for both income and expense items was “other.” IRS officials told us that their reviews of the detailed documentation that filers are required to submit along with their Schedules M-3 indicate three broad subtypes of reporting in these other categories:
1. Some common income and expense categories have no line of their own on the M-3, so they have to be reported as other. This was the case for research and development expenses prior to 2010; those expenses now have their own line.
2. Taxpayers report miscellaneous items in these categories but do not provide details on what they include.
3. Taxpayers record items in these categories that clearly should have been reported on more specific lines of the M-3.
The officials suggested some taxpayers do this because they do not take the time or trouble to fill out the form properly; others may be trying to hide details from the IRS. As a consequence, there is over-reporting in the two “other” categories and under-reporting in some of the more specific categories. Figures 7 and 8 identify the 10 largest categories of book-tax differences for both income and expense items in tax year 2010. Book-tax differences caused by the inclusion of an income or expense item by one accounting system but not the other are known as permanent differences. One of the largest permanent book-tax income differences reported in tax year 2010 arose from the section 78 gross-up, as shown in figure 7. Section 78 of the IRC requires U.S. corporations electing to claim the foreign tax credit to gross up (i.e., increase) their dividend income by the amount of creditable foreign income taxes associated with the dividends they received. 26 U.S.C. § 78. Relatedly, section 902 of the IRC permits a U.S. corporation that owns at least 10 percent of the voting stock of a foreign corporation to take an indirect credit for foreign income taxes associated with dividends that it receives from that foreign corporation. 26 U.S.C. § 902. Given that corporations are not required to make this type of adjustment for book income purposes, the amount of any gross-up is a permanent positive difference between tax income and book income. In figures 7 and 8, the positive differences represent the sums across all filers with net positive differences in a particular category. Similarly, the negative differences represent the sums across all filers with net negative differences. The magnitudes of some book-tax differences varied significantly between 2006 and 2010. For example, the excess of tax depreciation over book depreciation increased from about $69 billion in 2006 to over $145 billion in 2010. As another example, the excess of tax income over book income relating to the section 78 gross-up increased from about $36 billion in 2006 to over $77 billion in 2010.
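The mechanics of the section 78 gross-up can be shown with a small invented example (all figures below are hypothetical, chosen only to illustrate why the gross-up is a permanent positive difference between tax income and book income):

```python
# Hypothetical foreign subsidiary of a U.S. parent ($millions).
foreign_pretax_earnings = 100
foreign_income_tax = 35
dividend_received = foreign_pretax_earnings - foreign_income_tax  # 65 remitted

# Book income records only the dividend actually received.
book_dividend_income = dividend_received                          # 65

# For tax purposes, a parent claiming the foreign tax credit grosses the
# dividend up by the associated foreign tax (sec. 78); sec. 902 then lets it
# claim an indirect credit for that tax.
tax_dividend_income = dividend_received + foreign_income_tax      # 100

# The gross-up never enters book income, so the difference is permanent.
permanent_difference = tax_dividend_income - book_dividend_income
print(permanent_difference)  # 35
```

Because the gross-up adds to tax income without any corresponding book entry, growth in foreign earnings subject to the credit (as in the $36 billion to $77 billion increase noted above) enlarges this category of difference.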
As the details presented in figures 7 and 8 indicate, the direction of the book-tax differences in all of the income and expense categories varies across corporations. The book amount is greater for some corporations, while the tax amount is greater for others. As a consequence, the aggregate net differences in many categories (shown in tables 1 and 2 in app. III) are significantly smaller than the absolute values of the differences. Moreover, the net difference is positive for some categories and negative for others. The offsetting of negative and positive differences across categories and across corporations within categories means that the relatively small difference between aggregate net book income ($833 billion) and aggregate net tax income ($737 billion) for the population of Schedule M-3 filers for tax year 2010 may hide considerable differences between book and tax income and between effective tax rates based on book income and those based on tax income for individual corporations. Given the aggregate nature of our data, we were not able to examine the range of potential differences across corporations. We provided a draft of this report to IRS on April 25, 2013, for review and comment. After reviewing the draft report, IRS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix V. The past studies presented in figure 9 used financial statement data to estimate average effective tax rates for U.S. corporations, employed pretax worldwide book income as the denominator of their effective rates, and covered at least one tax year since 2001. As indicated in the figure, these studies used a variety of measures of worldwide taxes for their numerator. Five of the estimates were based on data that excluded all corporations with negative book income. Most of the studies reported their results as averages across multiple years. Other recent studies used aggregate measures of tax receipts received by the U.S. Treasury and profits before taxes from the Bureau of Economic Analysis’s (BEA) National Income and Product Account (NIPA) data to estimate average corporate effective tax rates. The BEA profits measure is created from an aggregate income amount using tax data adjusted by two components: the inventory valuation adjustment and the capital consumption adjustment. Due to the aggregate nature of the profits-before-taxes measure, the denominator includes corporations with positive and negative profits before taxes. Another recent study, by Citizens for Tax Justice and the Institute on Taxation and Economic Policy, used financial statement data but focused on the effective rate of the federal income tax on U.S. domestic income, rather than worldwide taxes on worldwide income. They estimated a three-year (2008 to 2010) average effective tax rate of 18.5 percent for a sample of 280 of the largest U.S. corporations.

Appendix II: Copy of IRS Form 1120, Schedule M-3 (Tax Year 2010)

Appendix IV: Descriptions of Income and Expense Items with the Largest Book-Tax Differences

Differences in this category and the one below relating to Form 4797 arise from differences in book and tax reporting of gains or losses arising from the sale or other disposition of business assets.
One such difference arises when accumulated tax depreciation for an asset is higher than accumulated book depreciation, which would make the gain upon sale higher for tax purposes than book purposes. Certain qualified interest income, such as that from municipal bonds, is exempt for tax purposes but must be reported as income in financial statements. Also, some items may be treated as interest income for tax purposes but as some other form of income for financial accounting purposes. Cost of goods sold comprises numerous items, some of which have their own lines on the M-3, like depreciation and stock options expense, and others which do not. Among the differences reported on this line are those relating to differences in inventory accounting. This line includes any difference between the amount of foreign dividends that corporations report on their tax returns and the amounts they report in their financial statements, unless those dividends have already been taxed by the United States. This line relates to differences between the book and tax treatment of any interest owned by the filer or a member of the U.S. consolidated tax group that is treated as an investment in a partnership for U.S. income tax purposes (other than an interest in a disregarded entity). This line relates to differences between the treatment of income and losses from equity investments under financial statement rules and tax accounting rules. See description relating to income statement disposition of assets. Section 902 of the Internal Revenue Code (IRC) permits a U.S. corporation that owns at least 10 percent of the voting stock of a foreign corporation to take an indirect credit for foreign income taxes associated with dividends that it receives from that foreign corporation. Section 78 of the IRC requires U.S.
corporations electing to claim the foreign tax credit to gross up (i.e., increase) their dividend income by the amount of creditable foreign income taxes associated with the dividends they received. This line covers differences in the book and tax reporting of capital gains, other than those arising from partnerships and other pass-through entities. One reason for differences on this line is that certain amounts that are treated as deductions for tax purposes are treated as some other form of expense for financial accounting purposes, or vice versa. For tax purposes a firm depreciates its assets using the modified accelerated cost recovery system method, which allows the write-off of an asset at a much faster rate than straight-line depreciation, the most commonly used method for financial accounting purposes. This category covers differences in book and tax amortization rules for items other than those relating to goodwill or acquisition, reorganization, and start-up costs. Under Generally Accepted Accounting Principles, firms are required to estimate the proportion of sales that will ultimately become uncollectible and expense this amount in the same period as the recognition of the sale in revenue. In contrast, for tax purposes firms must wait until a specific receivable is known to be uncollectible before it can be deducted. Prior to 2002, goodwill was amortized over a maximum of 40 years for book purposes; after 2001 financial accounting changed to the impairment method, whereby goodwill is only written down if it is judged by management and auditors to be impaired. For tax purposes, goodwill was not deductible prior to 1994; since 1994 goodwill must be amortized over 15 years. These differences between book and tax treatments can be either temporary or permanent. This line covers all expenses attributable to any pension plans, profit-sharing plans, or any other retirement plans.
A stock option expense generally is recorded in a financial statement as the estimated fair value of the option over the period of time that the stock option vests. The exercise of the stock option does not affect the corporation’s net book income. In contrast, the IRC recognizes two types of stock options—qualified and nonqualified stock options. Firms cannot take deductions for qualified stock options (unless the stock is held for less than 2 years), although recipients get special beneficial tax treatment. For nonqualified stock options, the firm granting the option can deduct the fair market value when the recipient has an unrestricted right to the property and the fair market value can be reasonably ascertained. This line shows the difference between the amounts of foreign income taxes that corporations report as expenses in their financial statements and the amounts that they claim as deductions for tax purposes. U.S. corporations typically claim foreign tax credits, rather than deductions, for most of the foreign income taxes they pay. Consequently, the book tax expenses typically far exceed the tax deductions. Examples of the types of compensation that taxpayers are required to report on this line are payments attributable to employee stock purchase plans, phantom stock options, phantom stock units, stock warrants, stock appreciation rights, and restricted stock, regardless of whether such payments are made to employees or non-employees, or as payment for property or compensation for services. In addition to the contact named above, James Wozny (Assistant Director), Elizabeth Fan, Robert MacKay, Donna Miller, Karen O’Conor, Max Sawicky, and Andrew J. Stephens made key contributions to this report.

Proponents of lowering the U.S.
statutory corporate tax rate of 35 percent, as well as its average effective tax rate, which equals the amount of income tax corporations pay divided by their pretax income, are high relative to other countries. However, GAO's 2008 report on corporate tax liabilities ( GAO-08-957 ) found that nearly 55 percent of all large U.S.-controlled corporations reported no federal tax liability in at least one year between 1998 and 2005. Given the difficult budget choices Congress faces and its need to know corporations' share of the overall tax burden, GAO was asked to assess the extent to which corporations are paying U.S. corporate income tax. In this report, among other things, GAO (1) defines average corporate ETR and describes the common methods and data used to estimate this rate and (2) estimates average ETRs based on financial statement reporting and tax reporting. To conduct this work, GAO reviewed economic and accounting literature, analyzed income and expense data that large corporations report on the Schedules M-3 that they file with Internal Revenue Service (IRS), and interviewed IRS officials. Effective tax rates (ETR) differ from statutory tax rates in that they attempt to measure taxes paid as a proportion of economic income, while statutory rates indicate the amount of tax liability (before any credits) relative to taxable income, which is defined by tax law and reflects tax benefits and subsidies built into the law. Lacking access to detailed data from tax returns, most researchers have estimated ETRs based on data from financial statements. A common measure of tax liability used in past estimates has been the current tax expense--either federal only or worldwide (which comprises federal, foreign, and U.S. state and local income taxes). The most common measure of income for these estimates has been some variant of pretax net book income. GAO was able to compare book tax expenses to tax liabilities actually reported on corporate income tax returns. 
For tax year 2010 (the most recent information available), profitable U.S. corporations that filed a Schedule M-3 paid U.S. federal income taxes amounting to about 13 percent of the pretax worldwide income that they reported in their financial statements (for those entities included in their tax returns). When foreign and state and local income taxes are included, the ETR for profitable filers increases to around 17 percent. The inclusion of unprofitable firms, which pay little if any tax, also raises the ETRs because the losses of unprofitable corporations greatly reduce the denominator of the measures. Even with the inclusion of unprofitable filers, which increased the average worldwide ETR to 22.7 percent, all of the ETRs were well below the top statutory tax rate of 35 percent. GAO could only estimate average ETRs with the data available and could not determine the variation in rates across corporations. The limited available data from Schedules M-3, along with prior GAO work relating to corporate taxpayers, suggest that ETRs are likely to vary considerably across corporations. GAO does not make recommendations in this report. GAO provided a draft of this report to IRS for review and comment. IRS provided technical comments which were incorporated as appropriate. |
The Cayman Islands is a United Kingdom Overseas Territory located in the Caribbean Sea south of Cuba and northwest of Jamaica, with a total land area approximately 1.5 times the size of Washington, D.C., and a population of 47,862, as seen in figure 1. While geographically small, the Cayman Islands is a major offshore financial center (OFC) with no direct taxes that attracts a high volume of U.S.-related financial activity, often involving institutions rather than individuals. According to Treasury, U.S. investors held approximately $376 billion in Cayman-issued securities at the end of 2006, making it the fifth largest destination for U.S. investment in foreign securities. Although not easily defined, OFCs are generally described as jurisdictions that have a high level of nonresident financial activity, and may have characteristics including low or no taxes, light and flexible regulation, and a high level of client confidentiality. As a major international financial center, the Cayman Islands attracts a high volume of financial activity in sectors related to banking, hedge-fund formation and investment, structured finance and securitization, captive insurance, and general corporate activities. The Cayman Islands is a major international banking center, with nearly $2 trillion in banking assets as of December 2007, according to the Cayman Islands Monetary Authority (CIMA), the jurisdiction’s financial regulatory agency. CIMA reports that as of March 2008, 277 banks were licensed to operate on the island, of which 27 percent were based in the United States. CIMA also reported that 97 percent of the $2 trillion held by these banks as of December 2007 was from institutions rather than individual investors. Treasury statistics indicate that, as of September 2007, U.S. 
banking liabilities to the Cayman Islands were the highest of any foreign jurisdiction at nearly $1.5 trillion, and as of June 2007, banking claims on the Cayman Islands were the second highest (behind the United Kingdom), at $940 billion. The Cayman Islands is also a major domicile for hedge funds. According to CIMA, 9,018 mutual funds were registered in the Cayman Islands in the registered funds category as of the first quarter of 2008, the vast majority of which were hedge funds. Although there is no statutory or universally accepted definition of hedge funds, the term is commonly used to describe pooled investment vehicles that are privately organized and administered by professional managers and that often engage in active trading of various types of securities and commodity futures and options contracts. Private-industry sources cited by the Joint Committee on Taxation estimate that there were approximately $1.5 trillion in assets managed by hedge funds worldwide as of the end of 2006, and approximately 35 percent of funds were organized in the Cayman Islands. Funds organized in the Cayman Islands may be managed in the United States. According to the same source, the United States was by far the leading location for hedge-fund managers, who managed an estimated 65 percent of hedge-fund assets in 2006. In addition to being a prominent domicile for hedge funds, the Cayman Islands also carries out a high volume of structured finance activity. While structured finance can encompass a number of financing strategies, it often involves securitization, the process of pooling similar types of financial assets, such as current or future cash flows from loans, and transforming them into bonds or other debt securities.
Securitization involves isolating a group of assets to serve as the basis of financing that is intended to be legally remote from the bankruptcy risks of the former owner, and is generally designed to move those assets off of the owner’s balance sheets. In the Cayman Islands, asset-backed securitization has been used widely to turn self-liquidating assets, such as receivables from mortgages, into debt securities that can be offered and sold on capital markets. Treasury data show that as of the end of 2006, U.S. investors held more asset-backed securities issued by the Cayman Islands, at about $119 billion, than asset-backed securities issued by any other foreign jurisdiction. The Cayman Islands is also a major domicile for the captive insurance industry. In its basic form, captive insurance is a method by which companies can self-insure against various types of risk rather than purchasing insurance from an insurance company. In a traditional arrangement, a parent company will establish a subsidiary to act as a captive insurer. Other types of captive insurance arrangements exist as well, such as those in which a single captive insures, and is owned by, multiple companies. According to CIMA, the Cayman Islands was home to 760 licensed captive insurance companies as of April 2008, with nearly $34 billion in total assets and $7.6 billion in premiums. Ninety percent of these companies insured risks in North America. Slightly over a third were related to healthcare. Lastly, a wide range of corporate-related activities are carried out in the Cayman Islands. According to the Cayman Islands Registry of Companies, over 80,000 companies were registered in the Cayman Islands as of May 2008. Many of the 18,857 entities registered at Ugland House are U.S.-connected. These entities most frequently involve investment funds and structured finance vehicles. Ugland House, shown in figure 2, is located at 301 South Church Street, George Town, Grand Cayman, Cayman Islands.
It houses the international law firm of Maples and Calder; Maples Corporate Services Limited, a licensed trust company owned by Maples and Calder which provides registered office services to clients of Maples and Calder; and Maples Finance Limited, a licensed trust company and mutual fund administrator owned by Maples and Calder which provides fiduciary and fund administration services. Maples’ business is to facilitate Cayman Islands-based international financial and commercial activity for a clientele of primarily international financial institutions, institutional investors, and corporations. Maples is the only occupant of Ugland House. Maples provides registered office services to companies, using the Ugland House address. A registered office is required by Cayman Islands law for corporations registered in the Cayman Islands. States in the United States have similar statutory requirements. Registered office services include activities such as accepting any service of process or notices, maintenance of certain entity records, and filing of statutory forms, resolutions, notices, returns, or fees. As is the case with many U.S. states’ laws, Cayman Islands law does not require or presume that any other business activity of the corporation occurs at the registered office. Cayman Islands law requires company service providers that establish entities and provide registered office services to adhere to specific Anti-Money Laundering (AML) and Know-Your-Customer (KYC) requirements. For example, as a company service provider, Maples must verify and keep records on the beneficial owners of entities to which it provides services, the purpose of the entities, and the sources of the funds involved. If suspicion arises in relation to any of these types of inquiries, the company service provider is required to make a suspicious activity report (SAR) to the Cayman Islands Financial Reporting Authority (CAYFIN).
Cayman Islands law allows for nominee shareholders and the provision of officers and directors. The use of nominees, though, does not relieve the company service provider from its obligation under Cayman Islands law to know the beneficial owner under AML-KYC rules. In contrast, state laws which govern the creation of corporations in the United States generally do not require company formation agents to collect ownership information on the entities they register. The Cayman Islands has taken steps to restrict the use of bearer shares to obscure ownership or control of an entity. Use of bearer shares in the Cayman Islands is restricted to cases where they are immobilized through deposit with an authorized or recognized custodian who must keep a register of owners and perform the required beneficial ownership verification. According to the Cayman Islands Registrar, as of March 6, 2008, 18,857 active entities used Ugland House as a registered office, and based on the nature of these entities very few have a significant physical presence in the Cayman Islands. As displayed in figure 3, approximately 96 percent of Ugland House entities are exempt companies, exempt limited partnerships, and exempt trusts. Exempted companies are prohibited from trading in the Cayman Islands with any person, firm, or other corporation except in furtherance of their business that is carried on outside the Cayman Islands. Exempted limited partnerships exist under the same criteria and must have at least one general partner that is resident or incorporated in the Cayman Islands. Requirements for exempt trusts are that they must register with the Cayman Islands Registrar and have no beneficiary that is domiciled in or resident of the Cayman Islands. 
A Maples and Calder partner indicated that some exempted companies occasionally maintain minimal sales or marketing staff in the Cayman Islands to facilitate business conducted elsewhere, but most have no staff or facilities in the Cayman Islands and none, except for Maples group companies, is run out of Ugland House. According to Cayman Islands government officials, the domestic trading prohibition on exempted companies and exempted limited partnerships is intended to protect the small domestic market from being flooded by outside competitors. Thus, exempted entities that wish to trade in the local market must receive a special license to do so under the Local Companies (Control) Law. According to Cayman Islands Companies Law, nonresident companies are a category of entity similar to an exempted entity in that neither can conduct business in the Cayman Islands. Foreign companies are organized under the laws of a jurisdiction other than the Cayman Islands, but have chosen to register with the Cayman Islands Registrar to conduct business in the Cayman Islands, such as to become a general partner in a Cayman Islands exempted limited partnership. Finally, less than 1 percent of Ugland House entities are “resident” companies that are registered to conduct their business in the Cayman Islands. According to a Maples and Calder partner, the persons establishing entities at Ugland House are typically referred to Maples by counsel from outside the Cayman Islands, fund managers, and investment banks. A Maples and Calder partner also said that the makeup of entities in Ugland House was reflective of the nature of their business and largely international, institutional client base, and was not necessarily representative of the types of entities registered with other company service providers in the Cayman Islands. According to Maples and Calder partners, their business primarily involves two areas: investment funds and structured finance.
Specifically, they estimated that approximately 38 percent of the Cayman Islands companies and limited partnerships that have a registered office at Ugland House are formed to act as various types of hedge funds or private-equity funds (together referred to as “investment funds”), and generally involve institutional and high-net-worth investors. Approximately 24 percent of entities were formed for structured finance/capital markets and project finance business, such as securitization or aircraft finance, and 38 percent are of a “general corporate” nature. The general corporate business was described as being a “catch-all” category that may involve some overlap with the other two areas of entity formation. Maples and Calder partners explained that their general corporate business involves entities such as trading companies, joint ventures, holding companies, wholly owned subsidiaries, and captive insurance companies. To obtain a more detailed understanding of Maples’ business, we reviewed a total of 133 instances of new business instructions that could have led to the formation of a Cayman Islands entity. These contacts occurred over a period of 2 separate weeks in December 2007 and March 2008. We found that approximately 74 percent of all instructions involved investment-fund-related business. Approximately 17 percent of the instructions involved general corporate business, and approximately 11 percent involved structured finance business. While this business distribution is somewhat different than what Maples and Calder partners estimated, the activity undertaken in these 2 weeks may not be representative of Maples’ registered office business as a whole. Maples and Calder partners commented that activity in the weeks that we reviewed may reflect the recent decline in structured finance work caused by the “credit crunch.” Maples and Calder partners estimated that 5 percent of the overall number of Ugland House entities are wholly owned by U.S. persons.
The partners also said that fewer than 50 percent, likely in the 40 to 50 percent range, of all Ugland House entities are U.S.-related in that their billing address is in the United States. This distribution of relationships is due to the nature of the entities registered in Ugland House. Other than for those entities that are wholly owned or controlled, the concepts of ownership and control are complex for most of the entities registered in Ugland House. According to the partners, because a significant number of Maples’ registered entities are related to structured finance or investment fund transactions, direct ownership or control by a U.S. person is representative of only a small number of entities registered at Ugland House. For example, structured finance entities are not typically carried on a company’s balance sheet, and ownership can be through a party other than the person directing the establishment of the entity, such as a charitable trust, or spread across many noteholders or investors in deals involving securitization. U.S. persons’ involvement with structured finance entities is therefore of a different nature, and may include arranging or participating in deals without clear U.S. ownership or control. Similarly, while investment fund entities are often established, controlled, and managed at the direction of investment managers, such entities are generally established as partnerships and are essentially owned by the fund’s investors. In addition, one investment fund or structured finance transaction can involve more than a dozen separate legal entities, thereby increasing the number and complexity of the relationships involved. For those instances in which Maples and Calder has a U.S. billing address for an Ugland House entity, U.S. involvement often takes the form of providing services to Cayman Islands entities, as opposed to wholly owning or controlling the entity.
For example, the partners explained that many of the recipients of invoices include U.S. investment banks, paying agents, securities trustees, law firms, placement agents, and administrators for private-equity funds and hedge funds. The partners cited as an example of a tenuous connection a situation in which a U.S. bank was the billing address for an Ugland House registered entity established for a Brazilian company to raise funds within Brazil for a Brazilian project. New business instructions received by Maples that we reviewed provided additional detail regarding the type and role of the U.S. persons involved. Among these instructions, approximately 60 percent involved U.S. persons, mostly through managerial, promoter, or advisory roles. Four percent involved U.S. subsidiaries or holding companies. U.S. investment firms were involved in approximately 44 percent of the transactions we reviewed, generally in the role of investment advisor, manager, or promoter. U.S. companies and banks were the second most common type of U.S. persons involved, with U.S. banks frequently directing the establishment of investment-related entities. U.S. persons were participants in a joint venture or were partners in a transaction in approximately 5 percent of the instructions. Maples and Calder partners said that major onshore commercial law firms or in-house legal counsel instruct Maples to form the entities, although we could not verify this in the new business instructions that we reviewed. The partners also said that onshore lawyers advise their clients on all onshore legal, regulatory, and tax issues for their home jurisdictions. The Cayman Islands is a major domicile for global hedge funds. Maples’ investment funds business is largely hedge-fund related, and also includes private-equity funds.
Maples said that their investment fund clients are predominantly large investment banks or investment management firms, or the funds arranged by such firms for institutional and high-net-worth investors. Documentation provided by Maples indicated that persons establishing and investing in investment funds included investment banks, pension funds, insurance companies, and university endowments. According to Maples and Calder partners, Cayman Islands funds are used to facilitate significant investment in the United States by non-U.S. investors. They said that one reason that many non-U.S. investors prefer not to invest directly into the United States is because of perceived litigation risk, and that the ability of U.S. fund managers to manage Cayman Islands funds, therefore, helps U.S. fund managers compete globally. An understanding of the structure and function of hedge funds and private- equity funds provides additional insight into the nature of the entities registered at Ugland House. Hedge funds are private investment funds that are actively traded by a fund manager. Hedge funds are “open ended,” in that investors are generally allowed to invest additional money or redeem shares at designated dates. Maples explained that hedge funds often are composed of a “master-feeder” structure wherein “feeder” fund entities are established that receive subscriptions from different investor groups and invest in a “master fund” entity. The master fund entity is established for holding assets and making investment instructions. In this way, economies of scale can be maximized while allowing for simplified trading and reconciliation of portfolios of the assets invested. According to Maples, when U.S. investors invest in offshore funds in the Cayman Islands, they typically prefer doing so through a “feeder” entity that is formed in a U.S. state such as Delaware. Figure 4 displays a common “master-feeder” hedge-fund structure. 
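The master-feeder structure described above can be sketched in code. This minimal Python model is illustrative only; the fund names, jurisdictions, and subscription amounts are hypothetical, and real structures involve detailed partnership and share-class accounting:

```python
# Illustrative sketch of a master-feeder hedge-fund structure.
# Names, jurisdictions, and amounts are hypothetical.

class MasterFund:
    """Holds the pooled assets and executes trades for all feeders."""
    def __init__(self, name):
        self.name = name
        self.assets = 0.0
        self.feeders = {}

    def accept_subscription(self, feeder_name, amount):
        # Capital from every feeder is pooled into one portfolio.
        self.assets += amount
        self.feeders[feeder_name] = self.feeders.get(feeder_name, 0.0) + amount

class FeederFund:
    """Receives subscriptions from one investor group and invests in the master."""
    def __init__(self, name, jurisdiction, master):
        self.name = name
        self.jurisdiction = jurisdiction
        self.master = master

    def subscribe(self, amount):
        self.master.accept_subscription(self.name, amount)

master = MasterFund("Example Master Fund (Cayman Islands)")
# U.S. taxable investors typically enter through a Delaware feeder;
# non-U.S. and U.S. tax-exempt investors through a Cayman feeder.
us_feeder = FeederFund("Onshore Feeder LP", "Delaware", master)
offshore_feeder = FeederFund("Offshore Feeder Ltd", "Cayman Islands", master)

us_feeder.subscribe(40_000_000)
offshore_feeder.subscribe(60_000_000)

print(master.assets)   # 100000000.0 pooled in the master fund
```

Because all trading happens once, at the master level, on the pooled assets, this arrangement yields the economies of scale and simplified portfolio reconciliation described above.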
As figure 4 depicts, the fund is managed and administered, and fund managers can be U.S. persons. Also, Maples and Calder partners stated that U.S. and non-U.S. brokers/custodians offer services such as centralized securities and trade execution for the fund. The other type of investment entities registered at Ugland House are private-equity funds. In contrast to hedge funds, private-equity funds are generally private funds involving long-term, “closed” investments that do not involve an actively traded portfolio of stocks. Private-equity funds typically make 7- to 10-year concentrated investments in a company and often seek to create value by providing management support or consulting services to the portfolio companies. According to officials from OPIC, one-third to half of the private-equity funds in which it has invested have been organized in the Cayman Islands. According to Maples and Calder, private-equity funds are usually formed as limited partnerships rather than as corporations. Structured finance entities are companies that are formed for a specific and, in some cases, finite purpose. Commonly referred to as Special Purpose Entities (SPEs) or Special Purpose Vehicles (SPVs), these companies can be used in many different types of business transactions. Maples and Calder partners told us that structured finance entities using Ugland House as a registered office are largely related to transactions such as securitization, aircraft finance, and other deals involving isolating risk and raising capital. In the case of SPVs, these transactions generally involve an SPV holding assets of some type, with the SPV being isolated from the bankruptcy risks of the former owner of the assets—typically the “sponsor” of the SPV. Because of this feature of SPVs, they are not generally represented on the sponsor’s balance sheet. According to a 2007 CFATF evaluation, interest in SPVs in the Cayman Islands had increased in the 2 years prior to the report’s issuance.
Maples and Calder partners stated that their clients for these types of entities are often large investment banks and institutions, including many well-known multinational companies. Maples and Calder partners reported that part of their structured finance business involves Structured Investment Vehicles (SIVs), which are SPVs that use structured investments to make a profit from the difference between short-term borrowing and longer-term returns. Unlike some SPVs, SIVs can be established to continue their operations for an indefinite period. SIVs often invest in structured finance products such as asset-backed securities, which include bonds backed by auto loans, student loans, credit card receivables, and mortgage-backed securities. These structures are also used to facilitate major capital inflows from foreign investors into the United States, according to Maples. SIV use in the Cayman Islands originated as the use of structured finance techniques evolved in financial markets, with the first Cayman SIV launched in 1988. These financial instruments received heightened interest following the financial market crisis in 2007 after problems surfaced related to bank-sponsored SIVs. As shown in figure 5, SIVs are sponsored by an institution, such as a bank, and an investment manager is appointed to provide investment advice together with funding and operational support. In addition, the SIV can be underwritten and arranged by an investment bank. As figure 5 depicts, the SIV sponsor, investment manager, and underwriter/arranger can be U.S. persons. The SIV sells notes to investors through a clearinghouse, and investors are paid interest through a trustee and paying agent. Finally, a swap counterparty can enable additional investors to participate in the SIV in a different currency and interest rate than the underlying asset being financed. Figure 5 shows that SIV investors, trustees and paying agents, and swap counterparties can also be U.S. persons.
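The SIV economics described above, earning the spread between longer-term asset yields and short-term funding costs, can be illustrated with a simple calculation. The balances and rates below are hypothetical assumptions, not figures from any actual SIV:

```python
# Hypothetical illustration of an SIV's spread income. Balances and
# rates are invented; real SIVs also manage rolling note maturities,
# mark-to-market risk, fees, and capital cushions.

assets = 1_000_000_000           # longer-term asset-backed securities held
asset_yield_bp = 550             # assumed portfolio yield: 5.50% (basis points)
notes_outstanding = 950_000_000  # short-term notes sold to investors
funding_cost_bp = 480            # assumed rate paid on the notes: 4.80%

annual_income = assets * asset_yield_bp // 10_000               # 55,000,000
annual_funding = notes_outstanding * funding_cost_bp // 10_000  # 45,600,000
spread_profit = annual_income - annual_funding

print(spread_profit)  # 9400000, before fees and credit losses
```

When short-term funding dries up, as in the 2007 credit crunch noted above, the vehicle must roll its notes at a higher cost or sell assets, which is how this spread can turn negative.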
A second type of Maples SPV activity includes transactions involving asset transfer, such as aircraft leasing deals. Maples and Calder partners explained that aircraft financing deals using Ugland House registered structured finance vehicles have involved Boeing, a U.S. airplane manufacturer, as well as a non-U.S. aircraft manufacturer. As shown in figure 6, these deals involve the creation of an SPV whose shares are owned by a Cayman Islands charitable trust, and managed by a company service provider such as Maples Finance Limited. Aircraft involved in the deal are sold by the aircraft manufacturer to the SPV, which then leases the aircraft to the party that will operate the aircraft, such as a government or private entity from another country. The whole transaction is arranged by a third-party financial institution that backs the deal. Over time, the operator of the aircraft makes payments to the SPV while using the aircraft, and within approximately 5 to 8 years the aircraft are effectively paid for and the titles are transferred from the SPV to the aircraft operator. This structure reduces the credit risk involved and enhances the ability of financiers to repossess the aircraft if default occurs. Maples and Calder partners said that the Ex-Im Bank had facilitated aircraft sales involving SPVs registered at the Ugland House address. Ex-Im officials confirmed that the bank has been involved in supporting 42 aircraft financing deals involving the Cayman Islands since 2003, with 24 entities involving Maples as counsel. Ex-Im Bank officials reported that one nonaircraft deal had been conducted involving the Cayman Islands, and that Maples served as counsel to the borrower in that deal. They said that Cayman Islands entities have been used less frequently in U.S. aircraft financing deals since the United States ratified the Cape Town Treaty in 2006.
That treaty established common international protocols and standards for cross-border aircraft financing and leasing. The United Kingdom has not signed this agreement, and as a United Kingdom overseas territory, the Cayman Islands therefore is not party to the agreement. Ex-Im Bank officials said that many structured finance deals involving the lease of U.S. aircraft now utilize other jurisdictions governed by the treaty, such as Delaware. In addition to investment funds and structured finance entities, Maples provides registered office services to general corporate entities such as corporate subsidiaries and holding companies. Maples also establishes trusts, a portion of which are registered. Maples and Calder partners reported that a limited number of their general corporate entities are wholly owned subsidiaries of multinational corporations. Examples of this type of entity with a U.S. connection identified from Maples’ new business instructions that we reviewed include:
- Formation of a company to be a subsidiary of a U.S. company to provide film production services for a film being shot in Romania.
- Formation of a company by a U.S.-based company for the purposes of providing information technology services in Asia.
According to Maples and Calder partners, Cayman Islands holding companies often have been used by businesses in emerging market countries to conduct initial public offerings of shares listed in the United States or Europe. Captive insurance companies are also contained within this general corporate category of Maples’ business, although the number of captive insurance entities registered at Ugland House is relatively low due to the Cayman Islands requirement for captive insurance companies to have a licensed insurance manager located within the Cayman Islands. For this reason, captive insurance companies in the Cayman Islands frequently use the insurance manager’s location as their registered office address.
A portion of Maples’ general corporate business involves the establishment of holding companies. Examples of this type of entity with a U.S. connection that we identified from new business instructions that we reviewed include:
- Formation of an intermediate holding company for a company listed on the New York Stock Exchange with operations in 30 countries.
- Formation of an investment holding company for the Hong Kong arm of a Wall Street bank.
- Formation of two investment holding companies for real estate investments in Eastern Europe to be owned by a private-equity fund managed by a U.S. private-equity fund manager.
Maples and Calder partners said that the formation of holding companies typically involves intermediate limited liability holding companies formed by multinational corporations to isolate risk related to their foreign assets. They said that the formation of personal holding companies was increasingly rare. They also indicated that the holding companies that they typically establish exist at the bottom of a family of corporate structures to hold specific assets, rather than at the top of the pyramid of the corporate family. As the example cases above describe, some holding companies established by Maples are associated with private-equity funds. Lastly, Maples establishes trusts for clients, some of whom choose to have the trusts registered as exempted trusts under Cayman Islands law. Exempted trusts afford official confirmation in the form of a certificate that the trust will remain exempt from any potential future direct taxes that may be imposed by the Cayman Islands for a specified period of up to 50 years. Such certificates are regarded in the market as reflecting the stable status quo as well as providing an additional level of commercial certainty.
A senior Maples and Calder partner said that the clients for their trust business are invariably institutional trustees rather than the settlors of trusts, and mainly consist of banks (U.S. and non-U.S.) serving as trustees for non-U.S. taxpayers in private wealth trusts. He stated that a portion of Maples’ trust business involves private wealth management, and that wealthy individuals in Central and South America and the Middle East establish trusts in other nations such as the Cayman Islands to manage their wealth primarily because their home jurisdictions, not having a common law tradition, have no structure equivalent to a trust. According to Maples and Calder partners, being able to offer Cayman Islands trusts enables major U.S. banks to compete with other major foreign banks for private wealth management and lending business. Because the United States has trusts, U.S. persons rarely seek to establish trusts in the Cayman Islands, according to Maples and Calder partners. Maples and Calder partners also noted that U.S. states such as Delaware tend to service the domestic U.S. trust business. They said that, in addition to private wealth trusts, commercial trusts are sometimes established for Japanese clients as well. U.S. persons who engage in Cayman-based financial activity commonly do so to gain business advantages, including tax advantages under U.S. law. Although such activity is typically legal, some persons have engaged in activity in the Cayman Islands, as in other jurisdictions, in an attempt to avoid detection and prosecution of illegal activity by U.S. authorities. While the Cayman Islands is one of a number of OFCs that attract substantial financial activity from the United States due to tax and other benefits, the Cayman Islands offers a combination of additional factors that may draw U.S. activity.
In particular, the Cayman Islands is generally regarded as having a stable and internationally compliant legal and regulatory system, a business-friendly regulatory environment, and a reputation as a prominent international financial center. First, because the Cayman Islands’ legal and regulatory system is generally regarded as stable and compliant with international standards, U.S. persons looking for a safe jurisdiction in which to place funds and assets may choose to carry out financial transactions there. In particular, Cayman Islands law is based on English common law, which is familiar in the United States due to similarities between the British and U.S. legal systems. The Cayman Islands regulatory regime has also been deemed by the International Monetary Fund to be well-developed and in compliance with a wide range of international standards. Pursuant to a 2007 on-site evaluation, the Caribbean Financial Action Task Force (CFATF) also cited the Cayman Islands as having a strong compliance culture related to anti-money-laundering and terrorist-financing activities. IRS officials cited the Cayman Islands’ reputation for regulatory sophistication as a potential factor in attracting legal financial activity from the United States. U.S. persons may also be drawn to the Cayman Islands because of its business-friendly regulatory environment. Establishing a Cayman Islands entity can be relatively inexpensive. For instance, an exempted company can be created for less than US$600, not taking into account service providers’ fees, and it is not required to maintain its register of shareholders in the Cayman Islands or hold an annual shareholder meeting. Additionally, Cayman government officials noted that the jurisdiction has a public-private sector cooperative approach to regulation and attempts to be responsive to the needs of market participants. For instance, Cayman law requires CIMA to consult with the private sector prior to issuing or amending rules.
The jurisdiction’s responsiveness to market needs led it to adopt the Segregated Portfolio Company (SPC), a type of entity that opened up the captive insurance industry to smaller companies unable to meet minimum reserve levels on their own, but capable of doing so in groups. The Cayman Islands may also attract U.S.-related captive insurance companies because it has lower capital requirements than some U.S. states. Additionally, as reported by Maples and Calder attorneys and U.S. officials, some persons may be attracted to the Cayman Islands to take advantage of specific legal protections for creditors and investors. According to Maples and Calder attorneys, if a Cayman Islands fund or other entity becomes insolvent, Cayman law is generally focused on protecting the interests of creditors and investors. For example, according to Maples and Calder, Cayman law differs from U.S. bankruptcy law in that it provides no moratoria on secured-creditor action against a debtor company. Officials from OPIC report that, as an investor, it is important to OPIC that the private-equity funds it invests in be organized in a jurisdiction with strong legal protections for creditors, such as the Cayman Islands. According to them, nearly half of the funds with which OPIC has been involved were organized in the Cayman Islands. Similarly, officials from the Ex-Im Bank stated that Cayman Islands law gives them confidence that they will have less difficulty reclaiming assets if a party in an Ex-Im-backed transaction defaults. The Cayman Islands may also be a jurisdiction of choice among U.S. persons due to factors related to its location and reputation for prominence as an international financial center. The Cayman Islands is proximate to the United States, operates in the same time zone as New York and the eastern United States, and is English-speaking, all factors that may contribute to U.S. persons’ choices to conduct activity there.
It has a robust financial services sector, which includes several major law firms and other locally based service providers, as well as prominent international accounting and audit firms, fund administrators, and banking institutions. The high volume of existing Cayman-based financial activity may also be responsible for drawing additional business. For instance, relationships between U.S. and Cayman law firms and other service providers may result in referrals of additional business. Finally, U.S. persons may carry out activity in the Cayman Islands because of its reputation as a neutral jurisdiction for structuring deals with foreign partners. Ex-Im Bank officials explained that they frequently created Cayman Islands entities to facilitate the purchase of U.S. aircraft, and these deals often involve foreign entities who may prefer not to carry out business in the United States for tax, regulatory, or political reasons. Additionally, OPIC officials stated that foreign investors in private-equity funds that they are involved with value the Cayman Islands’ reputation for legal neutrality towards investors from different jurisdictions. Some U.S. persons engaging in financial activity in the Cayman Islands are able to legally minimize their U.S. tax obligations. For instance, some U.S. persons can minimize their U.S. tax obligations by using Cayman Islands entities to defer U.S. taxes on foreign income. In general, the United States taxes U.S. persons, including corporations, on their worldwide income, but only taxes foreign corporations on their U.S. income. The United States does not tax U.S. shareholders of corporations, whether foreign or domestic, until the corporation makes a distribution to the shareholder, unless an exception applies, such as when the foreign corporation is a controlled foreign corporation and earns certain types of income. If a U.S. person earns foreign income, he is taxed on that income; however, if a U.S. 
person is a shareholder of a foreign corporation and that corporation earns foreign income, then, in general, the United States will not tax that income until it is distributed to the U.S. shareholder. In this way a U.S. taxpayer may be able to defer taxes on some foreign income. For example, a U.S.-based multinational business with a Cayman Islands subsidiary earning foreign income may be able to defer U.S. taxes on that foreign income. The income deferred is not limited to income earned in the jurisdiction of incorporation but can be any non-U.S. income. If the foreign income had been earned by a U.S. component of the multinational, U.S. taxes would be owed when that income was earned. Instead, by employing a Cayman Islands subsidiary, the multinational owes U.S. taxes only when the subsidiary makes a distribution to the parent. In some instances, U.S.-based parent corporations may be able to defer taxes on foreign-source income from foreign subsidiaries indefinitely by reinvesting that income overseas. Additionally, U.S. parent corporations may further reduce U.S. taxes on foreign income by waiting to bring the income into the United States until a period in which they have domestic losses. Since corporate income tax is based on profits, the parent would only owe tax on repatriated income that exceeded its domestic losses. The Internal Revenue Code has provisions limiting this deferral in certain circumstances. For example, if a foreign corporation qualifies as a controlled foreign corporation, then certain U.S. shareholders will not be able to defer tax on certain types of income, known as Subpart F income, earned by that foreign corporation. In other cases, persons may conduct financial activity in jurisdictions without a corporate income tax, such as the Cayman Islands, to avoid entity-level tax. In general, a foreign corporation’s earnings are taxed where earned, in the entity’s jurisdiction of incorporation, or both, depending on the tax laws of the jurisdiction.
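The deferral mechanics described above can be shown with simple arithmetic. The income amount and the 35 percent rate in this Python sketch are assumptions chosen for illustration; actual treatment depends on Subpart F, foreign tax credits, and other Internal Revenue Code rules:

```python
# Hypothetical illustration of deferring U.S. tax on foreign income
# earned through a Cayman Islands subsidiary. All figures are assumed.

foreign_income = 10_000_000   # foreign-source income for the year
us_rate_percent = 35          # assumed U.S. corporate rate

# Earned directly by a U.S. component: tax is owed in the year earned.
tax_if_earned_directly = foreign_income * us_rate_percent // 100

# Earned by the Cayman Islands subsidiary: no Cayman tax, and no U.S.
# tax in the year earned.
tax_in_year_earned_via_subsidiary = 0

# The same U.S. tax is owed later, but only upon distribution; if the
# earnings are reinvested overseas, the tax can be deferred indefinitely.
tax_on_eventual_distribution = foreign_income * us_rate_percent // 100

print(tax_if_earned_directly)             # 3500000
print(tax_in_year_earned_via_subsidiary)  # 0
```

The benefit of deferral is thus one of timing: the nominal tax is unchanged, but postponing it preserves the time value of the money and allows repatriation to be timed against domestic losses, as described above.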
Since the Cayman Islands has no direct taxes, a corporation organized there will not owe taxes to the Cayman Islands government. For instance, foreign hedge funds sponsored by U.S.-based managers are also generally organized as corporations in tax-neutral jurisdictions like the Cayman Islands to avoid double taxation for foreign investors. Officials we spoke with from the Ex-Im Bank also indicated that one motivation for structuring aircraft-financing leases in the Cayman Islands was the lack of entity-level tax on the entities established to hold the aircraft during the period of the lease. One indication of the extent to which U.S. companies use Cayman Islands entities to defer taxes is their reaction to a recent tax law. In 2004, Congress approved a dividends received deduction for certain earnings of foreign subsidiaries of U.S. companies repatriated during a limited time period. Approximately 5.5 percent of the nearly $362 billion repatriated between 2004 and 2006 was from Cayman Islands controlled foreign corporations. The Cayman Islands ranked eighth among all countries in the amount of repatriated income. Another way U.S. persons may use Cayman Islands entities to reduce U.S. tax obligations is to receive investment income in a form that avoids the unrelated business income tax (UBIT). The investment income of U.S. tax-exempt entities, including pension funds, charitable trusts, foundations, and endowments, can be subject to UBIT if it is earned by a U.S. partnership in which the tax-exempt entity is a partner. Many U.S. investment vehicles, such as hedge funds, are organized as limited partnerships because, unlike U.S. corporations, these entities are not generally separately taxed, and as a result, income is only taxed at the level of individual investors. Tax-exempt entities that invest in hedge funds organized as foreign corporations can be paid in dividends, which are not subject to UBIT.
If an investment fund is incorporated in a jurisdiction without a corporate income tax, such as the Cayman Islands, the fund’s returns will not be subject to corporate income tax. According to the SEC, the growth in hedge funds has been largely driven by increased investment on the part of U.S. tax-exempt entities. Some U.S. persons may also aggressively interpret U.S. tax law. The U.S. Internal Revenue Code is highly complex, and new strategies to reduce U.S. taxes continue to emerge as business environments change and in response to new rules and guidance. As we have reported before, some have postulated that major corporations’ tax returns are actually just the opening bid in an extended negotiation with IRS to determine a corporation’s tax liability. In some cases, new tax-avoidance practices may emerge that involve complex legal issues. For instance, IRS is examining a strategy used by offshore hedge funds to avoid unfavorable tax consequences of owning U.S. stocks directly. Because many hedge funds are organized in tax-free jurisdictions like the Cayman Islands that do not have income-tax treaties with the United States, investors in these funds are generally subject to the full 30 percent withholding rate on certain earnings from U.S. investments such as dividends. However, some hedge funds may have avoided these withholding taxes on dividends by selling their U.S. stocks to a U.S.-based derivatives dealer prior to a dividend payout in exchange for a payment equivalent to the value of the dividend, and then repurchasing the stocks after the payout. Specific tax positions may require complex legal and economic analysis to determine their legality. In particular, transfer pricing by multinational enterprises can pose challenges for IRS and U.S. regulators. IRS officials said that U.S. persons use entities established in many low-tax jurisdictions for transfer-pricing purposes.
They also reported that they have dealt with transfer-pricing issues involving Cayman Islands entities, but that the problem is not worse there than in other jurisdictions. While the Internal Revenue Code and Treasury regulations state that transfer prices between related parties must be consistent with transfer prices that would be charged between unrelated parties, some taxpayers may manipulate these prices to obtain favorable tax outcomes in the related-party context. Additionally, because multinational operations and transactions can be quite complex and pricing methods may be inexact, evaluating the appropriateness of particular transfer prices can be difficult. A recent Treasury report delineates a number of areas in which taxpayers take advantage of ambiguities in rules and legal guidance, aggressively setting transfer prices to move profits offshore and thereby avoid U.S. taxes. In particular, the report found that two types of activities among related parties—cost-sharing arrangements and services transactions—were key sources of transfer-pricing abuse. Further, while Treasury urges caution in interpreting specific aspects of its findings, a recent working paper by Treasury’s Office of Tax Analysis finds that data are consistent with, although not proof of, the existence of potential income shifting from inappropriate transfer pricing. As with other foreign jurisdictions and OFCs, some persons have conducted financial activity in the Cayman Islands in an attempt to avoid discovery and prosecution of illegal activity by the United States. As discussed later in this report, in 45 instances over the past 5 years IRS field agents have requested information from the IRS official responsible for the Caribbean about potential criminal activity on the part of U.S. persons in the Cayman Islands. Additionally, as we further explore later in this report, our review of 21 criminal and civil cases, including those referred to us by DOJ, SEC, and IRS, shows that U.S.
persons have been involved in civil lawsuits and come under criminal investigation for suspected offenses including tax evasion, money laundering, and securities fraud. The full extent of illegal offshore financial activity is unknown, but risk factors include limited transparency related to foreign transactions, and difficulties faced by the U.S. in successfully prosecuting foreign criminal activity. Still, as we state later in this report, IRS officials said that criminal activity was comparatively lower in the Cayman Islands than in some other offshore jurisdictions. Although not unique to the Cayman Islands, limited transparency regarding U.S. persons’ financial activities in foreign jurisdictions contributes to the risk that some persons may use offshore entities to hide illegal activity from U.S. regulators and enforcement officials. Voluntary compliance with U.S. tax obligations is substantially lower when income is not subject to withholding or third-party-reporting requirements. Because U.S.-related financial activity carried out in foreign jurisdictions is not subject to these requirements in many cases, persons who intend to evade U.S. taxes are better able to avoid detection. As an example, foreign corporations established in the Cayman Islands and elsewhere with no trade or business in the United States are not generally required to report dividend payments to shareholders, even if those payments go to U.S. taxpayers. Therefore, a U.S. shareholder could fail to report the dividend payment with little chance of detection by IRS. Persons intent on illegally evading U.S. taxes may be more likely to carry out financial activity in jurisdictions with no direct taxes, such as the Cayman Islands, because income associated with that activity will not be taxed within those jurisdictions. Some U.S. persons have also taken steps to complicate efforts to identify U.S. involvement in illegal activity by structuring their activities in offshore jurisdictions. 
As with other OFCs, some U.S. persons may create complex networks of domestic and offshore entities in order to obscure their role in illegal schemes. For instance, the defendants in United States v. Taylor and United States v. Petersen pled guilty in U.S. District Court to crimes related to an illegal tax evasion scheme involving offshore entities, including Cayman Islands entities. As part of the scheme, the defendants participated in establishing a “web” of both domestic and offshore entities that were used to conceal the beneficial owners of assets and to conduct fictitious business activity that created false tax losses, and thus false tax deductions, for clients. Additionally, because offshore entities such as SPVs can be used to achieve a wide array of purposes, they can be abused even when the entities, the parties involved, and the stated business purposes pass scrutiny at the time of establishment. For instance, Enron, a global energy company, had 441 entities in the Cayman Islands in the year that it filed for bankruptcy. Maples and Calder partners said they created entities for Enron at the instruction of major U.S. law firms. The partners noted that Enron’s legitimate business activity often involved holding assets in offshore subsidiaries, including many in the Cayman Islands. However, Enron did use structured-finance transactions to create misleading accounting and tax outcomes and deceive investors. Maples and Calder partners said they conducted due diligence on investment-fund managers and persons establishing structured-finance entities in accordance with AML/KYC standards, and that they had filed a SAR with regard to suspected illegal activity by Enron. Maples and Calder partners also said that the accounting fraud perpetrated by Enron was not intrinsically offshore in nature; rather, it was committed from within the United States, and that no suggestion of violation of either Cayman Islands law or U.S.
law was ever raised with respect to Maples and Calder. The difficulty that U.S. regulators and law-enforcement officials face in investigating and litigating cases may also influence U.S. persons’ choice to conduct illegal activity in offshore jurisdictions. As we have reported, obtaining information on U.S. persons’ financial activities abroad can be time-intensive for IRS, due to issues including difficulty accessing beneficial-ownership information. Additionally, offshore-related cases may be time-consuming to litigate. For example, Treasury reports that IRS spends substantial resources to litigate cases involving transfer-pricing abuse by taxpayers. IRS confirms that transfer-pricing cases involve entities established in the Cayman Islands and elsewhere. Transfer-pricing cases can be very time-intensive to litigate because of the highly specialized issues involved, and the results may provide limited guidance for subsequent litigation of transfer-pricing issues due to the unique sets of facts and circumstances involved in each case. Individual U.S. taxpayers and corporations generally are required to self-report their taxable income to IRS. Similarly, publicly owned corporations traded on U.S. markets are required to file annual or quarterly statements with SEC. When an individual or corporation conducts business in the Cayman Islands, there is often no third-party reporting of transactions, so the accuracy of the disclosures to U.S. regulators is dependent on the accuracy and completeness of the self-disclosure. When the U.S. government needs to obtain information from the Cayman Islands, formal information-sharing agreements, in the form of a TIEA or an MLAT, are in place to facilitate the exchange of information. In addition, both the U.S. and Cayman Islands governments share information through their respective financial intelligence units. There are also channels for various agencies of each government to share intelligence.
IRS and SEC collect self-reported information from individuals and corporations with activity in the Cayman Islands. IRS collects information on the number of controlled foreign corporations, as well as the number of foreign trusts and certain bank accounts owned by U.S. taxpayers overseas, while SEC collects information on publicly owned companies with operations in foreign countries. For example, for tax year 2004, approximately 1,402 foreign corporations in the Cayman Islands were controlled by a U.S. corporate taxpayer, according to IRS data. Those controlled foreign corporations in the Cayman Islands accounted for more than $23 million in average total income, placing them ninth among all jurisdictions in average total income among U.S.-controlled foreign corporations reporting to IRS. Net income earned from controlled foreign corporations in the Cayman Islands ranked thirteenth among all jurisdictions in terms of all foreign corporations controlled by a large corporate U.S. taxpayer. In 2002, the most recent year for which IRS had data, 193 returns were filed by taxpayers indicating that they controlled a trust in the Cayman Islands. This number accounted for over 7 percent of all controlled foreign trusts in 2002. In terms of total income, U.S. tax returns indicating that the taxpayer controlled a foreign trust in 2002 reported about $472 million in income, and foreign trusts in the Cayman Islands accounted for nearly 28 percent of that total, or about $132 million. Any U.S. person with signature authority over or a financial interest in an overseas account whose value exceeds $10,000 at any time during a year is required to file a report called a Report of Foreign Bank and Financial Accounts (FBAR) disclosing this information to the Department of the Treasury. Failure to file this information can lead to civil penalties, criminal penalties, or both.
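The trust-income share and the FBAR filing trend cited in this section can be reproduced with simple arithmetic. The following Python sketch is illustrative only: all figures come from the report, while the variable names and the growth-rate computation are ours.

```python
# Illustrative consistency check of figures cited in this section.
# Dollar amounts and filing counts are from the report; the variable
# names and the computation itself are ours, added only to show the math.

total_trust_income = 472_000_000   # income reported in 2002 by U.S. returns with foreign trusts
cayman_trust_income = 132_000_000  # portion attributable to Cayman Islands trusts

cayman_share = cayman_trust_income / total_trust_income
print(f"Cayman share of foreign-trust income: {cayman_share:.1%}")  # close to the "nearly 28 percent" cited

fbar_2002, fbar_2007 = 2_677, 7_937  # Cayman Islands FBAR filings reported for 2002 and 2007
annual_growth = (fbar_2007 / fbar_2002) ** (1 / 5) - 1
print(f"Implied average annual FBAR filing growth, 2002-2007: {annual_growth:.0%}")
```

The implied annual growth rate is our derivation from the two endpoint counts; the report itself presents only the raw filing totals.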
The number of FBAR filings for bank accounts in the Cayman Islands has increased steadily, rising from 2,677 in 2002 to 7,937 in 2007 (see fig. 7). In November 2007, 732 companies traded on U.S. stock exchanges reported to SEC that they were incorporated in the Cayman Islands. Of these, 309 reported their Cayman Islands address on their filing. As part of their annual SEC filings, companies must also disclose the existence of any significant subsidiaries, either offshore or domestic. As of November 2007, 378 U.S. public companies reported having at least one significant subsidiary in the Cayman Islands. Because financial entities in the Cayman Islands are subject to only limited third-party reporting requirements, the accuracy and completeness of the information depend on the taxpayer. For many taxpayers with domestic transactions and accounts, IRS is able to match expense and income information provided by a third party to the taxpayer’s return. This approach has been shown to increase U.S. taxpayer compliance. However, Cayman Islands financial institutions are often not required to file reports with IRS concerning U.S. taxpayers. This increases the likelihood of inaccurate reporting by U.S. taxpayers on their annual tax returns and SEC-required filings. The likely low level of compliance with these requirements is an example of the general problem with the completeness and accuracy of self-reported information. In addition to the information that both IRS and SEC receive from filers of annual or quarterly reports, the U.S. government also has formal information-sharing mechanisms through which it can receive information from foreign governments and financial institutions. In November 2001, as a result of negotiations between U.S.
and Cayman Islands officials, the United States signed a TIEA with the government of the United Kingdom and the government of the Cayman Islands with regard to the Cayman Islands. The TIEA provides a process for IRS to request information related to specific identified taxpayers, their specific transactions, companies, and named associates, in respect of both criminal and civil matters, including at the investigative stages. IRS sends TIEA requests to the Cayman Islands based on internal requests from the Criminal Investigations division, in cases where a taxpayer is under active criminal investigation, or from a revenue agent conducting an examination of a taxpayer. In addition to the TIEA, which is the newest international cooperation channel between the United States and the Cayman Islands, the two governments also entered into an MLAT in 1986, which entered into force under U.S. law in 1990. The MLAT enables activities such as searches and seizures, immobilization of assets, forfeiture and restitution, transfer of accused persons, and general criminal information exchange, including in relation to specified tax matters. Extradition from the Cayman Islands to the United States is enabled under the United Kingdom’s United States of America Extradition Order of 1976 (as amended in 1986). The TIEA is now the dedicated channel for tax information, while the MLAT remains the channel for the exchange of information with regard to nontax criminal violations. According to a Cayman Islands government official, neither the TIEA nor the MLAT allows for “fishing expeditions.” Rather, as is standard with arrangements providing for exchange of information on request, requests must involve a particular target. For example, IRS cannot send a request for information on all corporations established in the Cayman Islands over the past year.
The request must be specific enough to identify the taxpayer and the tax purpose for which the information is sought, as well as state the reasonable grounds for believing the information is in the territory of the other party. Since the TIEA began to go into effect in 2004, IRS has made a small number of requests for information to the Cayman Islands. An IRS official told us that those requests have been for either bank records of taxpayers or ownership records of corporations. The IRS official also told us that the Cayman Islands government has provided the requested information in a timely manner for all TIEA requests. Since the MLAT went into effect and through the end of 2007, the Department of Justice told us, the U.S. government has made over 200 requests to the Cayman Islands for information regarding criminal cases. A Cayman Islands government official told us that assistance was provided by the Cayman Islands in response to these requests in all but rare instances, and that when a request was refused it was because it did not comply with the specific articles of the treaty. The U.S. government’s financial intelligence unit, FinCEN, works to gather information about suspected financial crimes both offshore and in the United States. As part of the Department of the Treasury, FinCEN is authorized, under the Bank Secrecy Act, to require certain records or reports from financial institutions. Thousands of financial institutions are subject to Bank Secrecy Act reporting and recordkeeping requirements. As part of its research and analysis, FinCEN can make requests of its counterpart in the Cayman Islands, CAYFIN. CAYFIN can and does make requests to FinCEN as well. When FinCEN receives SARs that involve connections to activity in a foreign jurisdiction—such as the Cayman Islands—the agency can investigate by requesting additional information from that jurisdiction’s financial intelligence unit.
Cayman Islands law requires SARs from any person who comes across suspicious activity in the course of their trade, employment, business, or profession, not just from financial institutions. SARs generate leads that law enforcement agencies use to initiate investigations of money laundering and other financial crimes. Similarly, when FinCEN receives reports from institutions within the United States that involve foreign persons, it can disclose the information to that country’s financial intelligence unit. Certain U.S. law enforcement and regulatory agencies also have the ability to review SARs generated in the United States. If these agencies proceed with further investigation and require additional specific information from the foreign jurisdiction involved, the SAR-generated information can be used to support an MLAT or TIEA request. FinCEN and CAYFIN routinely share suspicious activity information. In fiscal year 2007, FinCEN made 6 suspicious activity information requests to CAYFIN. From July 2006 to June 2007, CAYFIN made 25 suspicious activity information requests to FinCEN to follow up on potential new leads as well as existing Cayman Islands-generated SARs. Over the same period, CAYFIN shared suspicious activity information with FinCEN in 30 instances, and CAYFIN described 27 of these instances as spontaneous, in that CAYFIN disclosed suspicious financial activity with a nexus to the United States without receiving a specific request for information from FinCEN. The remaining three disclosures were responses to requests from FinCEN and were related to active U.S. law enforcement investigations. According to CAYFIN, financial institutions primarily filed SARs on U.S. persons for suspicion of fraud-related offenses. Other offenses leading to the filing of SARs included drug trafficking, money laundering, and securities fraud, which mostly consisted of insider trading.
In addition, according to Cayman Islands officials, statistics regarding SARs filed with CAYFIN show the United States as the most frequent subject country (30 percent of SARs). Beyond the formal information sharing codified in law (TIEA and MLAT requests and SARs), Cayman Islands officials reported sharing information with, and receiving it from, federal agencies, state regulators, and financial institutions:

- According to CIMA, 40 requests for assistance were dealt with between 2003 and early 2008, including requests from SEC, the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and various state insurance and banking regulators.
- CAYFIN reported informally sharing information with IRS criminal investigators on several occasions in cases involving predicate offenses such as drug trafficking or securities fraud.
- CIMA officials reported having traveled to the United States to do due diligence on U.S.-based fund managers/administrators.
- CIMA reported that other nations’ regulators have traveled to the Cayman Islands to conduct onsite inspections of entities for the purposes of consolidated supervision and Anti-Money Laundering/Combating the Financing of Terrorism (AML/CFT) reviews. While SEC has not conducted such inspections/reviews to date, CIMA indicated that it has provided substantial assistance to SEC over the years and recently facilitated SEC’s conduct of interviews in the Cayman Islands relevant to a current SEC investigation.
- The Cayman Islands Registrar of Companies maintains a limited amount of publicly available information—company name, type, status, registration date, and address of the registered office—about all Cayman Islands-registered entities.

CIMA officials stated that they regularly coordinate with U.S.
regulators at the state and federal level, and have several existing agreements that structure the terms of coordination with these agencies. For example, U.S. insurance regulators from Washington State recently negotiated a Memorandum of Understanding (MOU) to share information and coordinate with CIMA regarding cross-border insurance matters. Tax evasion and other illegal activity involving offshore jurisdictions take a variety of forms. Because the activity is offshore, the U.S. government faces additional enforcement challenges. While not unique to the Cayman Islands, “hiding income offshore” is fifth on IRS’s list of 12 most egregious tax schemes and scams for 2008. The IRS list cites several illegal practices, including hiding income in offshore bank and brokerage accounts and foreign trusts, and accessing this income using offshore debit cards, credit cards, and wire transfers. IRS, SEC, and DOJ officials we spoke with described how offshore schemes have been used to facilitate tax evasion, money laundering, and securities violations. To address these issues, IRS’s SBSE, LMSB, and CI Divisions have several initiatives that target abusive offshore transactions, and officials told us that some of the cases they have identified have involved Cayman Islands connections. Still, a lack of jurisdiction-specific data prevents IRS from knowing the full extent of Cayman Islands activity, and the Cayman Islands was reported to be similar to other offshore jurisdictions with regard to the types of activity that occur there. For example, IRS’s SBSE Division investigates leads referred from other IRS areas and also actively develops information sources that may assist in identifying new areas of illegal activity. Several initiatives have emerged from these two areas, including programs focused on offshore credit cards, electronic-payment systems, offshore brokerages, and promoters of offshore shelters.
A program that we have previously reported on is IRS’s offshore Credit Card Summons project. This compliance initiative seeks to identify noncompliant taxpayers with offshore bank accounts, investments, and/or other financial arrangements by “following the money” associated with their credit-card transactions. The program has been in effect since 2000, when a federal judge authorized IRS to issue John Doe summonses to U.S. credit card companies with banks in offshore jurisdictions. IRS officials we spoke with explained that, since its inception, the program has resulted in completed examinations of over 5,800 returns, almost half of IRS’s FBAR violation caseload, and over $150 million in tax, $26 million in interest, and $30 million in penalties. Returns continue to be examined under the program. In addition, officials reported that the program has placed pressure on one credit card company to revoke the ability of an offshore bank in the Bahamas to issue cards, and on the Bahamas government to revoke the bank’s license. IRS officials said that some abusive transactions identified through these initiatives involved Cayman Islands entities or accounts, although the exact extent of this involvement was unclear. IRS officials indicated that jurisdiction-specific statistics were not maintained, and thus comprehensive numbers on Cayman involvement in abusive transactions were unavailable. One official also stated that although illegal transactions had been detected, most of the offshore business activity in the Cayman Islands was probably legitimate. The LMSB executive with whom we spoke noted that there is no jurisdiction-specific initiative involving the Cayman Islands. He also said that the type of activity that occurs in the Cayman Islands is similar to that in other offshore jurisdictions.
Officials from LMSB described several enforcement initiatives that involve the use of offshore entities by U.S.-related companies and investment funds, and reported that Cayman Islands entities have been involved in activities under investigation by LMSB in a number of cases. For instance, LMSB officials described ongoing investigations related to swap transactions to avoid tax on dividend income, as discussed previously in this report. IRS officials said that the rise of the hedge fund industry has required them to devote resources to evaluating the changed business environment and exploring legal issues associated with strategies by industry participants to reduce U.S. tax burdens. According to IRS, it now has a special team exploring the tax implications of specific hedge-fund activities, including this arrangement, known as a total-return swap. LMSB has activity-specific initiatives for several areas that involve offshore activity, including designated groups with expertise in employment-tax enforcement and transfer-pricing schemes, issues discussed previously in this report. LMSB officials stated that transactions associated with these areas can be highly complex and may involve aggressive but legal interpretations of the U.S. Internal Revenue Code. For instance, LMSB officials said that it is legal for a U.S. company to establish an offshore subsidiary to employ U.S. citizens who work abroad, thereby avoiding Social Security taxes on those workers in some circumstances. However, if IRS finds that a domestic corporation is actually the true employer of the overseas workers, it can challenge the legitimacy of the arrangement, leaving the U.S. corporation liable for Social Security taxes. LMSB officials involved in transfer-pricing enforcement described IRS’s activities in this area, and said that IRS has seen transfer-pricing issues related to the Cayman Islands.
They pointed out, though, that Cayman Islands issues were similar to those in any other low-tax jurisdiction. They also described several IRS efforts to counter transfer-pricing abuses, including developing new regulations, publishing industry directives, and providing guidance to field examiners in cases involving transfer-pricing issues. While some offshore activity amounts to aggressive, but legal, interpretation of the Internal Revenue Code, the U.S. government has identified multiple cases involving civil and suspected criminal activity related to the Cayman Islands. Specifically, the IRS Criminal Investigations Attaché who oversees requests related to the Caribbean reported that over the past 5 years field agents had requested information regarding suspected criminal activity by U.S. persons in 45 instances pertaining to taxpayers or subjects in the Cayman Islands. However, the official also stated that the Cayman Islands had fewer criminal violations than some other offshore jurisdictions. Department of Justice officials told us that DOJ has prosecuted cases involving the use of Cayman accounts and entities. We analyzed 21 criminal and civil cases to identify common characteristics of legal violations related to the Cayman Islands. Among these cases, the large majority involved individuals, small businesses, and promoters, rather than large multinational corporations. While the cases were most frequently related to tax evasion, others involved securities fraud or various other types of fraud. In most instances, Cayman Islands bank accounts had been used, and several cases involved Cayman Islands companies or credit-card accounts. The documentation we reviewed for two of the cases, one referred to us by DOJ and one found in our database searches, mentioned a Maples and Calder connection. DOJ referred to us an ongoing tax case concerning a taxpayer’s participation in a number of sale-in, lease-out transactions, some of which involved Ugland House entities.
IRS disallowed the tax benefits of the transactions, and the affected party paid the resulting tax assessment and was suing to recover the amount at the time we did our research. A DOJ official said that it did not appear that Maples and Calder initiated or promoted the transactions. In the case found in our search, a hedge fund was established as an entity with Ugland House as its registered office. The U.S. hedge fund founder and manager admitted fraudulent conduct in the United States in the course of a civil enforcement action brought by the Commodity Futures Trading Commission. The documentation we reviewed contained no allegation that Maples and Calder acted improperly. In neither of these cases did the activity in question occur in the Cayman Islands. A Maples and Calder partner said that the involvement of his law firm in these cases would almost certainly have been limited to establishing the entities in question. SARs also provide useful information about the types of potentially illegal activity U.S. persons conduct in the Cayman Islands. As seen in figure 8, most SARs disclosed by CAYFIN to FinCEN in 2006 and 2007 were related to securities fraud, money laundering, drug trafficking, and other types of fraud. These SARs were all disclosed to the United States at the initiative of CAYFIN. CAYFIN tracks statistics on SARs related to tax issues; however, for the years in question, none were reported related to the United States. Officials from Treasury and SEC reported that the Cayman Islands has been cooperative in sharing information, and SEC reported that several of the SARs shared have led to U.S. investigations. IRS and DOJ officials stated that particular aspects of offshore activity present challenges related to oversight and enforcement.
Specifically, these challenges include lack of jurisdictional authority to pursue information, difficulty in identifying beneficial owners due to the complexity of offshore financial transactions and relationships among entities, the lengthy processes involved in completing offshore examinations, and the inability to seize assets located in foreign jurisdictions. Because of these oversight and enforcement challenges, U.S. persons who intend to conduct illegal activity may be attracted to offshore jurisdictions such as the Cayman Islands. First, jurisdictional limitations make it difficult for IRS to identify potential noncompliance associated with offshore activity. An LMSB Deputy Commissioner said that a primary challenge of U.S. persons’ use of offshore jurisdictions is simply that, when a foreign corporation is encountered or involved, IRS has difficulty pursuing beneficial ownership any further due to a lack of jurisdiction. Specifically, IRS officials told us that IRS does not have jurisdiction over foreign entities without income effectively connected with a trade or business in the United States. Thus, if a noncompliant U.S. person established a foreign entity to carry out non-U.S. business, it would be difficult for IRS to identify that person as the beneficial owner. Additionally, the complexity of offshore financial transactions can complicate IRS investigation and examination efforts. In particular, offshore schemes can involve multiple entities and accounts established in different jurisdictions in an attempt to conceal income and the identity of beneficial owners. For instance, IRS officials described schemes involving “tiered” structures of foreign corporations and domestic and foreign trusts in jurisdictions including the Cayman Islands that allowed individuals to hide taxable income or claim false deductions, as in the cases of United States v. Taylor and United States v. Petersen, discussed previously.
Further, LMSB officials told us they had encountered other instances in which Cayman Islands entities were used in combination with entities in other offshore and/or onshore jurisdictions. One such instance involved an Isle of Man trust used in combination with Cayman bank accounts in order to obscure the beneficial ownership of funds. In another case, a U.S. taxpayer used a Cayman Islands corporation, Cayman Islands bank, U.S. brokerage account, U.S. broker bank, and U.S. bank to transfer funds offshore, control the brokerage account through the Cayman Islands corporation, and ultimately repatriate the funds to his U.S. bank account. One IRS official explained that it can be more useful to “follow the money” rather than follow paper trails when trying to determine ownership and control in such situations. Another challenge facing offshore investigations and prosecutions that we have previously reported on is the amount of time required to complete offshore examinations due to the processes involved in obtaining necessary information. A senior official from DOJ’s Office of International Affairs indicated that the Cayman Islands is the busiest United Kingdom overseas territory with regard to requests for information, but also the most cooperative. She also said that the Cayman Islands is one of DOJ’s “best partners” among offshore jurisdictions. Despite the Cayman Islands government’s cooperativeness, DOJ officials told us that U.S. Attorneys are advised that if any offshore jurisdiction may be involved in a particular case, effort must be made as soon as possible to clarify needed information and initiate requests to obtain that information, in order to have sufficient time to successfully receive and include the information. They said that this is the case even with more cooperative jurisdictions, such as the Cayman Islands, due to the processes involved in making a request. 
According to Cayman Islands officials, they respond to MLAT requests within an average of six to eight weeks, and their response time for TIEA requests may be shorter. Past GAO work has shown that between 2002 and 2005, IRS examinations involving offshore tax evasion took a median of 500 more calendar days to develop and examine than other examinations. IRS officials from LMSB indicated that the specificity of information needed to make requests was also an inherent limitation in investigations of offshore activity. Once noncompliance is determined, one LMSB official said, U.S. authorities cannot seize assets in foreign jurisdictions. Assets can be shared between the U.S. and foreign governments when an agreement exists, though. A DOJ official reported that the Cayman Islands has an agreement to share proceeds of criminal-asset forfeitures with the U.S. government, and has been a very cooperative partner. The Cayman Islands and U.S. governments have shared over $10 million from cases in which the two governments have cooperated, and several million dollars have also been returned to U.S. victims of fraud in other cases and in asset-sharing with the United States since the inception of the MLAT. The Cayman Islands government has taken other steps to address illegal activity by U.S. persons, in addition to supporting and cooperating with U.S. government efforts. For instance, the Cayman Islands has implemented a regulatory regime that IMF has found to be generally in compliance with a wide range of international standards and has been cited by the CFATF as having a strong compliance culture related to combating money laundering and terrorist finance. In addition, CIMA has supervision over various financial institutions in the Cayman Islands, including banks; insurance companies; investment funds; trust companies; and an array of service providers including insurance managers, fund administrators, and corporate-service providers.
CIMA officials said that they do not regulate entities differently on the basis of their residence offshore or onshore. CIMA licenses financial institutions and service providers in the Cayman Islands, and CIMA officials said that they consider several factors in determining whether or not to issue a license, such as fit and proper management, ownership and control, compliance with industry requirements and standards, and consolidated-supervision arrangements. In the case of the licensing of branches or subsidiaries of non-Cayman Islands banks, CIMA officials stated that they look to the foreign bank regulator in the bank’s home jurisdiction to ensure (1) that the foreign regulator permits the Cayman Islands branch or subsidiary; (2) that the Cayman Islands operation will be subject to consolidated supervision by the foreign regulator in cooperation with CIMA as host regulator, in compliance with international standards; and (3) that the bank proposing to open a Cayman Islands operation is in good standing with its home-country regulator. CIMA officials said that the same procedures would be applied to any branches or subsidiaries of foreign trust companies that are subject to regulation in their home jurisdictions. CIMA officials said that they take a risk-based approach to supervision of regulated financial activities, consistent with international standards such as the Basel and International Organization of Securities Commissions (IOSCO) principles. They develop a risk profile for the supervised entity, which then leads to on- and off-site reviews of fund activity. In on-site reviews of fund administrators, CIMA looks at whether the different types of investors are correctly allocated to the intended investment funds, usually using a 10 percent sample. CIMA officials said that some on-site inspections are done outside the Cayman Islands, such as in New York, Jamaica, and the Bahamas.
Off-site reviews of funds include reviewing offering documents, audited financial statements, supervisory returns, and information provided by or available from regulators and other data sources for red flags, such as regulatory breaches, violations of SEC or United Kingdom rules, criminal charges, or any material related to the fund’s appointed service providers. While SEC has not conducted such inspections or reviews to date, CIMA indicated that it has provided substantial assistance to SEC over the years and recently facilitated SEC’s conduct of interviews in the Cayman Islands relevant to a current SEC investigation. In addition, CIMA officials said that captive insurance companies organized in the Cayman Islands must meet certain requirements, such as submitting a sound business plan, revealing beneficial ownership under know-your-customer (KYC) rules, and identifying third-party administrators and actuaries. Applicants first find an insurance manager in the Cayman Islands or establish and staff a principal office in the Cayman Islands. Once the entity is licensed, the manager provides audited annual financial statements (with an interim report if the next annual audit is more than 12 months away) and other supervisory returns. CIMA officials said that they meet with each company and the insurance manager every 18 to 24 months. Finally, CIMA requires audits of its regulated entities to be submitted within a prescribed time frame, and although the Cayman Islands has no direct taxation, CIMA officials said that if an auditor saw a clear criminal violation of another nation’s tax laws, CIMA would expect that to be in the auditor’s report and would take it into account in any invocation of its regulatory powers. 
Further, CIMA officials told us that if at the licensing stage there are any concerns or lack of clarity about the proposed business activity from a tax (or any other) perspective, CIMA would require the applicant to submit a professional legal opinion on the tax aspects of the activity. In addition to administering regulatory safeguards, Cayman Islands government officials from the Financial Secretary’s Office told us that they act to implement regulatory standards and close loopholes when they are identified. For example, they described a previous action by the Cayman Islands government to prohibit the establishment of shell banks. Cayman Islands government officials and Maples and Calder representatives stated that their role in helping the United States ensure compliance with U.S. tax laws is necessarily limited. Government officials stated that while seeking to legally reduce or avoid U.S. taxes would not be a legitimate reason to prohibit the establishment of a company or trust in the Cayman Islands, if it were clear that an entity was being set up as part of a scheme to evade taxes or violate other U.S. laws, that activity would be recognized as illegitimate and would not be allowed. The Financial Secretary and Deputy Secretary stated that, as a matter of both policy and practicality, the Cayman Islands government cannot administer other nations’ tax laws, and they are not aware of any jurisdiction that undertakes such an obligation as a general matter. They told us that until a request is made by the United States for tax-related assistance, the Cayman Islands government is “neutral” and does not act for or against U.S. tax interests. They said that at the point that a request is made, the Cayman Islands can be relied upon to provide appropriate assistance. 
They also said that the Cayman Islands would not be opposed to further agreements with the United States regarding tax information sharing if international norms and standards supported such efforts, but that there would need to be a clear justification for such agreements. Senior partners from Maples and Calder with whom we spoke stated that complying with U.S. tax obligations is the responsibility of the U.S. persons controlling the offshore entity, and that they require all U.S. clients to obtain onshore counsel regarding tax matters before they will act on their behalf. They added that they are not qualified to advise on U.S. tax laws, nor is it their role to enforce them, just as is the case for U.S. lawyers when it comes to the tax laws of other countries. Ugland House provides an instructive case example of the tremendous challenges facing the U.S. tax system in an increasingly global economy. Although the Maples and Calder law firm provides services that even U.S. government-affiliated entities have found useful for international transactions and the Cayman Islands government has taken affirmative steps to meet international standards, the ability of U.S. persons to establish entities with relatively little expense in the Cayman Islands and similar jurisdictions facilitates both legal tax minimization and illegal tax evasion. Despite the Cayman Islands’ adherence to international standards and the international commerce benefits gained through U.S. activities in the Cayman Islands, Cayman entities can be used to obscure legal ownership of assets and associated income and to exploit gray areas of U.S. tax law to minimize U.S. tax obligations. Further, while the Cayman Islands government has cooperated in sharing information through established channels, as long as the U.S. 
government is chiefly reliant on information gained from specific inquiries and self-reporting, the Cayman Islands and other similar jurisdictions will remain attractive locations for persons intent on engaging in illegal activity. Balancing the need to ensure compliance with our tax and other laws while not harming U.S. business interests and also respecting the sovereignty of the Cayman Islands and similar jurisdictions undoubtedly will be a continuing challenge for our nation. We provided a draft of this report to the Commissioner of Internal Revenue, the Secretary of the Treasury, and the Leader of Government Business of the Cayman Islands for review and comment. IRS and the Cayman Islands government provided technical comments, which we incorporated as appropriate. In a letter to GAO, the Cayman Islands Leader of Government Business expressed appreciation for the opportunity to review and comment on the draft report. He said that the report generally presents an accurate description of the Cayman Islands’ legal and regulatory regime and assists in clarifying the nature of activity that takes place in the Cayman Islands. The letter from the Cayman Islands Leader of Government Business can be found in appendix I. We will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report is available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202)512-9110. I can also be reached by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact person named above, David Lewis, Assistant Director, Perry Datwyler, S. 
Mike Davis, Robyn Howard, Brian James, Danielle Novak, Melanie Papasian, Ellen Phelps Ranen, Ellen Rominger, Jeffrey Schmerling, Shellee Soliday, A.J. Stephens, Jessica Thomsen, and Jonda VanPelt made key contributions to this report.

The Cayman Islands is a major offshore financial center and the registered home of thousands of corporations and financial entities. Financial activity in the Cayman Islands is measured in the trillions of dollars annually. One Cayman building—Ugland House—has been the subject of public attention as the listed address of thousands of companies. To help Congress better understand the nature of U.S. persons’ business activities in the Cayman Islands, GAO was asked to study (1) the nature and extent of U.S. persons’ involvement with Ugland House registered entities and the nature of such business; (2) the reasons why U.S. persons conduct business in the Cayman Islands; (3) information available to the U.S. government regarding U.S. persons’ Cayman activities; and (4) the U.S. government’s compliance and enforcement efforts. GAO interviewed U.S. and Cayman government officials and representatives of the law firm housed in Ugland House, and reviewed relevant documents.

The sole occupant of Ugland House is Maples and Calder, a law firm and company-services provider that serves as registered office for the 18,857 entities it had created as of March 2008, on behalf of a largely international clientele. According to Maples partners, about 5 percent of these entities were wholly U.S.-owned and 40 to 50 percent had a U.S. billing address. Ugland House registered entities included investment funds, structured-finance vehicles, and entities associated with other corporate activities. Gaining business advantages, such as facilitating U.S.-foreign transactions or minimizing taxes, is a key reason for U.S. persons’ financial activity in the Cayman Islands. The Cayman Islands’ reputation as a stable, business-friendly environment with a sound legal infrastructure also attracts business. This activity is typically legal, such as when pension funds and other U.S. tax-exempt entities invest in Cayman hedge funds to maximize their return by minimizing U.S. taxes. Nevertheless, some U.S. persons have used Cayman Islands entities, as they have entities in other jurisdictions, to evade income taxes or hide illegal activity. Information about U.S. persons’ Cayman activities comes from self-reporting, international agreements, and other sharing with the Cayman government. The completeness and accuracy of self-reported information is not easily verified. While U.S. officials said the Cayman government has been responsive to information requests, U.S. authorities must provide specific information on an investigation before the Cayman government can respond. The Internal Revenue Service has several initiatives that target offshore tax evasion, including cases involving Cayman entities, but tax evasion and crimes involving offshore entities are difficult to detect and to prosecute. Cayman officials said they fully cooperate with the United States. Maples partners said that ultimate responsibility for compliance with U.S. tax laws lies with U.S. taxpayers. U.S. officials said that cooperation has been good and that compliance problems are not more prevalent there than elsewhere offshore.
State’s Foreign Affairs Manual requires that U.S. government Foreign Service and Civil Service employees under chief-of-mission authority with assignments or short-term temporary duty (TDY) to designated high-threat countries complete Foreign Affairs Counter Threat (FACT) training before deploying. USAID’s Automated Directives System additionally requires FACT training for USAID’s U.S. personal services contractors deploying to these posts. The 1-week course, held at a U.S. location, provides practical, hands-on instruction in topics such as detection of surveillance, familiarization with firearms, awareness of improvised explosive devices, and provision of emergency medical care (see fig. 1 for other examples of topics addressed in FACT training). To ensure that personnel are prepared to confront current risks in high-threat environments, State requires that personnel complete the course every 5 years and updates the FACT training curriculum periodically to reflect changing threats abroad. For example, in 2009, the ambassador to one high-threat country noted that personnel needed to be familiar with the sound of sirens announcing a rocket attack and with the physical features of protective bunkers, in part because personnel were injuring themselves when entering the bunkers. In response, State’s Diplomatic Security Training Center added two bunkers at its training facility and began conducting duck-and-cover exercises with recorded sirens. State also revised the FACT training curriculum in 2013 to include instruction on helicopter operations, vehicle rollover training, and evacuation from a smoke-filled environment. State initially established the FACT training requirement for personnel in one country in 2003 and extended it to eight more countries over the next 9 years. In June 2013, State doubled the number of countries for which it required FACT training. 
Prior to that date, the requirement applied to assigned personnel and those on short-term TDY in a designated high-threat country for 30 or more cumulative days in a 365-day period (with the exception of personnel on short-term TDY to one country, where State required FACT training for personnel with 60 cumulative days or more of TDY status). In December 2012, the Accountability Review Board, which State convened to investigate the attacks on the mission in Benghazi, Libya, recommended that FACT training be required for personnel assigned to all high-threat, high-risk countries. An independent panel established as a result of the Accountability Review Board also identified training as critical to State’s ability to ensure a safe and secure environment for employees. In response to the Accountability Review Board’s recommendation, in June 2013, State issued a memorandum notifying U.S. agencies that it was increasing the number of countries for which it requires FACT training from 9 to 18. State also required that employed eligible family members of personnel in all designated high-threat countries complete FACT training and changed the requirement for short-term TDY personnel. Under the new requirement, short-term TDY personnel must take FACT training if they spend more than 45 cumulative days in a calendar year in one or more of the designated countries (13 FAM 322(b)). In extraordinary circumstances, State may grant a waiver of the FACT training requirement to individuals on a case-by-case basis. Using data from multiple sources related to State and USAID assigned personnel, we determined that 675 of 708 State personnel and all of the 143 USAID personnel on assignments to the designated high-threat countries on March 31, 2013, were in compliance with the FACT training requirement. We found 33 State assigned personnel who were not in compliance with the mandatory training requirement. 
We were unable to assess compliance among short-term TDY personnel because of gaps in State’s data. First, State has not established a mechanism to identify the universe of short-term TDY personnel who are required to take FACT training. Second, State’s eCountry Clearance (eCC) system—the most comprehensive data source for identifying short-term TDY personnel granted country clearance to high-threat posts—has limitations. According to GAO’s Standards for Internal Control in the Federal Government, program managers need operating information to determine whether they are meeting compliance requirements. Based on our review of available data, we found that 675 of 708 State personnel and all of the 143 USAID personnel in the designated high-threat countries on March 31, 2013, complied with the FACT training requirement. We identified 33 noncompliant State personnel. According to State officials, of the 22 noncompliant individuals in one country, 18 were State personnel’s employed eligible family members who were required to take the training; State officials explained that these individuals were not aware of the requirement at the time. The officials noted that enrollment of family members in the course is given lower priority than enrollment of direct-hire U.S. government employees but that space is typically available. In addition, a senior official at the embassy in another country as of March 31, 2013, did not complete FACT training before or during his tenure. According to State officials, this resulted from a pressing situation in the country; however, this individual did not receive a waiver from the FACT training requirement. State provided a variety of reasons why the remaining 10 personnel were not in compliance with the requirement (see app. III for more information). Because State does not maintain a single source of data on assigned U.S. 
personnel who are required to complete FACT training, State or an independent reviewer has to obtain and reconcile information from three State databases—State’s Global Employment Management System (GEMS), Student Training Management System, and Post Personnel—as well as other sources to assess the extent of compliance among all U.S. agencies with the FACT training requirement. In addition, limitations in these various State data systems containing information needed to assess compliance make it difficult to readily identify personnel subject to the FACT training requirement. As a result, the data are not readily available for decision making and require resource- and labor-intensive efforts to determine compliance. According to internal control standards, program managers need operating information to determine whether they are meeting compliance requirements. The following is a list of the current data systems and the limitations associated with each. Global Employment Management System. GEMS—the centralized personnel database for State’s U.S. direct-hire personnel and employed eligible family members—identifies direct-hire personnel on assignments, including assignments to the designated high-threat countries. However, GEMS does not consistently identify State personnel who are required to complete FACT training. In some cases, the GEMS data that we reviewed included dates for employees’ arrival at post; in other cases, it included employees’ hire dates. As a result, to determine whether these personnel had completed FACT training within 5 years of arrival at the posts, additional employee profile information must be obtained from State’s Bureau of Human Resources. Moreover, GEMS data do not contain a field identifying whether certain personnel are subject to the FACT training requirement based on their employment type or position. For example, the GEMS data do not readily identify eligible family members, some of whom were required to complete FACT training. 
As a result of this limitation, additional evidence is required to verify whether they were subject to the FACT training requirement. Student Training Management System. State’s Student Training Management System is the official system of record for State, USAID, and other agency personnel’s FACT training completion dates. Although the Student Training Management System shows personnel who enrolled in FACT training, it does not consistently include training completion dates. For example, the data that we reviewed lacked FACT training completion dates for 9 of 143 assigned USAID personnel. State officials could not provide a definitive explanation for the absence of these completion dates in the training management system. For these nine records, State and USAID officials had to provide alternate forms of evidence, such as Foreign Service Institute training transcripts or FACT training rosters, to show that personnel had completed the training. Post Personnel. State’s Post Personnel system—a database managed by the overseas posts—contains information on all U.S. executive branch personnel assigned under chief-of-mission authority. Therefore, we reviewed data from Post Personnel to identify some USAID personnel who were required to complete FACT training, because USAID officials told us that data maintained in USAID’s internal staffing report for March 31, 2013, might not include all personal services contractors at the designated high-threat countries. We attempted to compare Post Personnel records for USAID personnel with FACT training completion records in State’s Student Training Management System to examine compliance among USAID personnel. However, we found that the Post Personnel data excluded 18 USAID assigned personnel who, according to a USAID staffing report, were assigned to countries for which State required FACT training on March 31, 2013. According to State officials, Post Personnel is not an official or reliable system of record for information. 
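As a rough illustration of the reconciliation effort described above, the sketch below joins an assignment extract against training-completion records to flag personnel whose FACT training is missing or was completed more than 5 years before arrival at post. All field names and records here are hypothetical; the real GEMS and Student Training Management System exports use different schemas and must be obtained and cleaned separately.

```python
from datetime import date, timedelta

# Hypothetical records -- NOT the actual schemas of State's systems.
assignments = [  # GEMS-style assignment extract
    {"emp_id": "A1", "arrived_at_post": date(2012, 7, 1)},
    {"emp_id": "A2", "arrived_at_post": date(2013, 1, 15)},
]
fact_completions = {  # training-system-style completion records
    "A1": date(2006, 3, 1),  # completed more than 5 years before arrival
    # "A2" has no completion record at all
}

FIVE_YEARS = timedelta(days=5 * 365)

def noncompliant(assignments, completions):
    """Flag personnel with no FACT completion record, or whose most
    recent completion was more than 5 years before arrival at post."""
    flagged = []
    for a in assignments:
        completed = completions.get(a["emp_id"])
        if completed is None or a["arrived_at_post"] - completed > FIVE_YEARS:
            flagged.append(a["emp_id"])
    return flagged

print(noncompliant(assignments, fact_completions))  # flags both A1 and A2
```

The point of the sketch is that the compliance question is answerable only after records from separate systems are matched on a common identifier; any record missing from either side (as with GEMS arrival dates or absent completion dates) forces a fallback to alternate evidence.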
State is in the process of developing a new, centralized system to replace Post Personnel—the Overseas Personnel System—that will track the location of all U.S. employees stationed at overseas posts and is expected to become operational in February 2014, according to State Bureau of Human Resources officials. However, according to agency officials, the new system will not integrate data from State’s Student Training Management System or other personnel data systems. Gaps in State’s data on U.S. short-term TDY personnel at high-threat countries make it impossible for State or an independent reviewer to readily assess compliance with the FACT training requirement among State and USAID short-term TDY personnel who are required to complete FACT training. First, State does not systematically maintain data on the universe of U.S. personnel on short-term TDY status in designated high-threat countries who are required to complete the training. Second, neither the eCC system—the primary mechanism that State and USAID short-term TDY personnel generally use to request country clearance—nor any other State system is set up to permit those responsible for granting country clearance to determine whether the personnel requesting clearance have reached the cumulative 45-day threshold in one designated country or across multiple high-threat countries. Determining whether U.S. personnel have reached, or will reach, the cumulative-days threshold is a key factor in identifying short-term TDY personnel who should have completed FACT training before receiving country clearance. We attempted to use data from the eCC system and State’s Student Training Management System to identify the universe and determine compliance with the FACT training requirement for short-term TDY personnel. However, we were unable to do so because we could not obtain a reliable or comprehensive universe of short-term TDY personnel in designated high-threat countries. 
Although we attempted to overcome these data limitations by examining other State records, we ultimately concluded that the available data and records on State and USAID short-term TDY personnel were not sufficiently reliable to determine the extent of compliance for these personnel. This lack of reliable data is inconsistent with internal control standards, which state that agencies need operating information to determine whether they are meeting their compliance requirements. Without such data, agencies cannot determine the extent to which personnel comply with the mandatory training requirement. Several weaknesses in State oversight of personnel’s compliance with the FACT training requirement limit State’s ability to help ensure that all personnel subject to the requirement are prepared for service in high-threat countries. First, State’s policy manual is outdated, and certain guidance related to FACT training is inconsistent and unclear. Second, State and USAID management personnel responsible for assigning civilian personnel and granting them country clearance do not consistently verify FACT training completion before the personnel deploy to high-threat countries. Third, State management does not monitor or evaluate overall levels of compliance with the training requirement. Effective management of an organization’s workforce is essential to achieving results and an important part of internal control. Only when the right personnel for the job are on board and are provided the right support, including training, is operational success possible. State has not updated the Foreign Affairs Manual to reflect changes made to the FACT training requirement in June 2013. In addition, State’s eCC system provides inconsistent and unclear instructions to employees regarding the FACT training-related information they must include on eCC request forms. Moreover, we found that State’s guidance regarding the required frequency of FACT training is unclear. 
According to Standards for Internal Control in the Federal Government, information should be recorded and communicated to management and others in the agency in a form and within a time frame that enables them to carry out their responsibilities. Without up-to-date, consistent, and clear guidance, agencies and personnel may not have the information needed to ensure compliance with the FACT training requirement. As of January 2014, State had not updated its Foreign Affairs Manual to reflect its policy change, announced in June 2013, that (1) increased the number of designated high-threat countries requiring FACT training from 9 to 18, (2) changed the cumulative-days threshold for the FACT training requirement from 30 or more cumulative days in one high-threat country to more than 45 cumulative days in one or more of the countries, and (3) described the conditions that indicate whether eligible family members must complete the course. Although State informed other federal agencies and personnel under chief-of-mission authority about the changes to the FACT training requirement in a June 2013 memorandum and a July 2013 cable, respectively, we found that USAID was not implementing changes to the requirement as of January 2014. USAID officials told us that they had not implemented the revised requirement in part because they were made aware that State’s Bureau of Diplomatic Security did not have the capacity to train staff assigned to the newly designated high-threat countries; therefore, they believed the changes had not taken effect. 
Updating the Foreign Affairs Manual is important given that USAID revised its Automated Directives System guidance in June 2013 to provide a specific cross-reference to the FACT training requirement in the Foreign Affairs Manual. USAID did this to help ensure that its staff are aware of the designated high-threat locations for which FACT training is required, in response to a GAO letter sent to USAID in April 2013 highlighting gaps in its Automated Directives System guidance. Instructions in the eCC system—as previously noted, the primary mechanism generally used to request and grant country clearance to short-term TDY personnel—included the FACT training requirement for eight of the nine designated high-threat countries that we reviewed (see fig. 2). However, instructions contained in the eCC system provide inconsistent or unclear guidance to U.S. personnel. This occurs because State headquarters has not provided instructions to posts in designated high-threat countries regarding the FACT training documentation that personnel should include in the clearance requests that they submit through the eCC system. Rather, State has granted each overseas post responsibility for developing its country requirements information in the system. According to State officials, each post is responsible for maintaining and updating its own requirements. Thus, some inconsistencies have occurred. Figure 3 illustrates the inconsistency in eCC system instructions for designated high-threat countries. Examples of inconsistency and lack of clarity in eCC country instructions that we reviewed include the following: The eCC instructions for four countries required employees to record their FACT training completion dates on the eCC request or provide evidence of FACT training before they arrive at post. The eCC instructions for the other four relevant high-threat countries did not require employees to provide any documentation of FACT training before they arrive at post (see fig. 3). 
The instructions for three countries required employees to specify the dates of training; instructions for the fourth country required employees to provide evidence of training completion but did not specify what evidence should be provided—such as a completion date or training completion certificate—or how or to whom the evidence should be provided. The eCC instructions for only two countries required officials from the sponsoring agency or office to ensure that employees completed FACT training before arriving at post. The eCC instructions for one of these countries required the employee’s sponsoring agency to provide written verification that the employee had completed FACT training; the instructions for the other country required the employee’s sponsoring agency to ensure employees had completed FACT training before traveling to post. We also found other State guidance regarding the FACT training requirement to be inconsistent and unclear. Specifically, State officials have provided inconsistent guidance regarding the frequency with which personnel must repeat FACT training. According to State’s Bureau of Diplomatic Security, as of November 2013, FACT training must be current through the end of an assignment to a designated high-threat country; that is, the FACT training certificate must remain valid for the duration of the personnel’s entire tour of duty. In contrast, according to State Orientation and In-Processing Center (OIP) officials and instructions, all required training must be completed within 5 years of arrival at the post, and the period since the last training may exceed 5 years during the assignment. In October 2013, State officials said that they were in the process of clearing a “Frequently Asked Questions” cable intended to resolve the inconsistency in the guidance, but the officials did not provide a date when the cable would be issued. 
Although State and USAID have processes in place to notify assigned and short-term TDY personnel of the FACT training requirement and to enroll them in training, neither agency consistently verifies completion of the training before its respective personnel deploy to high-threat countries. While State’s Foreign Affairs Manual notes that it is each employee’s responsibility to ensure his or her compliance with the FACT training requirement, the manual also states that agency management is responsible for ensuring adequate controls over all department operations. Without such controls for ensuring FACT training compliance, State increases the risk that personnel may deploy to the designated high-threat countries without completing the mandatory training. State OIP verifies that all State and USAID assigned personnel departing for five of the designated high-threat countries have complied with the FACT training requirement, and USAID verifies compliance with the requirement for assigned USAID personnel departing for two additional countries. State does not verify compliance for personnel assigned to the other designated high-threat countries (see fig. 4). Verification of FACT training compliance varies across countries. U.S. personnel with assignments to five countries submit eCC travel requests that are routed through OIP to the posts. According to OIP officials, OIP forwards the eCC request form to the posts only after verifying, using State’s Student Training Management System, that personnel have completed FACT training within 5 years of the assignment start date. USAID generally verifies whether personnel on assignments to two additional countries have completed FACT training before arrival by requesting their FACT training completion certificate. State and USAID generally do not verify short-term TDY personnel’s completion of FACT training before deployment. 
We found two instances—USAID in two countries—where agency officials told us that they verified FACT training compliance before short-term TDY personnel arrived at post. In contrast, State officials told us that they do not verify FACT training completion for either of these countries. Furthermore, State and USAID officials told us that they do not obtain evidence of FACT training completion for short-term TDY personnel for six additional countries. For the remaining country, although the eCC instructions call for personnel to provide evidence of FACT training, post officials told us that they do not review the evidence provided. Moreover, according to a State official responsible for managing the eCC system, the department has issued no documented protocol or standard operating procedure to posts regarding reviewing and approving eCC requests to ensure compliance with the FACT training requirement. Although State OIP helps to ensure that eCC requests submitted by short-term TDY personnel traveling to two countries include FACT training completion dates, the office does not verify that the dates are accurate. In addition, OIP does not review eCC requests or verify FACT training compliance for short-term TDY personnel traveling to the three other high-risk countries for which OIP supports short-term TDY travel. According to OIP officials, OIP tested the accuracy of a random sample of 34 employee eCC records over a 2-week period and concluded, based on the test’s results, that it is unnecessary to verify eCC data on FACT training completion. We believe that OIP’s methodology for conducting this test is not sufficient or reliable for determining that no further verification is necessary. While OIP’s sample provided anecdotal insights about employees’ recorded training dates, the sample results were not generalizable to the entire population. Thus, conclusions about the population’s accuracy based on OIP’s sample are not appropriate. 
To verify short-term TDY personnel's compliance with the FACT training requirement, agencies would need to determine how many days within a calendar year personnel have traveled to designated high-threat countries. In June 2013, the department changed the cumulative-days requirement to 45 cumulative days or more in a calendar year in any of those countries. However, State's data systems do not enable personnel to view an employee's history of travel to more than one of the designated countries. For short-term TDY personnel traveling to two specific countries, State officials noted that OIP officials use the eCC system to determine cumulative numbers of days traveled within each country; however, they cannot determine personnel's cumulative numbers of days traveled across countries. Furthermore, in-country officials who approve country clearance requests can generally view employees' travel histories only for that particular country and generally do not have eCC system authority to look at employee travel histories for other countries. According to State officials, the department has no plans to ensure short-term TDY personnel's compliance with the 45 cumulative-day requirement. The officials said that (1) the process to view personnel's travel records is labor intensive and (2) they would not want to burden post officials by asking them to track personnel's cumulative travel days. We realize that this would be a difficult task, but one of the first steps to ensuring compliance with the requirement is to collect the necessary data. State, which is primarily responsible for the safety and security of U.S. government personnel on official duty abroad, has not monitored or evaluated the overall extent of compliance with the FACT training requirement among assigned and short-term TDY personnel.
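In principle, the cumulative-day determination described above could be computed from consolidated travel records. The sketch below is a hedged illustration under assumed data (the record layout, employee IDs, and country names are hypothetical, not State's eCC schema), and it simplifies by counting only trips that begin in the target year:

```python
from collections import defaultdict
from datetime import date

# Hypothetical trip records: (employee_id, country, start, end), dates inclusive.
trips = [
    ("E1", "Country A", date(2013, 1, 10), date(2013, 1, 29)),  # 20 days
    ("E1", "Country B", date(2013, 4, 1),  date(2013, 4, 30)),  # 30 days
    ("E2", "Country A", date(2013, 2, 1),  date(2013, 2, 10)),  # 10 days
]

THRESHOLD = 45  # cumulative days in a calendar year that trigger the requirement

def cumulative_days(trips, year):
    """Sum each employee's TDY days across all designated countries
    for the given calendar year (trips starting in that year only)."""
    totals = defaultdict(int)
    for emp, _country, start, end in trips:
        if start.year == year:
            totals[emp] += (end - start).days + 1
    return totals

totals = cumulative_days(trips, 2013)
needs_training = [emp for emp, days in totals.items() if days >= THRESHOLD]
print(dict(totals))    # {'E1': 50, 'E2': 10}
print(needs_training)  # ['E1']
```

The point of the sketch is that the check is trivial once travel records are aggregated per person across countries; the report notes that State's systems do not currently support that aggregation.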
According to State officials, an agency working group began exploring options in November 2013 for a process to validate and track FACT training, which is intended to ensure an additional measure of compliance with the requirement. While State's Foreign Affairs Manual notes that it is each employee's responsibility to fulfill the FACT training requirement, the manual also notes that agency management is responsible for ensuring adequate controls over agency operations and that all managers are responsible for maintaining and monitoring systems of management control. In addition, according to Standards for Internal Control in the Federal Government, agency management should assess the quality of performance over time and, based on these reviews, should ensure that any findings from audits and other reviews are promptly resolved. The lack of monitoring prevents State officials from identifying systematic deficiencies in their efforts to ensure FACT training compliance. The presence of U.S. personnel at overseas posts is critical to promoting U.S. interests and assisting foreign partners. This includes the deployment of personnel to high-threat countries where they may be targeted by Al Qaeda, its affiliates, and other violent extremist organizations. State has identified the need, and established the requirement, for employees to complete a mandatory security course (FACT) to better prepare employees for work in high-threat environments. The lack of security awareness training by one individual could jeopardize not only his or her own safety but also the safety of others serving in an overseas high-threat country. While State's Foreign Affairs Manual notes that it is the employee's responsibility to ensure compliance with the FACT training requirement, it also notes that agency management is responsible for ensuring adequate controls over all agency operations. Accordingly, it is essential that State provide U.S.
personnel with access to the most up-to-date information on the requirements for training, establish mechanisms to identify the individuals who are required to complete FACT training, and ensure that they are in compliance with the requirements. State has not yet updated the Foreign Affairs Manual to reflect changes related to the FACT training requirement from June 2013, provided consistent and clear policy guidance, or monitored compliance with the requirement. Undertaking these steps is essential to ensuring that all U.S. personnel deploying to designated high-threat countries are adequately prepared for challenges associated with working in a high-threat security environment.

We are making 10 recommendations to the Secretary of State and one recommendation to the USAID Administrator. To ensure that State's policy guidance reflects the June 2013 mandatory FACT training requirements and provides clear information to U.S. agencies on which personnel are required to take FACT training, we recommend that the Secretary of State update the Foreign Affairs Manual to

- reflect the nine additional countries that were added in June 2013;
- reflect the requirement for all eligible family members assigned or on short-term TDY to the designated high-threat countries to complete FACT training before deployment;
- indicate that short-term TDY personnel who spend more than 45 cumulative days in a calendar year at one or more of the designated posts are required to complete FACT training; and
- clarify whether FACT training completion must be valid during an employee's entire assignment or short-term TDY visit.

To strengthen State's ability to ensure that U.S. civilian personnel are in compliance with the FACT training requirement, we recommend that the Secretary of State

- identify a mechanism to readily determine the universe of assigned U.S. civilian personnel under chief-of-mission authority who are required to complete FACT training;
- identify a mechanism to readily determine the universe of short-term TDY U.S. civilian personnel who are required to complete FACT training—specifically, required personnel who have spent 45 days or more in the designated high-threat countries in a calendar year;
- ensure that eCC instructions regarding the documentation of the FACT training requirement for short-term TDY personnel are consistent for all designated high-threat countries;
- take steps to ensure that management personnel responsible for assigning personnel to designated high-threat countries consistently verify that all assigned U.S. civilian personnel under chief-of-mission authority who are required to complete FACT training have completed it before arrival in the designated high-threat countries;
- take steps to ensure that management personnel responsible for granting country clearance consistently verify that all short-term TDY U.S. civilian personnel under chief-of-mission authority who are required to complete FACT training have completed it before arrival in the designated high-threat countries; and
- monitor or evaluate overall levels of compliance with the FACT training requirement among U.S. civilian personnel under chief-of-mission authority who are subject to the requirement.

We recommend that the USAID Administrator take steps to ensure that all USAID short-term TDY personnel who are required to take FACT training complete the training before arrival in the designated high-threat countries, as USAID has done for its assigned personnel.

We provided a draft of this report to State and USAID for their review and comment. State and USAID provided written comments, which we have reprinted in appendices IV and V, respectively. State also provided technical comments, which we incorporated as appropriate. State fundamentally concurred with our recommendations.
State noted that it has efforts underway to address these recommendations. In addition, State noted that it established a working group chaired by the Executive Assistant for the Under Secretary for Management in November 2013 to identify areas where improvements can be made, such as in the areas of notification, enrollment, and tracking of FACT training. State indicated that it plans to distribute internal and external guidance on this issue and document all existing and new measures accordingly. USAID did not specifically agree or disagree with our recommendation that it take steps to ensure that all USAID short-term TDY personnel who are required to take FACT training complete the training before arrival in the designated high-threat countries. USAID noted challenges in verifying whether its short-term TDY personnel are in compliance with the FACT training requirement and indicated that the onus is on the employee to fulfill the requirement. However, USAID indicated that it plans to take some steps to assist employees in tracking their compliance with the requirement. Specifically, USAID stated that every country clearance will include (1) a reminder to employees about the FACT training requirement and (2) a statement that employees must keep track of their cumulative days of travel to FACT training posts, so that they do not exceed 45 days at FACT training posts without getting the required training. USAID also noted that its efforts will include a warning that if an employee fails to meet this responsibility, he or she could be subject to discipline. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and the USAID Administrator. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This report reviews issues related to compliance with the Department of State's (State) Foreign Affairs Counter Threat (FACT) training, including internal controls to ensure compliance. Specifically, we examined (1) compliance with the FACT training requirement among civilian personnel employed by State and the U.S. Agency for International Development (USAID) and (2) State's and USAID's oversight of their personnel's compliance with the requirement. Our work focused on personnel at State and USAID, the two primary foreign affairs agencies with civilian personnel in the countries for which State requires FACT training. We met with officials from relevant components of these agencies. For State, we met with officials from the Bureau of Diplomatic Security; Bureau of Human Resources; Office of Management Policy, Rightsizing and Innovation; Orientation and In-Processing Center; relevant headquarters-based regional bureaus; and the Foreign Service Institute. We interviewed State officials in seven of the nine countries for which State required FACT training as of March 2013. For USAID, we met with officials from the agency's Office of Human Resources, the Office of Security, and regional offices for designated high-threat countries. We also interviewed USAID officials in five of the nine countries that were designated as high threat as of March 13, 2013. Although we include Afghanistan in our discussion of State and USAID processes to ensure compliance, we did not assess compliance among State and USAID personnel in Afghanistan because we have previously reported on the extent of compliance with the FACT training requirement among U.S. personnel assigned to that country.
We evaluated the extent of compliance among State and USAID assigned and short-term temporary duty (TDY) personnel in the countries for which State or USAID required FACT training as of March 31, 2013, against the relevant provisions in State’s Foreign Affairs Manual (13 FAM 321-323) and USAID’s Automated Directives System (ADS 458). To do so, we collected available personnel and training data for State and USAID assigned and short-term TDY personnel who were in the designated countries on March 31, 2013. We chose this date because it was a recent date during our initial planning of the review. We assessed compliance with the requirement among State and USAID direct-hire personnel and U.S. personal services contractors, who are required to complete FACT training per State’s Foreign Affairs Manual and USAID’s Automated Directives System. We also assessed the extent of compliance among employed eligible family members of personnel assigned to relevant designated posts in three countries, the only locations in our compliance assessment for which State specified that employed eligible family members were required to complete FACT training, as of March 31, 2013. We excluded Diplomatic Security special agents and security protective specialists from our analysis, because State’s guidance specifically exempts these personnel from the FACT training requirement. We excluded regional security officers, deputy regional security officers, and assistant regional security officers from our analysis, because, according to State, these personnel are categorized as Diplomatic Security special agents and as such are exempt from the FACT training requirement. In addition, we excluded personnel who were at the designated country or post on March 31, 2013, but who arrived before the FACT training requirement took effect for that country. 
Because FACT training is valid for a 5-year period, we obtained FACT training completion data for 2008 through 2013 from State’s Global Employment Management System (GEMS) and Student Training Management System (STMS). For State and USAID assigned personnel who did not complete FACT training, we asked State and USAID officials to provide one of the following: (1) evidence that these individuals completed equivalent training—for example, an employee profile or training roster indicating completion of either of two courses, Security for Non-traditional Operating Environments or the diplomatic security antiterrorism course designed for one country; (2) evidence that these individuals were otherwise exempt from the requirement—for example, an employee profile illustrating that an employee was a Diplomatic Security special agent; or (3) an explanation for the noncompliance of personnel who were not exempt but failed to comply with the requirement. We used data from GEMS and State’s Bureau of African Affairs to compile information for State assigned personnel who were in the designated countries on March 31, 2013. All State records obtained from GEMS contained FACT training completion dates for assigned personnel who had completed the training; we used these completion dates in our compliance review. For USAID assigned personnel, we used data from State’s Post Personnel system and a USAID staffing report for March 31, 2013. The USAID personnel data obtained from State’s Post Personnel system did not contain FACT training completion dates. For our compliance review, we assessed the USAID personnel records against FACT training completion records in STMS. To determine whether a USAID assigned personnel record could be categorized as a match with an STMS record containing a FACT training completion date, we compared data for the employees’ first and last names and available data for partial social security numbers and partial dates of birth. 
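The matching step described above can be sketched as a simple record-linkage rule: names must agree, and partial identifiers must agree wherever both records carry them. Everything in the sketch below is a hypothetical illustration of that rule, not the actual comparison logic or field names used in Post Personnel or STMS:

```python
def records_match(personnel, training):
    """Illustrative match rule: first and last names must agree
    (case-insensitive), and partial SSN and partial date of birth
    must agree wherever both records carry a value. A field missing
    from either record is treated as non-disqualifying."""
    if (personnel["first"].lower() != training["first"].lower() or
            personnel["last"].lower() != training["last"].lower()):
        return False
    for field in ("ssn_last4", "dob_partial"):
        a, b = personnel.get(field), training.get(field)
        if a and b and a != b:
            return False
    return True

# Hypothetical records: the personnel record lacks a partial DOB,
# so only names and partial SSN are compared.
personnel_rec = {"first": "Jane", "last": "Doe", "ssn_last4": "1234"}
stms_rec = {"first": "JANE", "last": "Doe", "ssn_last4": "1234",
            "dob_partial": "07-15"}
print(records_match(personnel_rec, stms_rec))  # True
```

Treating missing fields as non-disqualifying maximizes recall at the cost of possible false matches, which is why the report's methodology also sought corroborating documentation such as training rosters.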
To assess the reliability of the data that we used for assessing State and USAID assigned personnel's compliance with the FACT training requirement, we interviewed agency officials responsible for compiling these data or maintaining the systems that generated the data, and we performed basic reasonableness checks of the data for obvious inconsistencies and gaps. When we found discrepancies or missing data fields, we brought them to the attention of relevant agency officials and worked with the officials to correct the discrepancies and missing fields. To assess the comprehensiveness of USAID Post Personnel data, we compared data for USAID assigned personnel with data from USAID's staffing report for March 31, 2013. We found data on State and USAID assigned personnel to be sufficiently reliable for the purpose of our report. We were unable to conduct an assessment of the extent of compliance with the FACT training requirement among short-term TDY personnel because we determined that the data were not sufficiently reliable for the purposes of our report. We worked for 7 months with various entities within State and at USAID to obtain data identifying the universe of short-term TDY personnel in the designated countries as of March 31, 2013, who were required to take FACT training. We collected data from the eCountry Clearance (eCC) system as well as data provided by posts where short-term TDY personnel were present on March 31, 2013. Through multiple data analysis steps, we identified problems with the reliability of agency and employment type data contained in the eCC system. For example, one eCC record listed an individual's employer incorrectly; according to a State official, the individual identified as a State employee in the eCC system was actually a commercial contractor working for USAID. We ultimately concluded that these data were not sufficient or reliable for the purpose of our report.
To assess the reliability of the FACT training completion data from STMS, we asked State and USAID to provide additional documentation of FACT training completion. We obtained training records, such as rosters, for those personnel who State and USAID officials told us had completed FACT training but who did not have FACT training completion dates in STMS. We asked State's Bureau of Diplomatic Security to explain discrepancies between these records and the STMS data; however, the bureau did not provide clear explanations. To test the reliability of the dates in STMS, we tested a nonrepresentative sample of 27 personnel from a list of State and USAID assigned and short-term TDY personnel who had FACT training completion dates within the last 5 years in STMS. We compared these dates with dates in training rosters that we obtained from State's Bureau of Diplomatic Security for each of the personnel, and we confirmed that all 27 personnel could be accounted for in the training rosters. We determined that the FACT training completion data from STMS were sufficiently reliable for the purposes of our report. To assess State's and USAID's management oversight of assigned and short-term TDY personnel's compliance with the FACT training requirement, we reviewed relevant documents from State and USAID, including State's Foreign Affairs Manual, USAID's Automated Directives System, and other standard operating procedures. We also interviewed knowledgeable State and USAID officials, as well as State Orientation and In-Processing Center contractors, to determine the steps that these agencies take to ensure compliance with the FACT training requirement. We compared these steps with State and USAID management and internal controls guidance. We also assessed the agencies' processes for ensuring compliance against selected standards for enhancing internal controls in Standards for Internal Control in the Federal Government.
We conducted this performance audit from March 2013 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides examples of attacks against U.S. personnel and facilities overseas from September 2012 through September 2013. We derived this information from a chronology developed by the Department of State’s Bureau of Diplomatic Security, Office of Intelligence and Threat Analysis. September 3, 2012 – Peshawar, Pakistan: A U.S. consulate three- vehicle motorcade was attacked by a suicide vehicle-borne improvised explosive device in the vicinity of the U.S. Consulate General’s University Town housing complex. Two U.S. consulate general officers were wounded, as were two locally employed drivers, a local police bodyguard assigned to the Consulate General, and several other policemen providing security for the motorcade. September 8, 2012 – Zabul Province, Afghanistan: The Zabul Provincial Reconstruction Team was hit with two improvised explosive devices. No chief-of-mission personnel were injured. September 10, 2012 – Baghdad, Iraq: A U.S. embassy aircraft reported seeing 7 to 10 tracer rounds of unknown caliber fired behind the aircraft. There were no injuries to chief-of-mission personnel or Office of Security Cooperation in Iraq personnel, and no property damage. September 11, 2012 – Jerusalem, Israel: An Israeli “flash-bang” distraction device was thrown at the front door of an official U.S. consulate general residence. The detonation caused damage to an exterior door and hallway, but no one was hurt in the attack. 
September 11 to 12, 2012 – Benghazi, Libya: A series of attacks involving arson, small-arms and machine-gun fire, rocket-propelled grenades, and mortars were directed at the U.S. Mission in Benghazi and a Mission annex, as well as against U.S. personnel en route between both facilities. Four U.S. government personnel, including the U.S. Ambassador to Libya, were killed. In addition, the attacks severely wounded two U.S. personnel and three Libyan contract guards and resulted in the destruction and abandonment of both facilities. September 11 to 15, 2012 – Cairo, Egypt: Protesters overran the U.S. embassy perimeter defenses and entered the embassy compound. Though the embassy was cleared of intruders by that evening, battles continued between police and the crowd until the morning of September 15, when Central Security Forces cleared the area of protesters. No Americans were injured in the violent demonstrations. September 12, 2012 – Tunis, Tunisia: Approximately 200 demonstrators gathered at the U.S. embassy to protest inflammatory material posted on the Internet. At one point, demonstrators tried to get to the embassy perimeter wall and threw stones at the fence. The police responded immediately and secured the area. No U.S. citizens were injured. September 13, 2012 – Sana’a, Yemen: Approximately 500 protesters pushed past security forces and stormed the U.S. embassy compound, where they caused extensive damage by looting and setting several fires. According to State officials, the physical defensive features of the buildings performed fully as designed by successfully preventing intrusion, despite repeated attempts by protesters to break through them. No U.S. citizens were injured in the attack. Throughout the day, groups of protesters continued to harass a number of chief-of-mission personnel. September 14, 2012 – Chennai, India: Several hundred protesters threw rocks and other material near the U.S. 
Consulate General, to protest inflammatory material posted on the Internet. At one point, a Molotov cocktail was thrown over the consulate wall, causing damage but no injuries. September 14, 2012 – Khartoum, Sudan: A mob of 4,000 protesters ransacked the German and British embassies and stormed the U.S. embassy. During the several-hour siege, the U.S. embassy compound sustained extensive damage. According to State officials, the physical defensive features of the buildings performed fully as designed by successfully preventing intrusion, despite repeated attempts by protesters to break through them. The rioters captured a police truck and set it on fire, then used the vehicle as a makeshift battering ram in an unsuccessful attempt to breach one of the compound's rear entrance doors. Police equipment, including tear gas, was seized from the truck for use against the embassy's defenders, and intruders cut the embassy's local power supply. More than 20 windows were damaged by rocks, and several surveillance cameras were destroyed. September 14, 2012 – Khartoum, Sudan: Police apprehended a man attempting to throw a Molotov cocktail at the U.S. embassy. The embassy building was not damaged and no one was injured. September 14, 2012 – Tunis, Tunisia: Thousands of protesters breached the U.S. embassy wall and caused significant damage to the motor pool, outlying buildings, and the chancery. Separately, unknown assailants destroyed the interior of the American Cooperative School in Tunis. No U.S. citizens were injured in either attack. September 15, 2012 – Sydney, Australia: Violent protesters conducted large demonstrations near the U.S. consulate general. September 16, 2012 – Karachi, Pakistan: Two hundred protesters affiliated with a Shi'a religious group named Majilis-e-wahdat-ul-Muslimeen broke through police lines and threw rocks into the U.S. consulate perimeter, causing damage to the compound access control windows.
No chief-of-mission personnel were injured, but two protesters were killed, and several more were injured as security forces responded. September 17, 2012 – Sydney, Australia: A U.S. citizen who was employed by the U.S. Air Force was assaulted while waiting for a bus in the central business district. This incident took place on the heels of the violent protests near the U.S. Consulate General in Sydney on September 15. September 17, 2012 – Jakarta, Indonesia: Demonstrators threw Molotov cocktails and other material at the U.S. embassy to protest inflammatory material posted on the Internet. Eleven police officers were hurt. No U.S. citizens were injured, and damage to the embassy was minor. September 18, 2012 – Beijing, China: Anti-Japanese protesters walked to the U.S. embassy from the Japanese embassy and surrounded the U.S. Ambassador’s vehicle. No injuries were reported, and there was only minor cosmetic damage to the vehicle. September 18, 2012 – Peshawar, Pakistan: The student wing of Jamaat-e-Islami staged a demonstration at the U.S. consulate. The crowd became violent, throwing rocks and Molotov cocktails and pulling down a billboard showing an American flag. September 27, 2012 – Kolkata, India: Fifteen thousand to 20,000 protesters marched toward the American Center and then rushed the gates, throwing sticks and stones at the facility. There was minor damage to a window. October 1 and 4, 2012 – Kandahar Province, Afghanistan: The Kandahar Provincial Reconstruction Team received small-arms fire. No personnel were injured. October 13, 2012 – Maruf District, Kandahar Province, Afghanistan: As a delegation of U.S. and Afghan officials arrived for a meeting in Maruf District, a suicide bomber detonated a suicide vest. The explosion killed two U.S. citizens and five Afghan officials. One of the U.S. citizens killed was under chief-of-mission authority. October 29, 2012 – Tunis, Tunisia: A U.S. military officer assigned to the U.S. 
embassy was verbally harassed by two men in a car while he was stopped at a traffic light. One of the men threw a can at the officer's car. He was not hurt in the incident. November 4, 2012 – Farah, Afghanistan: A grenade exploded at U.S. Provincial Reconstruction Team Farah. No chief-of-mission personnel were injured in the attack. November 18, 2012 – Peshawar, Pakistan: Two mortars impacted in the vicinity of the U.S. consulate housing cluster in University Town, with one round hitting the offices of the International Medical Committee, a nongovernmental organization. The Consul General's residence sustained shrapnel damage and one local guard was slightly injured. November 21, 2012 – Jakarta, Indonesia: A group of 150 demonstrators, protesting inflammatory material posted on the Internet, staged a demonstration in which they threw objects at the U.S. embassy. November 23, 2012 – Medan, Indonesia: Approximately 100 to 120 protesters from the Islamic Defender's Front arrived at the American Presence Post to protest events in Gaza. The protesters became aggressive and damaged a vehicle gate in an attempt to gain access to the ground floor of the building. November 23, 2012 – Peshawar, Pakistan: A single round of indirect fire impacted a non-U.S. government private residence adjacent to a U.S. consulate residence in the University Town housing cluster. The device did not detonate, no chief-of-mission personnel were injured, and no facilities were damaged. December 4, 2012 – Dhaka, Bangladesh: A U.S. embassy vehicle carrying an embassy driver and police assigned to the embassy was surrounded by protesters on Airport Road. The demonstrators threw rocks and bricks at the vehicle, shattering several windows, injuring the driver, and forcing him off the road; the crowd then attempted to set the vehicle on fire. December 22, 2012 – Tunis, Tunisia: While U.S.
government investigators were visiting the Tunisian Ministry of Justice, protesters forced their way into the building to confront the team. No one was hurt. Photos of the team, taken while they were inside the Ministry of Justice, were later posted on multiple social media and other Internet sites. January 25, 2013 – Cairo, Egypt: Ten men climbed the gate of the U.S. embassy motor pool, destroyed the facility, and stole U.S. government property. January 25, 2013 – Manila, Philippines: A crowd at the U.S. embassy protested against the Visiting Forces Agreement. They threw paint on the U.S. embassy facade. January 28, 2013 – Manila, Philippines: Protesters gathered at the U.S. embassy's consular entrance to demonstrate against the grounding of the USS Guardian. They threw paint on the facade and defaced the embassy seal. February 1, 2013 – Ankara, Turkey: A suicide bomber detonated inside the pedestrian entrance to the U.S. embassy, killing himself and a local guard. The building's facade sustained substantial damage. The Turkish leftwing group Revolutionary People's Liberation Party/Front claimed responsibility for the attack. February 27, 2013 – Helmand Province, Afghanistan: Six rounds of indirect fire impacted northeast of Camp Bastion/Leatherneck. At the time of the attack, a U.S. embassy aircraft was on the ground. No one was hurt in the attack. March 21, 2013 – Baghdad, Iraq: Three rockets were directed at the U.S. diplomatic support center in Baghdad. There were no injuries and minimal damage. April 6, 2013 – Baghdad, Iraq: Two rockets were fired at the U.S. diplomatic support center in Baghdad. There were no injuries or damage to the center. April 6, 2013 – Qalat City, Zabul Province, Afghanistan: A suicide vehicle-borne improvised explosive device and a separate improvised explosive device targeted a provincial reconstruction team movement. The explosion killed a U.S. embassy officer, a U.S. Defense Department-contracted interpreter, and three U.S.
military personnel. Two other Department of State personnel, along with eight members of the U.S. military, and four Afghan civilians were wounded in the blast. The group was en route to a boys' school to hand out books to Afghan students. April 10, 2013 – Baghdad, Iraq: Two rockets were fired at Baghdad International Airport, impacting near the U.S. diplomatic support center compound. There were no injuries or damage to the compound. June 23, 2013 – Baghdad, Iraq: Two rounds of indirect fire were tracked in the vicinity of the international zone, with one confirmed round impacting near the embassy's Military Attaché and Security Assistance Annex. June 29, 2013 – Baghdad, Iraq: Three rounds of indirect fire were tracked to the international zone, with two rounds impacting approximately 200 meters north of the U.S. embassy heliport and the third round reportedly impacting 100 meters north of the Embassy Annex Prosperity. August 10, 2013 – Baghdad, Iraq: Two rounds of indirect fire originating north of the international zone were reported to have landed within the international zone approximately 1 kilometer away from the U.S. embassy. August 13, 2013 – Baghdad, Iraq: Three rounds of indirect fire originating north of the international zone reportedly landed within the international zone, with one round reportedly landing as close as 0.6 kilometers north of the U.S. embassy. September 13, 2013 – Herat, Afghanistan: Taliban-affiliated insurgents conducted a complex attack against U.S. Consulate Herat using truck- and vehicle-borne improvised explosive devices and seven insurgents equipped with small arms, rocket-propelled grenades, and suicide vests. The insurgents detonated a truck-borne improvised explosive device outside the consulate's entry control point followed by a second vehicle-borne improvised explosive device, after which the seven insurgents engaged U.S. and Afghan security personnel in a sustained firefight.
According to State officials, the consulate’s internal defense team neutralized all attackers at the outer perimeter. No U.S. personnel were killed or injured in the attack. The attack lasted approximately 90 minutes and resulted in the death of eight Afghan guard force members and injury to two additional third-country national guard force members. This appendix provides information on the extent to which State and USAID assigned personnel were in compliance with the FACT training requirement as of March 31, 2013, and the reasons for noncompliance. Table 1 shows the number of State assigned personnel we identified as required to complete FACT training by country, and the number of those personnel who complied with the requirement. Table 2 shows the countries where State assigned personnel whom we identified as required to complete FACT training were not in compliance with the requirement, the numbers of noncompliant personnel, and State’s explanations of the noncompliance. As table 3 shows, we found no instances of noncompliance with the FACT training requirement among USAID personnel on assignments at five designated posts on March 31, 2013. At that time, no USAID personnel were on assignments to relevant posts in the three other designated high-threat countries. The following are GAO’s comments on the U.S. Department of State’s letter dated February 18, 2014. 1. The numbers in our report reflect a compliance rate of about 95 percent for State Department assigned personnel. In addition to the contact named above, Judith A. McCloskey (Assistant Director), Jaime Allentuck, Emily Gupta, Jeffrey Isaacs, Farhanaz Kermalli, Lina Khan, and Mona Sehgal made key contributions to this report. Ashley Alley, Emily Christoff, Etana Finkler, Justin Fisher, Reid Lowe, Ruben Montes de Oca, and Steven Putansu provided additional support.

U.S. personnel engaged in efforts overseas have faced numerous threats to their security.
To mitigate these threats and prepare U.S. personnel for work in high-threat environments, State established a mandatory requirement that specified U.S. executive branch personnel under chief-of-mission authority and on assignments or short-term TDY complete FACT security training before arrival in a high-threat environment. This report examines (1) State and USAID personnel's compliance with the FACT training requirement and (2) State's and USAID's oversight of their personnel's compliance. GAO reviewed agencies' policy guidance; analyzed State and USAID personnel data from March 2013 and training data for 2008 through 2013; reviewed agency documents; and interviewed agency officials in Washington, D.C., and at various overseas locations. This public version of a February 2014 sensitive report excludes information that State has deemed sensitive. Using data from multiple sources, GAO determined that 675 of 708 Department of State (State) personnel and all 143 U.S. Agency for International Development (USAID) personnel on assignments longer than 6 months (assigned personnel) in the designated high-threat countries on March 31, 2013, were in compliance with the Foreign Affairs Counter Threat (FACT) training requirement. GAO found that the remaining 33 State assigned personnel on such assignments had not complied with the mandatory requirement. For State and USAID personnel on temporary duty of 6 months or less (short-term TDY personnel), GAO was unable to assess compliance because of gaps in State's data. State does not systematically maintain data on the universe of U.S. personnel on short-term TDY status to designated high-threat countries who were required to complete FACT training. This is because State lacks a mechanism for identifying those who are subject to the training requirement. These data gaps prevent State or an independent reviewer from assessing compliance with the FACT training requirement among short-term TDY personnel. 
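The assigned-personnel counts above imply the roughly 95 percent compliance rate that State's comment letter cites. A minimal sketch of that arithmetic, using the report's counts (the function name is ours, for illustration only):

```python
# Sketch of the compliance arithmetic; the counts are the report's
# figures for assigned personnel on March 31, 2013.

def compliance_rate(compliant: int, total: int) -> float:
    """Return compliance as a percentage of the population reviewed."""
    return 100.0 * compliant / total

state_rate = compliance_rate(675, 708)   # State assigned personnel
usaid_rate = compliance_rate(143, 143)   # USAID assigned personnel

print(f"State: {state_rate:.1f}%")   # about 95 percent
print(f"USAID: {usaid_rate:.1f}%")
```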
According to Standards for Internal Control in the Federal Government, program managers need operating information to determine whether they are meeting compliance requirements. State's guidance and management oversight of personnel's compliance with the FACT training requirement have weaknesses that limit State's ability to ensure that personnel are prepared for service in designated high-threat countries. These weaknesses include the following: State's policy and guidance related to FACT training—including its Foreign Affairs Manual, eCountry Clearance instructions for short-term TDY personnel, and guidance on the required frequency of FACT training—are outdated, inconsistent, or unclear. For example, although State informed other agencies of June 2013 policy changes to the FACT training requirement, State had not yet updated its Foreign Affairs Manual to reflect those changes as of January 2014. The changes included an increase in the number of high-threat countries requiring FACT training from 9 to 18. State and USAID do not consistently verify that U.S. personnel complete FACT training before arriving in designated high-threat countries. For example, State does not verify compliance for 4 of the 9 countries for which it required FACT training before June 2013. State does not monitor or evaluate overall levels of compliance with the FACT training requirement. State's Foreign Affairs Manual notes that it is the responsibility of employees to ensure their own compliance with the FACT training requirement. However, the manual and Standards for Internal Control in the Federal Government also note that management is responsible for putting in place adequate controls to help ensure that agency directives are carried out. The gaps in State oversight may increase the risk that personnel assigned to high-threat countries do not complete FACT training, potentially placing their own and others' safety in jeopardy.
GAO is making several recommendations to improve oversight of compliance with the FACT training requirement. These include identifying a mechanism to readily determine the universe of U.S. personnel subject to the requirement, updating State's policy manual to reflect changes made to the requirement in June 2013, consistently verifying that all U.S. civilian personnel have completed FACT training before arriving in designated high-threat countries, and monitoring compliance with the requirement. State concurred with the recommendations and stated that it will take steps to address them. USAID did not specifically agree or disagree but noted it plans to take additional steps.
Under the Jones Act, Puerto Rico is part of the United States for purposes of acquiring citizenship of the United States by place of birth. Thus, a person born in Puerto Rico is typically considered a U.S. person for U.S. tax purposes and thus is subject to the U.S. Internal Revenue Code (IRC). However, IRC has different tax rules for residents of Puerto Rico than it does for residents of the United States. Section 933 of IRC provides that income derived from sources within Puerto Rico by an individual who is a resident of Puerto Rico generally will be excluded from gross income and exempt from U.S. taxation, even if such resident is a U.S. citizen. Section 933 does not exempt residents of Puerto Rico from paying federal taxes on U.S. source income and foreign source income. Nor does section 933 affect the federal payroll taxes that residents of Puerto Rico pay. Federal employment taxes for Social Security, Medicare, and unemployment insurance apply to residents of Puerto Rico on the same basis and for the same sources of income as they apply to all other U.S. residents. Puerto Rico has had authority to enact its own income tax system since 1918. The current individual income tax system of Puerto Rico is broadly similar to the U.S. individual income tax system. The Puerto Rican and the U.S. corporate income tax rules have many similarities and some differences. The structure of Puerto Rico’s income tax system is discussed in appendixes II and III. The current Puerto Rican income tax system is a significant source of revenue for the Puerto Rican government. In fiscal year 1992, individual and corporate income taxes totaled about 40 percent of Puerto Rico’s total revenues, with transfers from the federal government accounting for about 30 percent of revenue and other taxes, such as excise taxes, generating about 18 percent. The balance of the Commonwealth’s revenues came mainly from nontax sources.
For fiscal year 1992, the Puerto Rico Treasury collected about $1.1 billion in individual income taxes and about $1.02 billion in corporate income taxes. About 42 percent of the corporate tax (about $426 million) was paid by U.S. subsidiaries covered by the possessions tax credit. The remaining 58 percent (about $594 million) was paid by corporations not covered by the credit. In addition, about $10 billion of income earned by corporations in Puerto Rico was exempted from the local corporate income tax as a result of Puerto Rico’s industrial tax incentive legislation. Currently, IRC has special income tax provisions that extend tax benefits to Puerto Rico that are not available to the states. The United States exempts all bonds issued by the Government of Puerto Rico from income taxation at the federal, state, and local levels. Corporations organized in Puerto Rico are generally treated as foreign corporations for U.S. income tax purposes. Like other foreign corporations, they are taxed on their U.S. source income, but their Puerto Rico source income is not subject to U.S. tax. Foreign corporations pay U.S. tax at two rates: a flat 30-percent tax is withheld on certain forms of income not effectively connected with the conduct of a trade or business within the United States, and tax at progressive rates is imposed on income that is effectively connected with a U.S. trade or business. Much interest income is exempt from the withholding tax. Also, IRC’s possessions tax credit effectively exempts from federal taxation a portion of the income qualified subsidiaries of U.S. corporations (corporations organized in any state of the United States) earn in the possessions. Tax rules related to possessions source income are discussed in more detail in appendix III. As of July 1995, 651,201 individual income tax returns for tax year 1992 had been filed with the Government of Puerto Rico.
Some of the individuals filing those returns paid federal income tax because they had income from sources within the United States. However, due to section 933 of IRC, which excludes Puerto Rico source income from federal taxation, the vast majority of Puerto Rican taxpayers were not subject to the federal income tax. If current federal tax rules were applied to residents of Puerto Rico in the same manner as they are applied to residents of the 50 states, and if the income and demographic characteristics of Puerto Rican taxpayers were the same as those reported on their 1992 tax returns, we estimate that the 651,201 filers would have owed about $623 million in federal income tax before taking EITC into account. The aggregate amount of EITC earned by these taxpayers would have been about $574 million; thus, the aggregate net federal tax liability would have been about $49 million (see table 1). We estimate that 384,107 filers, or about 59 percent of the total number, would have earned some EITC. The average amount of EITC earned by the 384,107 filers would have been about $1,494. The median EITC would have been about $1,623 (see table 2). Our estimates indicate that, before taking the federal child and dependent care tax credit (DCTC) into account, about 41 percent of the 651,201 households that filed Puerto Rican income tax returns in 1992 would have had positive federal income tax liabilities, about 53 percent would have received net transfers from the federal government because their EITC would have more than offset their precredit liabilities, and the remaining 6 percent would have had no federal tax liability. The lack of adequate information on the child and dependent care expenses of Puerto Rican taxpayers made it impossible for us to estimate the amount of DCTC that each taxpayer in our Puerto Rico database would have earned.
The nonrefundable DCTC could only have reduced the number of households having positive tax liabilities and increased the numbers with zero liabilities or net transfers. However, it seems unlikely that the DCTC would have caused a large number of taxpayers to shift from one status to another because our estimates indicate that the average credit earned by those claiming the credit would likely be less than $500. Taxpayers would only move from having a positive tax liability to having a zero tax liability, or receiving a net transfer, if they claimed the credit and if their precredit tax liability were less than the amount of credit claimed. If the federal income tax had been fully extended to residents of Puerto Rico in 1992, it seems likely that additional individuals and married couples who had not filed Puerto Rican tax returns would have filed federal tax returns in order to take advantage of EITC. Individuals with AGIs less than or equal to $3,300 and married couples with AGIs less than or equal to $6,000 were not required to file Commonwealth tax returns in 1992. However, some of these individuals filed in order to claim refunds of taxes that had been withheld on their wages, dividends, or interest. Others did not file, for example, because they were not subject to withholding taxes on their wages or salaries, as is the case for domestic workers and farm laborers, or because the amount withheld was small. We have no way of knowing with certainty how many of these residents who currently are not required to file would file in order to claim EITC. However, to derive an estimate of what this number might be, we considered the number of people who had income levels below the income tax threshold and also were exempt from withholding as an upper limit of the number of these potential additional filers. If all of those residents claimed EITC, we estimated that they would have qualified for about $64 million. 
If the additional EITC that could have been claimed by nonfiler residents were about $64 million, our estimate of the aggregate amount of EITC that would have been earned would have increased from about $574 million to about $638 million. This additional EITC would be sufficient to eliminate the $49 million of aggregate net federal income tax liability that we estimated would exist for the population that did file. Our estimates do not reflect other potential behavioral responses to the availability of the credit or the imposition of the federal income tax. For example, we were not able to estimate the number of potential EITC claimants who currently are not filing, even though they are legally obligated to file. For tax year 1992, Puerto Rican taxpayers reported about $1.03 billion in individual income tax. We estimated that, if current federal tax rules had been fully applied to residents of Puerto Rico and, if there were no behavioral responses to this new taxation, then the aggregate federal income tax liability of Puerto Rican taxpayers in 1992 would have been about $49 million. If the Government of Puerto Rico had wanted to keep the amount of combined federal and commonwealth individual income tax the same as it was without the imposition of full federal income tax, then it would have had to reduce the aggregate liability imposed by its own individual income tax by about 5 percent. If we allowed for the potential expansion of the filing population in response to the availability of EITC (to include residents who had no withholding), then the estimated aggregate federal income tax liability would have been essentially eliminated. In that case, the Government of Puerto Rico would not have to change its own income tax to keep the aggregate combined income tax constant. There are, however, other reasons why the Government of Puerto Rico may have adjusted its own income tax under these circumstances. 
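The aggregate arithmetic in the preceding paragraphs can be sketched as follows. All dollar figures (in millions) are the report's estimates; the variable names and the calculation layout are ours:

```python
# Report estimates, in millions of 1992 dollars.
precredit_tax = 623.0    # federal income tax before EITC, 1992 filers
eitc_filers = 574.0      # aggregate EITC earned by those filers

net_liability = precredit_tax - eitc_filers
print(net_liability)     # about $49 million aggregate net liability

# Upper-bound EITC for residents who did not file but could have
# filed solely to claim the credit.
eitc_nonfilers = 64.0
eitc_total = eitc_filers + eitc_nonfilers
print(eitc_total)                   # about $638 million total EITC
print(precredit_tax - eitc_total)   # slightly negative: liability eliminated
```

With the expanded filing population, the net figure turns slightly negative, which is why the report describes the aggregate liability as essentially eliminated.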
In comments on a draft of this report, the Secretary of the Treasury of Puerto Rico stated that his government would adjust the island’s fiscal system to provide relief to taxpayers who would have positive federal income tax liabilities if the federal income tax were fully extended to residents of Puerto Rico. There are several ways to compare individual income tax across jurisdictions. A comparison of per-capita tax shows how much, in dollars, the average resident in each jurisdiction bears. Personal income provides a better indication of a jurisdiction’s tax capacity than does population because a person’s ability to pay taxes rises as his or her income rises. A comparison of taxes paid as a percentage of total state or commonwealth personal income shows, approximately, the relative extent to which each jurisdiction draws upon its residents’ ability to pay. When comparing individual income taxes paid, however, it is important to recognize that some jurisdictions may have relatively low individual income taxes because they rely more heavily on other revenue sources. It is also important to note that comparisons of average taxes paid across jurisdictions do not show the comparative taxes paid by specific classes of taxpayers in each jurisdiction. In per-capita terms, Puerto Rico’s individual income tax is relatively low. In 1992, the per-capita tax burden of Puerto Rico’s individual income tax was about $341. The state and local income taxes in 33 states, and the District of Columbia, were higher per capita. Moreover, since residents of Puerto Rico currently pay a relatively small amount of federal income tax, the combined federal and Commonwealth per-capita income taxes in Puerto Rico are lower than those in any of the 50 states and the District of Columbia. 
If residents of Puerto Rico had been fully subject to the federal income tax in the same manner as residents of the 50 states were, we estimate that the per-capita federal income tax in Puerto Rico would have been about $14 in 1992. In this case, if the Government of Puerto Rico did not adjust its own income tax in response to the imposition of the federal tax, the combined federal and Commonwealth income tax in Puerto Rico would have been about $355 per capita. This amount is about a third of the per-capita combined federal, state, and local income taxes in Mississippi, which has the lowest per-capita income taxes of any state. (See app. IV.) One reason why Puerto Rico’s per-capita income tax is relatively low is that per-capita personal income in Puerto Rico is significantly lower than that in any of the 50 states and the District of Columbia. In 1992, Puerto Rico’s per-capita personal income was $6,428, compared to $14,083 in Mississippi, the state with the lowest per-capita personal income. Puerto Rico’s individual income tax collections amounted to 5.3 percent of the Commonwealth’s personal income in 1992. This percentage is higher than that of the state and local income tax collections in any of the states and the District of Columbia. New York state, where state and local income taxes amounted to 4.2 percent of state personal income, ranked closest to Puerto Rico. (See app. IV.) One reason why Puerto Rico’s income tax as a percentage of personal income is high, relative to those of the 50 states and the District of Columbia, is that Puerto Rico relies more heavily on income taxes as a source of revenue than do most of those other jurisdictions. In 1992, only two states, Maryland and Massachusetts, relied more heavily on their state and local individual income taxes than Puerto Rico did. Puerto Rico’s reliance on its corporate income tax was also much higher than that of any state or the District of Columbia.
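The per-capita figures above can be checked with simple division. The inputs are the report's 1992 numbers; the computation and the variable names are ours:

```python
# Report figures for 1992.
pr_tax_per_capita = 341.0        # Commonwealth individual income tax, per capita
federal_per_capita = 14.0        # estimated federal tax per capita if fully applied
pr_income_per_capita = 6428.0    # Puerto Rico per-capita personal income

combined = pr_tax_per_capita + federal_per_capita
print(combined)                  # about $355 per capita

share = 100.0 * pr_tax_per_capita / pr_income_per_capita
print(f"{share:.1f}%")           # about 5.3 percent of personal income
```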
Puerto Rico does not levy a general sales tax and received only 5.8 percent of its general revenues from property taxes. In contrast, in the vast majority of states, general sales taxes and property taxes account for at least 25 percent of general revenues. (See app. IV.) Despite Puerto Rico’s heavy reliance on its individual income tax, the combined federal, state, and local individual income taxes, as a percentage of personal income, were significantly lower in Puerto Rico than in any of the states or the District of Columbia because residents of Puerto Rico paid little federal income tax. If residents of Puerto Rico had been fully subject to the federal income tax in 1992, and Puerto Rico did not alter its own income tax, we estimate that the combined income taxes would have amounted to about 5.5 percent of Commonwealth personal income. Combined income taxes in Mississippi amounted to 8.2 percent of state personal income in 1992. In no other state or the District of Columbia did combined income taxes amount to less than 9 percent of personal income. (See app. IV.) Although the combined average income tax rates paid by residents of Puerto Rico would not have changed substantially, higher-income residents of Puerto Rico would, unless the Government of Puerto Rico adjusted its own income tax rate schedule, face substantial increases in their combined marginal income tax rates if they were fully subject to the federal income tax. These individuals would face much higher combined marginal income tax rates than similar individuals residing in any of the 50 states or the District of Columbia face. Under Puerto Rico’s current income tax law, marginal tax rates can reach as high as 38 percent over certain ranges of income. Rates for single taxpayers and married taxpayers filing joint returns in Puerto Rico reach 31 percent when taxable income is as little as $30,001.
Rates for married taxpayers filing separately reach 31 percent when taxable income is as little as $15,001. In contrast, as of 1994, in no state or the District of Columbia did state and local marginal tax rates exceed 12 percent for any taxpayers at any income level. With the full imposition of the federal income tax, some residents of Puerto Rico could face combined marginal income tax rates of over 70 percent, unless the Government of Puerto Rico adjusted its own tax. Neither the Joint Committee on Taxation nor the U.S. Department of the Treasury has made public any recent estimates of the amount of revenue that would be saved if the possessions tax credit were eliminated immediately. The last publicly available revenue estimate that the Joint Committee made for an immediate repeal of the possessions tax credit, without any phase-out, was in February 1993. At that time, it estimated that the repeal of the credit would increase revenues by $4.1 billion in 1996. That estimate did not reflect the significant limitations that the Omnibus Budget Reconciliation Act (OBRA) of 1993 subsequently placed on the use of the credit. Since the 1993 changes reduced the benefits provided by the credit, the February 1993 estimate was higher than it would have been if the Joint Committee had known about the changes. The U.S. Department of the Treasury also has not publicly released a revenue estimate for the immediate repeal of the credit since the OBRA 1993 changes. The Seven-Year Balanced Budget Reconciliation Act of 1995 (H.R. 2491) would have repealed the possessions tax credit after December 31, 1995, had it not been vetoed by the President. The act contained a grandfather rule that would have gradually phased out the credit for existing credit claimants over a period of up to 10 years. The Joint Committee on Taxation estimated that this phasing out of the credit would save the Treasury $255 million in 1996 and a total of $2.5 billion from 1996 through 2000.
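The combined marginal rate of over 70 percent cited above can be illustrated with a simple sum. The 38 percent Puerto Rico top rate is the report's figure; the 39.6 percent top federal statutory rate for 1995 is our assumption, and the addition ignores any deduction or credit interaction between the two taxes:

```python
pr_top_rate = 38.0        # Puerto Rico top marginal rate (report figure)
federal_top_rate = 39.6   # assumed 1995 top federal statutory rate

# Simple additive combination, ignoring deductibility and credits.
combined_marginal = pr_top_rate + federal_top_rate
print(f"{combined_marginal:.1f}%")   # over 70 percent
```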
This revenue estimate is relevant only to the very specific phase-out rules contained in the act. Other phase-out schemes could have much different revenue consequences. The Joint Committee on Taxation and the Treasury Department have made “tax expenditure” estimates for the possessions tax credit as recently as September 1995 and March 1996, respectively. The latest Joint Committee estimates indicated that the tax expenditure would be $3.4 billion in 1996, growing to $4.4 billion by 2000. The Treasury Department estimated that the tax expenditure would be $2.8 billion in 1996, rising to $3.4 billion by 2000. The Joint Committee and Treasury both use a different approach for making tax expenditure estimates for specific tax preferences than they use for estimating the revenue gains that would occur if those preferences were eliminated. A revenue gain estimate reflects expected behavioral changes on the part of taxpayers in response to the elimination of a particular preference; a tax expenditure estimate, which represents the amount of tax benefit that taxpayers would receive if the preference were not repealed, does not reflect any behavioral changes. If a tax credit were eliminated, taxpayers would be likely to seek ways to avoid paying the full amount of tax that the credit had previously offset. For example, if the possessions tax credit were repealed, U.S. corporations might shift some of their investment out of Puerto Rico to operations in foreign countries, where some of the income might not be immediately subject to U.S. taxation. Due to the differences in behavioral assumptions, if either the Joint Committee or Treasury were to make both a tax expenditure estimate for a tax credit and a revenue gain estimate for the elimination of the credit, using the same set of economic forecasts and the same data, the revenue gain estimate could very well be smaller than the tax expenditure estimate. 
On the other hand, imprecisions in other assumptions and in the economic forecasts that the Joint Committee or Treasury uses could cause both the tax expenditure estimate and the revenue gain estimate to either overstate or understate the true amount of revenue that would flow into the Treasury if the credit were eliminated. To calculate an estimate of the amount of personal income taxes the United States would collect from residents of Puerto Rico and to analyze issues related to EITC, we obtained individual income tax data from the Government of Puerto Rico. The data included selected items from each individual income tax return filed with the Department of the Treasury of Puerto Rico in 1992, the last year for which detailed information was available. The data we used were the best available. However, they were taken from an administrative database that had not been cleaned of all errors or inconsistencies. We did our own consistency checks and, with the assistance of the Department of the Treasury of Puerto Rico, corrected the significant errors we detected. Some inconsistencies remain in the data, but we determined that the data are adequate to provide general information about the magnitude of the potential revenue effect of extending full federal income taxation to the residents of Puerto Rico. We documented the structure of the Puerto Rican individual income tax system and compared it to the U.S. tax system. On the basis of the tax law summary table in appendix II and the data provided by the Commonwealth, we prepared a computer program to estimate the federal income tax that would have been paid if each Puerto Rican 1992 individual filer had filed a U.S. individual tax return according to U.S. tax rules that had been adopted as of December 31, 1995. With one exception, we used U.S. tax rules that were effective for tax year 1995. The one exception was that we used the rules governing EITC that became fully phased in for tax year 1996.
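The estimation approach described above, applying federal tax rules return by return and summing the results, can be sketched as follows. The bracket schedule, EITC parameters, and sample incomes below are simplified placeholders, not the actual GAO program or the 1995/1996 federal tax parameters:

```python
def federal_tax_before_eitc(taxable: float) -> float:
    """Apply a placeholder progressive rate schedule (illustrative only)."""
    brackets = [(23_350.0, 0.15), (56_550.0, 0.28), (float("inf"), 0.31)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable > lower:
            # Tax only the slice of income that falls in this bracket.
            tax += (min(taxable, upper) - lower) * rate
        lower = upper
    return tax

def eitc(earned: float) -> float:
    """Placeholder EITC: phase-in, plateau, then phase-out (illustrative only)."""
    phase_in_end, plateau_end, phase_out_end, rate = 8_900.0, 11_600.0, 28_500.0, 0.36
    max_credit = phase_in_end * rate
    if earned <= phase_in_end:
        return earned * rate
    if earned <= plateau_end:
        return max_credit
    if earned >= phase_out_end:
        return 0.0
    return max_credit * (phase_out_end - earned) / (phase_out_end - plateau_end)

# Simulate a few illustrative returns (incomes are made up, not 1992 data).
returns = [4_000.0, 12_000.0, 25_000.0, 60_000.0]
precredit = sum(federal_tax_before_eitc(r) for r in returns)
credits = sum(eitc(r) for r in returns)
print(round(precredit), round(credits), round(precredit - credits))
```

Running such a calculation over every record in the 1992 return database, rather than over this handful of made-up incomes, yields aggregate precredit tax, aggregate EITC, and the net liability figures of the kind reported above.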
We did not attempt to predict how taxpayers would respond to the new incentives and disincentives they would face under U.S. tax law. Behavioral responses of corporate taxpayers to the elimination of the possessions tax credit would be of particular importance to the aggregate amount of income earned in Puerto Rico. According to officials from the Department of the Treasury of Puerto Rico, corporations covered by the credit directly employed about 109,000 Puerto Rican residents in 1995. As we concluded in our earlier report on the possessions tax credit, reliable estimates of the impact that the elimination of the credit would have on Puerto Rico’s economy cannot be made. A second important limitation of our estimate of federal individual income tax liabilities results from deficiencies in the available data. The Puerto Rican tax returns do not contain all of the information that we would need to accurately simulate certain aspects of the federal tax code. For example, under Puerto Rico tax rules, interest from U.S. federal securities is exempt from taxation. No information about this type of interest is reported on the return, and accordingly, we do not have the data to estimate its effect on a possible U.S. tax liability. To compare the combined income tax burden of the Commonwealth of Puerto Rico to the combined income tax burden of the 50 states and the District of Columbia, we analyzed federal, state, and local individual income taxes in per-capita terms and as a percentage of personal income using published data from the Advisory Commission on Intergovernmental Relations (ACIR), the Commonwealth of Puerto Rico, and IRS Statistics of Income. Further details on our methodology are contained in appendix I. As agreed with your staff, we did not produce our own estimate of the amount of revenue the U.S. Treasury could obtain by eliminating the possessions tax credit. We have simply presented the Joint Committee on Taxation’s and the U.S.
Treasury’s estimates of the tax expenditure for the credit. The Puerto Rico Treasury was unable to provide us with detailed data relating to corporations operating in Puerto Rico that are not covered by the possessions tax credit. There are differences between Puerto Rico’s corporate income tax and the federal corporate income tax. In the absence of detailed data relating to the incomes and deductions reported by corporations not covered by the possessions tax credit, we cannot say whether federal income taxation of these corporations would have yielded significantly more or significantly less revenue than the approximately $594 million of income tax actually collected from these corporations by Puerto Rico in 1992. Marginal tax rates for corporations are generally higher in Puerto Rico than in the United States, but Puerto Rico provides significant tax exemptions for income earned from certain designated activities. Appendix III provides a description of the principal differences in the treatment of corporate and partnership income under the Puerto Rican and federal tax codes. We did our work in Washington, D.C., between August 1995 and June 1996 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of the Treasury of the Commonwealth of Puerto Rico, from officials of the U.S. Treasury, and from the Internal Revenue Service. We discussed the draft on June 7, 1996, with responsible officials from the Office of the Assistant Secretary of the Treasury for Tax Policy. We discussed the draft on June 11, 1996, with the Secretary of the Treasury of Puerto Rico and members of his staff. The Secretary also provided us with written comments, the full text of which, excluding an attachment of technical comments, is presented in appendix V. IRS’ Office of the Associate Chief Counsel provided us with written comments relating to our descriptions of sections of IRC. 
Most of the comments from the various officials brought corrected and updated information to our attention. There were also suggestions that parts of our presentation needed to be clarified. We considered their comments and modified the report where appropriate. The U.S. and Puerto Rican officials made several comments that merit special attention. First, officials from both the U.S. and Puerto Rican Departments of the Treasury pointed out that we did not address the distributional effects that a full imposition of the federal income tax would have in Puerto Rico. An official from the U.S. Treasury noted that the combined marginal income tax rates of higher-income individuals in Puerto Rico would be significantly higher than the combined marginal rates on similar individuals in any of the 50 states or the District of Columbia. He suggested that the Government of Puerto Rico would be compelled to modify its own tax system to avoid these extremely high rates. The Secretary of the Treasury of Puerto Rico noted that his government would have to make significant adjustments to the island’s fiscal system to provide relief for those who would have positive federal income tax liabilities. IRS’ Associate Chief Counsel noted that U.S. persons who currently pay Puerto Rican income tax as well as federal income tax, such as U.S. military personnel stationed on the island, can claim a foreign tax credit against their federal income tax liability. If the Puerto Rican income tax were to be treated as a state income tax, these individuals would only be allowed to claim a deduction for that tax, not a credit. As a result, their U.S. income tax liabilities could increase significantly if Puerto Rico did not adjust its income tax. We agree that the full imposition of the federal income tax could have significant impacts on specific groups of taxpayers in Puerto Rico, even though the impact on aggregate federal revenue might be negligible.
However, the data and our estimating methodology did not support a detailed distributional analysis. We did not mention possible policy responses by the Government of Puerto Rico because that was beyond the scope of this study. In the section of our report that compares the combined individual income taxes in Puerto Rico with those in the 50 states and the District of Columbia, we have added a comparison of the marginal tax rates for Puerto Rico’s income tax with the marginal income tax rates for other U.S. jurisdictions. The top marginal income tax rate in Puerto Rico is significantly higher than the rates in the other jurisdictions. Officials from both the U.S. and Puerto Rican Treasuries were concerned about our discussion of local corporations operating in Puerto Rico that are not covered by the possessions tax credit. The officials felt that we improperly implied that the amount of income tax revenue that the Government of Puerto Rico currently collects from these corporations indicates roughly the amount of revenue that the federal government might collect if the corporations were subject to the full federal income tax. We tried to make clear in our draft that there are differences between Puerto Rico’s corporate income tax and the federal corporate income tax and that potential federal revenues could be greater or less than the amount that the Government of Puerto Rico currently collects. In response to the comments, we moved some of the discussion of differences between the two corporate income taxes forward from an appendix to the body of the letter. Finally, the Secretary of the Treasury of Puerto Rico noted that our report does not address all of the consequences that are likely to follow from a major change in the fiscal relations between Puerto Rico and the federal government. He said that, in particular, we do not address potential changes in federal transfers to Puerto Rico. 
We agree that there are important considerations relating to potential changes in fiscal relations that are beyond the scope of this report. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Ranking Minority Members of the House Committee on Resources and the Subcommittee on Native American and Insular Affairs, and to other appropriate congressional committees. We will also send copies to the Commissioner of the IRS, the Secretary of the Treasury, representatives of the government of Puerto Rico, and other interested parties. Copies will also be made available to others upon request. This work was performed under the direction of James Wozny, Assistant Director, Tax Policy and Administration Issues. Major contributors to this report are listed in appendix VI. If you have any questions, please contact me on (202) 512-9044. The Chairman, House Committee on Resources, and the Chairman, House Subcommittee on Native American and Insular Affairs, Committee on Resources, requested that we provide certain data regarding the potential effects of extending federal income taxation to Puerto Rico.
Specifically, they asked that we provide estimates of (1) the amount of federal income tax that individuals residing in Puerto Rico would pay if they were treated in the same manner as residents of the 50 states, the amount of earned income tax credits (EITC) Puerto Rican residents would receive, the percentage of taxpayers who would have positive federal tax liabilities, and the percentage who would earn EITC; (2) the extent to which the Government of Puerto Rico would have to reduce its own income tax if it were to keep the amount of combined income tax (both federal and Commonwealth) on individuals the same as it was without the full imposition of the federal tax; (3) how the amount of income taxes paid by the average taxpayer in Puerto Rico compares with the amount of combined federal, state, and local income taxes paid by residents in the 50 states and the District of Columbia; and (4) the amount of revenue the U.S. Treasury could obtain by the repeal of the possessions tax credit, which effectively exempts from federal taxation a portion of the income that subsidiaries of U.S. corporations earn in Puerto Rico. To calculate the amount of personal income taxes the United States would collect from residents of Puerto Rico and analyze issues related to EITC, we obtained individual income tax data from the Government of Puerto Rico. These data included selected items from each individual income tax return filed with the Department of the Treasury of Puerto Rico in 1992, the last year for which detailed information was available. The data we used were the best available. However, they were taken from an administrative database that had not been cleaned of all errors or inconsistencies. We did our own consistency checks and, with the assistance of the Department of the Treasury of Puerto Rico, corrected the significant errors we detected. 
Some inconsistencies remain in the data, but we determined that the data is adequate to provide general information about the magnitude of the potential revenue effect of extending full federal income taxation to the residents of Puerto Rico. To estimate the total U.S. federal income tax related to extending the federal income tax to Puerto Rico, we documented the elements that made up Puerto Rico’s taxable income, deductions, exemptions, and credits and compared them to the U.S. federal income tax. To aid that process, we have prepared a summary table tracing each line item from the U.S. 1040 return and schedule of itemized deductions to a comparable item in the 1992 Puerto Rican individual income tax return. On the basis of the tax law summary table in appendix II and the data provided by the Commonwealth, we prepared a computer program to estimate the federal income tax that would have been paid if (1) each Puerto Rican 1992 individual filer had filed a U.S. individual tax return according to U.S. tax rules that had been adopted as of the end of 1995 and (2) his or her filing behavior had not changed as a result of the imposition of U.S. income taxes. U.S. tax law was used to determine U.S. tax treatment of Puerto Rican tax return income, exemption, and deduction items. We assumed that the taxpayers took advantage of any U.S. credits or deductions that were not available under Puerto Rico law, if we had sufficient data to presume their eligibility for those credits and deductions. The estimate of U.S. federal tax liabilities that we produced in this manner differs in several important ways from an estimate of the amount of revenue that the United States would actually receive if the federal income tax were actually imposed on Puerto Rico residents for tax year 1995. We have not attempted to estimate how the extension of individual and corporate income taxes or any federal aid programs would affect the pretax incomes of Puerto Rican taxpayers. 
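The per-return computation described above can be sketched in a few lines. The field names, the simplified construction of AGI, and the bracket schedule below are illustrative placeholders rather than the actual appendix II crosswalk or the 1995 rate tables.

```python
# A minimal sketch of the per-return simulation. The real program mapped
# every Puerto Rican line item to its federal counterpart (appendix II);
# here the crosswalk is reduced to a few hypothetical fields, and the
# bracket schedule is illustrative, not the actual 1995 rate tables.

ILLUSTRATIVE_BRACKETS = [        # (upper bound of bracket, marginal rate)
    (39_000.0, 0.15),
    (94_250.0, 0.28),
    (float("inf"), 0.31),
]

def federal_tax_from_pr_return(pr_return: dict) -> float:
    """Estimate a federal liability for one 1992 Puerto Rican return."""
    # Map Puerto Rican items to federal income (hypothetical field names).
    agi = (pr_return.get("wages", 0.0)
           + pr_return.get("interest", 0.0)
           + pr_return.get("business_income", 0.0))
    # Subtract the deductions and exemptions the filer appears eligible for.
    taxable = max(agi - pr_return.get("deductions", 0.0)
                      - pr_return.get("exemptions", 0.0), 0.0)
    # Run taxable income through the bracket schedule.
    tax, lower = 0.0, 0.0
    for upper, rate in ILLUSTRATIVE_BRACKETS:
        tax += rate * max(min(taxable, upper) - lower, 0.0)
        lower = upper
    return tax
```

For example, a return with $50,000 of wages, $6,550 of deductions, and $5,000 of exemptions would be taxed on $38,450 of taxable income under this illustrative schedule, about $5,768 of tax.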
Another important limitation results from deficiencies in the data available for our estimate. The Puerto Rican tax returns do not contain all of the information that we would need to accurately simulate certain aspects of the federal tax code. For example, under Puerto Rico's tax rules, interest from federal securities is exempt from taxation. Similarly, unemployment compensation is not included in Puerto Rico's definition of gross income, whereas it is in U.S. tax law. Information about this interest and unemployment compensation is not reported on the return, and accordingly, we did not have the data to estimate its effect on a possible U.S. tax liability. The analysis in appendix II describes the extent to which we could or could not estimate amounts for each line item on the federal tax return from data on Puerto Rican returns. Finally, a study of compliance with Puerto Rico's income tax prepared for the Puerto Rico Treasury revealed that noncompliance with Puerto Rico income tax laws is significantly more extensive than noncompliance with federal income tax laws. This study indicated that the total income gap (the amount of adjusted gross income (AGI) that went unreported) in 1991 for Puerto Rico was about $3.71 billion, or 26 percent of total income, while for the United States the income gap was about $447.1 billion, or 12 percent. Our estimates reflect the compliance behavior of Puerto Rican taxpayers in 1992. They do not take into account any change in compliance rates in Puerto Rico that have occurred since 1992 or that might occur if full federal income taxation were imposed.
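As a rough consistency check, the total income bases implied by these gap figures can be recovered by dividing each gap by its reported share of total income:

```python
# Implied total AGI bases behind the reported income gaps.
pr_gap, pr_share = 3.71e9, 0.26      # Puerto Rico, 1991
us_gap, us_share = 447.1e9, 0.12     # United States, same study period

pr_total = pr_gap / pr_share         # about $14.3 billion
us_total = us_gap / us_share         # about $3,726 billion

print(round(pr_total / 1e9, 1), round(us_total / 1e9, 1))
```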
Since the completion of that study, the Department of the Treasury of Puerto Rico has implemented new compliance initiatives that, according to Puerto Rico Treasury officials, have increased the number of individual income tax returns filed from 651,201 in 1992 to 720,000 in 1994 and increased their collections of all taxes by about $430 million in fiscal years 1994 and 1995. EITC is a major feature of the U.S. income tax system that would significantly affect estimates of federal tax revenues obtained from Puerto Rico if the federal income tax were extended to Puerto Rico. EITC is a refundable credit that is awarded to tax filers who meet certain earned income requirements and have qualified children residing in their households. A smaller credit is awarded to tax filers who have earned incomes but no qualifying children—the so-called "childless" credit. Qualification requirements for the credit are discussed in table II.2. Because the credit is targeted to tax filers with relatively low earned incomes, a tax filing population with a high proportion of low-income earners, such as Puerto Rico's, would be entitled to a substantial amount of EITC in the aggregate. Our EITC simulation methodology relied on available information contained in Puerto Rican tax returns for 1992 to estimate proxies for earned income, unearned income, AGI, and qualifying children, as defined under federal tax law. We restated all dollar values, such as income thresholds and maximum credits, contained in the EITC computation rules as 1992 dollars. We then applied the restated rules to the estimated proxies in order to compute an EITC for each Puerto Rican tax filer in 1992 who met the necessary conditions. A limitation of the simulation described above, apart from the necessity to approximate the value of certain tax elements, is the risk of significantly undercounting the potential EITC-qualified population of tax filers.
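The credit computation applied to each simulated filer follows the statutory EITC shape: a phase-in up to a maximum credit, a plateau, and a phase-out against the larger of earned income and AGI. A generic sketch, with placeholder parameters standing in for the restated 1992 values:

```python
def eitc(earned: float, agi: float, phase_in_rate: float,
         max_credit: float, phaseout_start: float,
         phaseout_rate: float) -> float:
    """Generic phase-in/plateau/phase-out EITC schedule.

    The parameters here are placeholders; the simulation applied the
    statutory thresholds and maximum credits restated in 1992 dollars.
    """
    # The credit phases in with earned income up to the maximum credit.
    credit = min(phase_in_rate * earned, max_credit)
    # It then phases out against the larger of earned income and AGI.
    reduction = phaseout_rate * max(max(earned, agi) - phaseout_start, 0.0)
    return max(credit - reduction, 0.0)
```

With, for example, a 20-percent phase-in, a $1,500 maximum, and a 15-percent phase-out beginning at $11,000, a filer with $5,000 of earnings would receive $1,000, while one with $40,000 would receive nothing.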
The 1992 Puerto Rican tax filing population may omit potential filers either because their incomes fell below the filing threshold for the Puerto Rico income tax or because they evaded their filing obligations in 1992. Because the number of these potential filers may be substantial at the lower earned income levels, and thus cause our simulated estimate of EITC to be understated, we examined Census data in an attempt to estimate the number of nonfilers that would file if EITC were available. The decennial 1990 Census of Puerto Rico contains information on the incomes and family composition of households during the sample period 1989. From the family relationships contained on the Census file, we constructed a data file of simulated tax filers, e.g., single, head-of-household, and married joint returns. Information about the age and incomes of nonfiling family members was used to estimate the number of EITC-qualified children. Income elements, although not complete for computing taxable incomes, seemed reasonably adequate for estimating approximations of AGI and earned income. From the simulated tax filing data set, we estimated the number of potential filers who would qualify for EITC by AGI classes. These counts of potential filers were compared to the count of simulated EITC filers obtained from the 1992 Puerto Rican tax return file. As expected, the number of potential filers in the Census data set in low-AGI groups, roughly those AGIs below tax filing thresholds, exceeded the number from the tax file data set. Many of the simulated filers from the Census data set, in these income groups, could be agricultural workers or domestic service workers who are exempt from tax withholding and thus need not file tax returns. However, in the higher AGI classes, the number of simulated EITC tax filers from the Census data set was lower than the number of simulated EITC filers from the 1992 tax return data set.
This result is not plausible because the number of potential EITC recipients in the full Puerto Rican population cannot be lower than the number of potential recipients in the tax filing population. We have more confidence in our simulations based on the tax return data than those based on the Census data. The translation of Puerto Rican filing units into federal filing units is relatively straightforward from the tax data, although there is considerable uncertainty as to how households in the Census database should be translated into filing units. In addition, income amounts reported on the Census survey may differ from the amounts that the same individuals would report for tax purposes. For these reasons, we concluded that we could not use the Census data to estimate the total number of nonfilers who might claim EITC if it became available to them. However, as explained in the letter, we did make an upper-bound estimate for the amount of EITC that might be claimed by taxpayers who had legitimate reasons for not filing tax returns in 1992. Potential noncompliance with the EITC provisions and behavioral responses to the availability of the credit could result in a larger aggregate amount of EITC being earned than we have estimated. A previous GAO report and studies by IRS have raised concerns regarding the vulnerability of EITC to noncompliance, including fraud. Also, the introduction of the earned income credit could induce some welfare recipients to forgo welfare and obtain employment in order to claim the tax credit. We did not adjust our estimate for these factors because there was insufficient information available to quantify their effect on EITC. Differences between U.S. and Puerto Rican tax rules relating to child and dependent care expenses made it impossible for us to estimate the amount of federal child and dependent care tax credit (DCTC) that each taxpayer in our Puerto Rico database would earn.
The federal credit, which is nonrefundable, is equal to a percentage of the expenses that a taxpayer pays for child or dependent care in order to be able to obtain gainful employment. The maximum credit for taxpayers with AGIs of $10,000 or less is $1,440 for two or more dependents, and $720 for one dependent. The maximum credit for taxpayers with AGIs over $28,000 is $960 for two or more dependents, and $480 for one dependent. Puerto Rico allows an itemized deduction for child-care expenses but not for expenses to care for other dependents. The maximum deduction is $800 for two or more children, and $400 for one child. A large majority of Puerto Rican taxpayers do not itemize, so we were unable to determine whether they had any expenses for child care. In the absence of complete information on the child and dependent care expenses of Puerto Rican taxpayers, we had to rely upon the experience of U.S. taxpayers as a basis for estimating the aggregate amount of federal DCTC that Puerto Rican taxpayers might claim. Using a sample of individual tax returns compiled by IRS for tax year 1991, the latest data available, we classified U.S. returns by nine AGI categories and by the number of children claimed as qualifying for the DCTC. We classified Puerto Rican returns according to estimated U.S. AGI and the number of children claimed as dependents. We computed an average credit amount per U.S. return for each class of return. We assumed that the average credit per Puerto Rican return in a given class would be the same as the average credit for the comparable class of U.S. returns. Thus, we multiplied the number of Puerto Rican returns in each class by the appropriate U.S. average credit to obtain the amount of credit earned by each class of Puerto Rican returns. We obtained our overall estimate of about $15 million by summing the estimates for the individual classes. We were unable to allocate the aggregate amount of DCTC across individual taxpayers.
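The class-average imputation can be expressed directly. The cell definitions and dollar figures below are invented for illustration; the actual computation used nine AGI categories and IRS averages from tax year 1991.

```python
# Sketch of the class-average imputation used for the DCTC estimate:
# returns are cross-classified by AGI category and number of qualifying
# children, and each Puerto Rican cell is assigned the average credit
# observed for the matching U.S. cell. All figures below are made up.

us_avg_credit = {            # (AGI class, n_children) -> average U.S. DCTC
    ("under_10k", 1): 310.0,
    ("under_10k", 2): 520.0,
    ("10k_28k", 1): 250.0,
}

pr_return_counts = {         # matching counts of Puerto Rican returns
    ("under_10k", 1): 1_000,
    ("under_10k", 2): 400,
    ("10k_28k", 1): 2_500,
}

aggregate_dctc = sum(n * us_avg_credit[cell]
                     for cell, n in pr_return_counts.items())
print(aggregate_dctc)   # 310*1000 + 520*400 + 250*2500 = 1,143,000
```

Summing cell by cell in this way yields only an aggregate figure, which is why the credit could not be allocated back to individual taxpayers.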
Consequently, we do not know precisely how many taxpayers might have had their federal tax liabilities completely offset by this credit. For this reason, we could not estimate precisely the number of Puerto Rican taxpayers who would have had positive federal tax liabilities. To determine the magnitude of the income tax reductions the Government of Puerto Rico would have to make to keep the combined income tax paid by residents of Puerto Rico at its current level if they became subject to the federal income tax, we followed a two-step process. First, we determined the total amount of 1992 Puerto Rican tax from the income tax return data provided by the Commonwealth. Then, we compared this amount to the total estimated potential U.S. tax liability as calculated in the first objective. To compare the combined federal and Puerto Rican income tax to the combined federal, state, and local income tax of the 50 states and the District of Columbia, we analyzed federal, state, and local individual income taxes in per-capita terms and as a percentage of personal income. In addition, to understand the results of our analysis of Puerto Rico's income tax, we analyzed the general revenue sources of Puerto Rico and the states. We used published data from the Advisory Commission on Intergovernmental Relations (ACIR), IRS' Statistics of Income Bulletin, and Puerto Rico's Informe Económico al Gobernador, an annual report to the Governor on the economy of the Commonwealth. Generally, ACIR based its calculations on state and local general revenue data collected by the Bureau of the Census. We followed the Census Bureau's Classification Manual definitions of government and finance data. The following tables summarize our comparison of United States and Puerto Rican individual income tax rules relevant to our simulation for each item on the U.S. individual income tax return.
These tables provide comments on issues related to the conversion of the Puerto Rican income tax return items to the U.S. individual income tax return items. Our conversion is based on 1992 Puerto Rican income tax rules because that was the latest year for which return information necessary for our simulation was available on computer tape. Since 1992, the Puerto Rican tax system has changed. In October 1994, Puerto Rico enacted tax reform legislation that, according to the government of Puerto Rico, was intended to achieve several objectives. These objectives include (1) establishing a more equitable tax structure, (2) encouraging equal and consistent application of tax laws, and (3) simplifying the tax structure. Generally, the 1994 tax reform lowered individual and corporate tax rates. Effective for tax years commencing after June 30, 1995, the act lowered all statutory individual income tax rates and increased the level of taxable income subject to the maximum tax rate, from $30,000 to $50,000 for married filing jointly. Tax rates were lowered from 1 to 7 percentage points, depending on the tax bracket and filing status of the taxpayer. We noted some of the significant provisions of the Puerto Rico Tax Reform Act of 1994 in table II.1. Table II.1: Conversion of Puerto Rican 1992 Individual Income Tax Return Items to U.S. 1995 Individual Income Tax Return Items Issues/comments on conversion to U.S. return U.S. reporting: The United States has four filing statuses: single, married filing jointly or surviving spouse, married filing separately, and head-of-household. Puerto Rico reporting: Puerto Rico has five filing statuses: married and living with spouse, head-of-household, married not living with spouse, single, and married filing separately. Conversion to the U.S. return: The United States does not have a married not living with spouse status.
Taxpayers filing under the married not living with spouse status would have to file under either the married filing jointly or the married filing separately status. We classified returns filed under this status as head-of-household if the Puerto Rican return reported a dependent child, since that status has more favorable tax rates. If the return did not report a dependent child, it was classified as single. U.S. reporting: The United States allows a deduction amount based on the number of exemptions claimed. Exemptions can be claimed for the taxpayer, the spouse, and the dependents. Puerto Rico reporting: Puerto Rico allows a deduction amount based on the number of personal exemptions and dependents claimed. However, Puerto Rico allows only one personal exemption for married taxpayers and does not allow a head-of-household taxpayer an exemption for the dependent that qualifies him or her as head-of-household. Conversion to the U.S. return: The number of exemptions was included in the U.S. return as reported on the Puerto Rican return except that married filing jointly taxpayers were considered two exemptions instead of one, and head-of-household taxpayers had an additional dependent added. U.S. reporting: This includes all compensation for personal services as an employee unless specifically excluded. Puerto Rico reporting: Includes all amounts paid to employees that constitute compensation. Conversion to the U.S. return: Wages, salaries, and tips on the Puerto Rican return were used as reported. U.S. reporting: All interest is taxable, except for interest on certain state and local bonds and certain other exceptions. Puerto Rico reporting: Income from federal, state, and local government bonds is exempt in Puerto Rico and is not reported. Also, the first $2,000 of interest income from Puerto Rican banking institutions is exempt.
However, the exempt amount is reported on the income tax return but is excluded from gross income. Conversion to the U.S. return: Only those interest earnings reported on the Puerto Rican income tax return were included in our simulation. Interest earnings included the exempt amount for interest in Puerto Rican banking institutions but not interest from federal, state, and local government bonds because it was not reported on the Puerto Rican return. The amount of dividend income was included in the U.S. return as reported on the Puerto Rican return. U.S. reporting: This item is an accounting entry in the federal income tax return used only by taxpayers who, during the tax year, received a refund, credit, or offset of state or local income taxes that they paid and deducted in any prior year. Puerto Rico reporting: There is no equivalent line item on the Puerto Rican tax return. Conversion to the U.S. return: This entry was not necessary for our simulation because no prior year deductions would have been made. The amount for alimony received was included in the U.S. return as reported on the Puerto Rican return. U.S. reporting: Sole proprietor income after related expenses is included on the U.S. individual income tax return with certain limits. However, U.S. passive losses generally can only be deducted against passive income. Related expenses include those that are ordinary and necessary, such as depreciation. The U.S. tax rules allow straight-line and some accelerated depreciation. The United States also allows an immediate write-off of business assets up to $17,500. This amount is reduced if the total cost of the property placed in service during the year exceeds $200,000. Puerto Rico reporting: Sole proprietor income after related expenses is also included on the Puerto Rican return. Puerto Rico also limits the extent to which business losses can offset salary income.
Deduction rules, such as depreciation, may differ. For example, in 1992, Puerto Rico allowed certain taxpayers to use "flexible depreciation." This depreciation method allows a depreciation deduction up to the full cost of the asset in the year it is first used. However, the deduction was not to exceed the net benefit of the business or commercial activity in which the property was used. This flexible depreciation method was repealed in the Tax Reform Act of 1994 for assets acquired after June 30, 1995. Conversion to the U.S. return: The amount reported on the Puerto Rican return was used as reported. U.S. reporting: Net capital gains are fully included in income with an alternative 28-percent tax rate for long-term gains net of long- and short-term losses. Capital losses are deductible to the extent of capital gains; up to a $3,000 loss is allowed against other income. Capital losses can be carried forward and deducted in succeeding years. Long-term capital gain or loss means gain or loss from the sale or exchange of a capital asset held for more than 1 year. Puerto Rico reporting: Gains are fully taxable; capital losses are limited to capital gains plus net income or $1,000, whichever is lower, with the excess losses carried forward for 5 years. Also, there is an alternative tax on net long-term capital gains, which is either the regular tax or a "special 20-percent tax on capital gains," whichever is more advantageous to the taxpayer. Long-term capital gain or loss means gain or loss from the sale or exchange of a capital asset held for more than 6 months. Puerto Rico also has sale or exchange of principal residence rules that are somewhat similar to those of the United States.
In general, if the Puerto Rican taxpayer buys another residence within 1 year before or 1 year after the sale of the old residence (18 months after sale is allowed if a new residence is constructed), the gain is not recognized to the extent the selling price does not exceed the cost of the new residence. A one-time exclusion of $50,000 is provided for taxpayers 60 years old or older at the time of the sale, if the taxpayer lived in the old residence for at least 3 years of the last 5 years prior to the sale. Conversion to the U.S. return: The amount of capital gains and losses was included in the U.S. return as reported on the Puerto Rican return. U.S. reporting: This line item is used for gains and losses reported on U.S. Form 4797. Generally, this form is used to report sales or exchanges and involuntary conversions from other than casualty or theft of property used in a trade or business; disposition of noncapital assets other than inventory or property held primarily for sale to customers; and recapture of IRC section 179 expense deductions for partners and S-corporation shareholders. Business real estate and any depreciable property are excluded from the definition of capital asset. However, if the business property qualifies as IRC section 1231 property, capital gain treatment may apply. Under IRC section 1231, if there is a net gain during the tax year from (1) sales of property used in a trade or business, (2) involuntary conversion of property used in a trade or business, or (3) sales of capital assets held for more than 1 year, the gain is treated as a long-term capital gain. A net loss is treated as an ordinary loss.
Puerto Rico reporting: Net gains on the involuntary conversion, or on the sale or disposition of property used in a trade or business, held for more than 6 months, are treated as "long-term capital gain." This long-term capital gain is reported together with other long-term capital gains and is taxed as explained in the capital gains section. Except for (1) the holding period of 6 months; (2) the inclusion of involuntary conversion from casualty or theft; and (3) the replacement period of 1 year for involuntary conversions, Puerto Rico's capital gains treatment of the property described in this paragraph is consistent with the U.S. tax treatment. Net gains or net losses on the involuntary conversion or on the sale or disposition of property used in a trade or business, held for less than 6 months, are not considered capital gains or losses. These gains or losses are reported as "ordinary income or loss." Conversion to the U.S. return: Other gains and losses were included in the U.S. return as reported on the Puerto Rican return. U.S. reporting: IRA distributions are taxed as ordinary income in the year received. Distributions are fully taxable unless nondeductible contributions have been made. In the United States, a penalty applies if the taxpayer is not 59 1/2 years or older. Puerto Rico reporting: Similar rules apply; nondeductible contributions are not permitted. In Puerto Rico, the penalty applies if the taxpayer is not 60 years or older, with certain exceptions. Puerto Rico has a penalty provision that is similar to that of the United States for early withdrawals. Conversion to the U.S. return: This line item was used as reported on the Puerto Rican return. U.S. reporting: The United States taxes each annuity payment as if composed pro rata of taxable income and recovery of cost, projected over the life expectancy of the annuitant.
An alternative method is provided for qualified plans; under this method, the total number of payments is determined based on the annuitant's age at the starting date. Puerto Rico reporting: In the case of government pensions, Puerto Rico excludes either $5,000 or $8,000 based on age. If the taxpayer paid part or all of the cost of the pension, he or she can recover that amount tax free. The excess of the amount received over 3 percent of the aggregate premiums paid is excluded from income until the amount excluded equals the aggregate premiums paid for the annuity. Taxpayers with government pensions are not required to submit Schedule H (Income from Annuities or Pensions) if their pension or annuity income is less than the exclusion amount. Since 1992, the $5,000 or $8,000 exclusion has been applied to both government and private sector pensions. Conversion to the U.S. return: This line item was included in the U.S. return as reported on the Puerto Rican return, although the cost recovery rules are different in the United States, and government pensions in the United States are taxed on the same basis as are all other pensions. U.S. reporting: The United States has complex passive loss rules, limiting the use of losses from passive activities to shelter income from other types of activities. Although a passive activity is defined as one involving the conduct of a trade or business in which the taxpayer does not materially participate, the passive loss rules treat rental activities as passive. Puerto Rico reporting: Passive activity losses may not be used to offset income from another activity. Also, excess losses may be carried forward indefinitely to offset any future income from the same activity. Partnerships that derive at least 70 percent of their gross income from Puerto Rican sources, with at least 70 percent of such income produced in a specific enterprise, can elect to be treated as special partnerships.
However, distributed losses from Special Partnerships can offset up to 50 percent of net income from any source. Puerto Rico's regular partnerships are treated like corporations in the United States. Special partnerships and corporations of individuals (similar to S corporations) are treated like United States partnerships (pass-through entities). Conversion to the U.S. return: These income items were included in the U.S. return as reported on the Puerto Rican return. U.S. reporting: Farm income is reported and taxed in the same way as income from any other business. However, there are inventory and expense deduction rules that recognize the unique issues related to operating a farm. For example, there are special rules for the involuntary conversion of livestock or crop disaster payments. Puerto Rico reporting: Ninety percent of net farm income is exempted from reporting. Puerto Rico also includes some income and expense recognition rules that are specific to farmers. Conversion to the U.S. return: Farm income was included as reported on the Puerto Rican return with the 90-percent exclusion added back to income. U.S. reporting: Unemployment compensation is included in gross income. Puerto Rico reporting: Unemployment compensation is not included in gross income and, therefore, not reported on the income tax return. According to data provided by the Department of the Treasury of Puerto Rico, unemployment compensation totaled $336.5 million in 1994. Conversion to the U.S. return: We were not able to simulate this income item because we did not know how the total unemployment compensation was distributed among Puerto Rican taxpayers. U.S. reporting: A portion of a taxpayer's Social Security benefits may be taxable. Puerto Rico reporting: Social Security payments are not included as income and, therefore, not reported on the income tax return. Conversion to the U.S.
return: We were not able to simulate this income item. U.S. reporting: A deduction of up to $2,000 per taxpayer is allowed for IRA contributions for employees who cannot participate in certain employer-sponsored pension plans. Taxpayers who are participants in employer-sponsored plans can deduct a limited amount of IRA contributions, depending on their income. Total contributions of up to $2,250 can be made per taxpayer each year to the taxpayer’s IRA and a spousal IRA. Puerto Rico reporting: A $2,000 deduction per taxpayer is allowed, or $4,000 for married taxpayers. Limitations apply when the individual participates in cash or deferred accounts. In 1994, the IRA deduction was increased to $2,500 per taxpayer or $5,000 for married taxpayers. Conversion to the U.S. return: The IRA deduction amount was included in the U.S. return as reported on the Puerto Rican return. U.S. reporting: Certain moving expenses are deductible as an adjustment to gross income if the move is related to starting work in a new location. Puerto Rico reporting: Moving expenses are deductible as ordinary and necessary expenses within certain limitations. Conversion to the U.S. return: Because moving expenses are reported with other ordinary and necessary expenses, they were included in our simulation of miscellaneous deductions. U.S. reporting: One-half of self-employment tax is deductible as an adjustment to income. Dividends typically are not included as earnings for self-employment income. However, a taxpayer’s distributed share of ordinary income from a trade or business carried on by a partnership is included in self-employment income. Puerto Rico reporting: Residents of Puerto Rico are subject to federal self-employment tax under IRC section 1402(b). Self-employed residents of Puerto Rico are to file a U.S. Internal Revenue Form 1040PR to compute self-employment tax. 
This return follows the same employment tax rules applicable to residents of the United States. Conversion to the U.S. return: The tax was computed by multiplying the self-employment tax rate (15.3 percent) times the amount of self-employment income. The Puerto Rican individual income tax return includes corporation dividends and distributions from regular partnerships on the same line of the return. Accordingly, we could not determine the regular partnership distribution amount that would be included as U.S. self-employment income. So that we would not overstate self-employed income and the related tax, we excluded any items from the Puerto Rican return that would not be entirely included as U.S. self-employment income. U.S. reporting: Up to 30 percent of health insurance premiums for self-employed persons are deductible as an adjustment to gross income. Puerto Rico reporting: There is no similar provision in the Puerto Rican return. Health insurance premiums for self-employed persons are deductible as an itemized deduction. Conversion to the U.S. return: Self-employed health insurance deduction was not simulated because self-employed insurance premiums and other business adjustments are offset against self-employment income in the Puerto Rican return, and our Puerto Rican individual income tax data file showed only the net self-employment income amount. U.S. reporting: Keogh retirement or SEP payments are deductible as an adjustment to gross income. Puerto Rico reporting: Keogh retirement or SEP payments are deductible as an adjustment to self-employment income. Conversion to the U.S. return: A Keogh retirement or SEP deduction was not simulated because Keogh retirement or SEP payments and other business adjustments are offset against self-employment income in the Puerto Rican return, and our Puerto Rican individual income tax data file showed only the net self-employment income amount. U.S. reporting: Penalties paid on early withdrawal of savings are deductible. 
Puerto Rico reporting: There is no similar line item in the Puerto Rican tax return. Conversion to the U.S. return: We were not able to simulate this income adjustment. U.S. reporting: Alimony paid is deductible as an adjustment to income. Puerto Rico reporting: Alimony paid is deductible as an adjustment to income. Conversion to the U.S. return: Alimony paid was included in the U.S. return as reported on the Puerto Rican tax return. U.S. reporting: Unreimbursed medical and dental expenses are deductible as itemized deductions to the extent they exceed 7.5 percent of AGI. Puerto Rico reporting: Under Puerto Rico’s rules, generally the same kinds of medical and dental expenses (except drug expenses) are deductible, but only 50 percent of total medical expenses paid is deductible in the year paid, and only to the extent that amount exceeds 3 percent of AGI. Conversion to the U.S. return: The gross medical and dental expense amount and any orthopedic equipment expenses (see miscellaneous deductions) were included in the U.S. return as reported on the Puerto Rican tax return and adjusted for U.S. income limitation rules. U.S. reporting: Under U.S. tax rules, certain state, local, and foreign government taxes, such as real property and income taxes, are deductible as an itemized deduction. Personal property taxes are deductible only if paid or accrued to state or local governments. Puerto Rico reporting: Puerto Rico allows as an itemized deduction property taxes paid on the taxpayer’s principal residence. Puerto Rico has no personal property or local individual income taxes (below the Commonwealth level). Conversion to the U.S. return: We used the amount of property taxes as reported on the Puerto Rican tax return. No amount was simulated for personal property or local income taxes because they do not exist. 
We used the actual Puerto Rican tax liability after credits except for the foreign tax credit, which is largely a credit for U.S. income taxes (see foreign tax credit). U.S. reporting: The U.S. itemized deduction includes home mortgage interest and points, home equity loans, and refinanced mortgages for a qualified residence. The deduction is limited to principal amounts of $1 million for mortgages and $100,000 for home equity loans. These limits apply to mortgage or home equity loans taken out after October 1987. Additional limits apply if the mortgage exceeds the fair market value of the residence. Puerto Rico reporting: The Puerto Rican deduction includes many of the U.S. provisions, except that there are no limitation amounts and no deduction is allowed if the mortgage exceeds the fair market value of the residence at the time the debt was incurred. Conversion to the U.S. return: The Puerto Rican tax return item was used. The Puerto Rican deduction could be limited under U.S. rules. However, we did not know whether the principal amount exceeded the U.S. limits because that information is not reported on the Puerto Rican income tax return. U.S. reporting: U.S. tax rules generally allow as an itemized deduction total contributions to governmental entities, charitable organizations, cemetery companies, war veterans groups, and certain domestic fraternal societies, which are usually limited to 50 percent of AGI. Certain contributions are also limited to 30 percent or 20 percent of AGI, depending on the type of contribution. A carryover is allowed for any excess up to 5 years. Puerto Rico reporting: The Puerto Rican allowable deduction is the total amount of contributions in excess of 3 percent of AGI. 
The actual deduction taken must not exceed 15 percent of AGI, except an additional deduction of up to 15 percent of AGI is allowed for contributions to accredited university-level educational institutions established in Puerto Rico. Under certain circumstances an unlimited deduction for charitable contributions is allowed. After 1994, a carryover for excess charitable contributions up to 5 years was allowed. Conversion to the U.S. return: The Puerto Rican tax return line item was used. However, because of the differences stated above, the U.S. deduction may be understated. U.S. reporting: U.S. rules allow an itemized deduction for losses from theft, vandalism, fire, storm, or similar causes; from car, boat, and other accidents; and for money lost due to insolvency or bankruptcy of financial institutions. Each separate casualty or theft loss must be $100 or more. Only losses that total more than 10 percent of AGI are deductible. Puerto Rico reporting: Puerto Rico limits losses of personal property to $5,000 for the year in which the loss was incurred. The carryover of excess losses is allowed for 2 years. Puerto Rico has no limit for casualty loss on a principal residence. Conversion to the U.S. return: The amount reported on the Puerto Rican return was included in the U.S. return as reported, but the U.S. limits were applied. U.S. reporting: The United States allows itemized deductions for unreimbursed employee expenses such as job travel, union dues, and job education. Other expenses are also deductible, such as those for investing, preparing tax returns, and renting a safe deposit box. Only amounts in excess of 2 percent of AGI are deductible. All itemized deductions are reduced by 3 percent of the amount that AGI exceeds a threshold amount. Puerto Rico reporting: Puerto Rican job expenses are deductible from AGI as “ordinary and necessary expenses” instead of as an itemized deduction. Taxpayers can deduct ordinary and necessary expenses whether or not they itemize. 
Generally, the expenses deductible are the same as in the United States. The amount deductible is limited to $1,500, or 3 percent of gross income from salaries, whichever is less. In 1994, the deduction of meals and entertainment expenses was reduced from 80 percent to 50 percent of the amount incurred. Conversion to the U.S. return: The amount reported on the Puerto Rican return was included in the U.S. return as a miscellaneous deduction but limited by the U.S. rules. In 1992, the rules on the deduction of meals and entertainment expenses were more restrictive in the United States. U.S. reporting: Several expenses are deductible as miscellaneous itemized deductions. However, they are not subject to the 2-percent limit. Examples of deductible items include the following: Amortizable premium on taxable bonds: Bond premiums are deductible for a bond purchased before October 23, 1986. Gambling losses to the extent of gambling winnings: The taxpayer cannot offset the losses against the winnings. He/she must report the full amount of the winnings and claim the losses as an itemized deduction. Impairment-related work expense of persons with disabilities: These are allowable business expenses incurred for the taxpayer to be able to work. Puerto Rico reporting: Bond premium amortization is allowed as an offset against interest income, and the net amount is reported as miscellaneous income; however, no deduction is allowed for interest-exempt bonds. Gambling losses are deducted from gambling winnings, and net gambling winnings are reported as miscellaneous income. Net gambling losses are not deductible. An itemized deduction is allowed for “orthopedic equipment expenses for the handicapped.” However, this deduction does not have to be directly related to the employment of the taxpayer. Conversion to the U.S. 
return: Since bond premium amortization is offset against interest income and gambling losses are offset against gambling winnings, we were not able to simulate a miscellaneous deduction for these items. Also, we were not able to simulate a miscellaneous deduction for orthopedic equipment expenses because under Puerto Rico’s tax law the orthopedic equipment expense deduction is not required to be work related to be deductible. However, orthopedic equipment expenses were included as medical and dental expenses (see medical and dental expenses). Child and dependent care tax credit (DCTC) U.S. reporting: Under U.S. tax rules, the DCTC allows a nonrefundable tax credit for a portion of the child and dependent care expenses incurred so that the taxpayer can obtain gainful employment. A child must be under the age of 13 to qualify. The credit is computed on the basis of maximum allowable related expenses of $2,400 for one child, or $4,800 for two or more children. Then, depending on the AGI of the taxpayer, a credit is computed on a sliding scale from 20 percent to 30 percent of the allowable expenses. Puerto Rico reporting: Puerto Rico allows a child-care, but not dependent-care, itemized deduction of $400 for one child and $800 for two children. The expenses must be for work or a profitable activity. The child must not be over 14 years of age to qualify. The Puerto Rican return lists the expenses, but only up to the limitation amount. Conversion to the U.S. return: Because Puerto Rican taxpayers who do not itemize cannot claim the child-care deduction and because the expense limits are low in comparison to the United States, simulating the credit on the basis of the information reported on the Puerto Rican return would significantly understate the potential use of the credit. 
Because the DCTC could be an important feature of the federal income tax system extended to Puerto Rico, we imputed the potential value of the credit on the basis of available 1991 IRS Statistics of Income (SOI) data. From the SOI data, we identified the dollar value of the credit claimed by all taxpayers, categorized by the number of dependent children reported and by AGI class. We then calculated the average credit claimed for each number of dependents in each AGI class. This average credit was given to each Puerto Rican taxpayer with the same number of dependents in the same AGI group. U.S. reporting: U.S. rules allow the credit for taxpayers who are 65 or older or who have a permanent and total disability. The amount of the credit depends on the taxpayer’s filing status, age, and level of pension, disability, or annuity income. Puerto Rico reporting: There is no similar credit in Puerto Rico. Conversion to the U.S. return: We were not able to calculate the credit because the necessary data were not available. U.S. reporting: See table II.2. Puerto Rico reporting: Puerto Rico does not have a similar credit. Conversion to the U.S. return: EITC was computed using information reported on the Puerto Rican tax return. U.S. reporting: The United States allows a credit or a deduction for any income, war profits, and excess profits taxes paid or accrued during the taxable year to any foreign country or to any possession of the United States. Puerto Rico reporting: Puerto Rico allows a credit for the amount of income, war profits, and excess profits taxes imposed by the United States, possessions of the United States, and foreign countries. Conversion to the U.S. 
return: Officials from the Department of the Treasury of Puerto Rico told us that almost all of the foreign tax credits claimed by Puerto Rican residents (about $4.4 million) on Puerto Rican individual income tax returns were from income taxes paid to the United States. (Puerto Rican residents with income from sources outside Puerto Rico are subject to federal income taxes.) Because these amounts would be the equivalent of federal tax paid, they would not be deductible on a federal income tax return. U.S. reporting: The United States has several targeted credits, such as the general business credit, jobs credit, and alcohol fuels credit. Puerto Rico reporting: Data were not available from the Puerto Rican return to calculate any of these credits. Conversion to the U.S. return: These credits were not included in our simulation. U.S. reporting: AMT was developed to ensure that high-income taxpayers who make extensive use of certain tax deductions and exemptions pay a minimum amount of tax. AMT is computed by adding back certain tax preference items, such as certain itemized deductions, investment interest, depletion, and certain tax-exempt interest, to taxable income. Certain tax preference items may have to be recomputed under special AMT rules before they are added back. After deducting an exemption amount, a tentative AMT amount is computed by multiplying the remaining income by either a 26-percent or 28-percent tax rate. If the tentative AMT is greater than the regular tax, the difference is the amount of AMT actually owed, and it is added to the regular income tax. Puerto Rico reporting: Puerto Rico has an alternate basic tax that will be assessed if it is greater than the regular tax. The tax is computed by subtracting ordinary and necessary expenses and capital gains from AGI. Then an additional tax of 10 percent to 20 percent is calculated on alternative AGIs of over $75,000. 
The regular tax or the alternate basic tax is paid, whichever is larger. Conversion to the U.S. return: Computing the U.S. AMT requires applying complex rules to several income and deduction items, which in turn requires a substantial amount of data. Some of the data needed to apply these rules, such as certain types of tax-exempt interest income, are not available on the Puerto Rican return. Accordingly, AMT was not computed for our simulation. EITC is a refundable tax credit available to low-income working taxpayers. The credit was established in 1975 to achieve two long-term objectives: (1) to offset the impact of Social Security taxes on low-income workers with families and (2) to encourage low-income individuals with families to seek employment rather than welfare. EITC amounts generally are determined according to the amount of the taxpayers’ earned income and whether they have qualifying children who meet certain age, relationship, and residency tests, which are described in table II.2. The credit gradually phases in, plateaus at a maximum amount, and then phases out until it reaches zero. If the taxpayers’ earned income or AGI exceeds the maximum qualifying income level, they are not eligible for the credit. When the taxpayers’ AGI falls in the credit’s phase-out range, they receive the lesser amount resulting from using either their earned income or AGI in calculating the credit. When changes made in the 1993 Omnibus Budget Reconciliation Act are fully in effect in tax year 1996, taxpayers with two children whose earned income ranges from $1 to $8,890 are to receive $0.40 for each dollar earned. Taxpayers with two children whose incomes range from $8,890 to $11,610 are to receive the maximum credit amount of $3,556. The credit will gradually phase out, declining at a rate of about $0.21 for each additional dollar of income, for taxpayers with two children and incomes ranging from $11,610 to $28,495. 
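The 1996 two-child schedule just described can be sketched as a simple piecewise function. This is an illustrative sketch, not IRS code: the function and parameter names are ours, the phase-out rate is derived from the endpoints given in the text (3,556 / (28,495 − 11,610) ≈ 0.2106, i.e., about $0.21 per dollar), and the earned-income versus AGI interaction in the phase-out range is ignored.

```python
def eitc_two_children(earned_income: float) -> float:
    """Approximate 1996 EITC for a taxpayer with two qualifying children.

    Parameters from the text: a 40-cent-per-dollar phase-in up to $8,890,
    a $3,556 plateau from $8,890 to $11,610, and a phase-out ending at
    $28,495. The phase-out rate is implied by those endpoints.
    """
    PHASE_IN_RATE = 0.40
    MAX_CREDIT = 3_556
    PHASE_OUT_START = 11_610
    PHASE_OUT_END = 28_495
    phase_out_rate = MAX_CREDIT / (PHASE_OUT_END - PHASE_OUT_START)  # ~0.2106

    if earned_income <= 0:
        return 0.0
    # Phase-in, capped at the plateau maximum.
    credit = min(PHASE_IN_RATE * earned_income, MAX_CREDIT)
    # Phase-out above $11,610.
    if earned_income > PHASE_OUT_START:
        credit -= phase_out_rate * (earned_income - PHASE_OUT_START)
    return max(credit, 0.0)

# Phase-in range: $5,000 of earnings yields a $2,000 credit.
print(round(eitc_two_children(5_000)))   # 2000
# Plateau: $10,000 of earnings yields the maximum $3,556.
print(round(eitc_two_children(10_000)))  # 3556
# Fully phased out at $28,495 and above.
print(round(eitc_two_children(30_000)))  # 0
```

The three print statements trace the phase-in, plateau, and phase-out regions described in the text.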
Taxpayers with one qualifying child or no children receive EITC at a lower rate, with different plateau amounts and phase-out rates. Beginning in 1996, taxpayers will be disqualified from EITC if their unearned income exceeds $2,350. Unearned income is defined as the combined amount of taxable and tax-exempt interest income, dividends, and the net income from rents and royalties not received from a trade or business. The following table summarizes the principal EITC qualification rules and details the extent to which the Puerto Rican tax return provides data for determining eligibility for the credit. See qualifying child definition below. Puerto Rico reporting: The Puerto Rican tax return reports residency in Puerto Rico at the end of the tax year, not residency for more than one-half of the tax year. Though age information is reported on the Puerto Rican return, this information was not made available for our simulation. The return does not ask whether the taxpayer is taken as a dependent on another taxpayer’s return. EITC estimate: We assumed that the taxpayer had been a resident for more than one-half of the tax year, was not taken as a dependent on another taxpayer’s return, and met the age requirements. (3) Must not have disqualified income of more than $2,350. Disqualified income includes interest, dividends, and net income from rents and royalties not received from a trade or business. Puerto Rico reporting: Specific income items can be identified on the Puerto Rican tax return. However, the Puerto Rican return includes royalties as part of miscellaneous income. EITC estimate: Since royalties and net rental income derived from activities other than a trade or business cannot be specifically identified, they were not included in our simulation of the U.S. tax return. (1) Relationship test: must be son, daughter, adopted child, or descendant of the son, daughter, or adopted child; stepson or stepdaughter; or foster child. 
A married child is not eligible unless he or she is a dependent. Puerto Rico reporting: The Puerto Rican return identifies some relationships between the taxpayer and a dependent. The identified relationships include child, parent, in-laws, and “closely related.” However, the return does not identify the specific relationships needed to comply with the EITC requirements. Qualifying child definition: We included as qualifying children only those dependents identified on the Puerto Rican return as “children.” We were unable to include other eligible dependents in our simulation, such as “stepson or stepdaughter,” because they are identified on the Puerto Rican return as “closely related,” which includes other ineligible dependents. (2) Age test: must be under age 19, a full-time student under age 24, or permanently and totally disabled. Puerto Rico reporting: Age and dependent status information is on the Puerto Rican tax return. Disabled persons also qualify as dependents. Qualifying child definition: Dependents listed on the Puerto Rican return as a nonuniversity student or university student and who met the age and relationship tests were counted as qualifying children. In the special case of head-of-household filers, when one dependent is selected as qualifying the filer for head-of-household status and included in a special dependent section, we assumed this dependent to have met the qualifying child requirements (age and relationship). (3) Residence test: child’s principal place of residence is with the taxpayer in the United States for more than one-half year (the entire year, in the case of an eligible foster child). There are special rules for members of the U.S. armed services. Puerto Rico reporting: Under Puerto Rican tax rules, a child’s residence does not have to be with the taxpayer to qualify as a dependent. 
Also, the Puerto Rican return requests information on residency in Puerto Rico at the end of the tax year. It does not request information about whether more than one-half of the year was spent in the United States or Puerto Rico. Qualifying child definition: We assumed that the child had been a resident for more than one-half of the tax year. Puerto Rico reporting: Many earned income items can be identified on the Puerto Rican tax returns, including wages, salaries, tips, and self-employment income. These self-employment income items may vary from those which would have been reported on U.S. returns because of differences in tax accounting rules, such as those related to depreciation of assets. Earned income definition: We used data elements for wage, salaries, and tips as shown on the Puerto Rican return. Items included in our approximation to U.S. self-employment income were (1) profits or losses from special partnerships, (2) profits or losses from commissions, (3) profits or losses from agriculture, (4) profits or losses from professions, and (5) profits or losses from rental businesses. Puerto Rican tax rules for corporations have many similarities to and some differences from U.S. tax rules. Both the United States and Puerto Rico require corporations to report their worldwide income. Also, both Puerto Rico and the United States allow the deduction of “ordinary and necessary” business expenses and have similar rules on accounting for inventories and cost of goods sold. Prior to June 30, 1995, Puerto Rico allowed, under certain circumstances, businesses to expense up to 100 percent of the basis of business assets in the year of acquisition and thereafter. This provision was repealed in Puerto Rico’s Tax Reform Act of 1994 for assets acquired after June 30, 1995. Puerto Rico has generally higher marginal corporate tax rates than does the United States. 
In 1995, corporate taxes in the United States started at 15 percent for incomes of up to $50,000, with a maximum corporate tax rate of 35 percent. In 1992, Puerto Rico’s regular corporation tax rate started at 22 percent. Also, a sliding scale surtax was added to the regular tax, starting at a marginal rate of 6 percent for incomes up to $75,000 with an allowance of a special credit. The maximum surtax marginal rate was 20 percent for incomes over $275,000. Puerto Rico also has an alternative corporate capital gains tax rate of 25 percent and an alternative dividend rate of 20 percent. The Puerto Rico Tax Reform Act of 1994 lowered the regular corporate tax rate to 20 percent, the maximum surtax rate to 19 percent, and the alternative dividend rate to 10 percent. Both the United States and Puerto Rico offer corporations special tax incentives to meet a variety of economic goals. In the United States these incentives can be either additional deductions from income or tax credits. Some examples of these incentives include accelerated depreciation of buildings, credits for low-income housing, expensing of research and experimentation expenditures, or the possessions tax credit. Puerto Rico’s tax code also includes various deductions and tax credits as incentives. However, since 1947, Puerto Rico has offered a tax incentive program to encourage the establishment and growth of manufacturing and certain other businesses. Most recently, the Puerto Rico Industrial Incentive Act of 1987 provided several tax reductions to industrial units that, for example, manufacture products that had not previously been made in Puerto Rico, produce products designated for export, develop specific types of real estate, or produce energy from recycling or renewable sources. 
In general, these businesses are exempted from taxation on 90 percent of the net income derived from these sources; the same percentage applies to eligible interest and dividends; currency exchange; and patents, royalties, and other rights. The act also includes a package of municipal, personal property, and real property tax reductions. The rate reductions are not permanent. The duration of the rate reductions depends on the location of the exempt business and varies from 10 to 25 years. However, the exempted businesses are allowed the option of selecting the specific years they will be exempt from taxation under the Industrial Development Act. According to statistics provided by the Commonwealth, in 1993, 1,111 corporations were qualified under the Industrial Tax Exemption laws, with about $10.7 billion of exempted income. One U.S. tax policy significantly affecting Puerto Rico is the possessions tax credit defined in section 936 of the federal Internal Revenue Code (IRC). Under this section of the IRC, a portion of income derived from operations of qualified subsidiaries of U.S. corporations in U.S. possessions is effectively exempted from U.S. income tax. Firms are qualified for the credit if, over the 3-year period preceding the close of a taxable year, 80 percent or more of their income was derived from sources within a possession, and 75 percent or more of their income was derived from the active conduct of a trade or business within a possession. The 1993 Budget Reconciliation Act limited the possessions tax credit. For tax years beginning after 1993, taxpayers are to calculate the credit as under prior law, but the credit is capped under one of two alternative options selected by the taxpayer: The “percentage limitation” option provides for a decreasing credit equal to a decreasing percentage of the amount computed under prior law. The percentages are set by law at 60 percent for 1994, 55 percent for 1995, 50 percent for 1996, and 45 percent for 1997. 
The percentage will be 40 percent for 1998 and thereafter. The “economic activity limitation” option provides a cap on the credit equal to the sum of three factors: The first factor is 60 percent of the firm’s wages plus allocable employee fringe benefits paid in the possession, with wages limited for each employee to 85 percent of the maximum wage base under the old-age, survivors, and disability insurance portion of Social Security. The second factor is a specific percentage of the firm’s depreciation deductions for qualified tangible property for each taxable year. The type of property determines the applicable percentage, with factors ranging from 15 percent for property with a relatively short recovery period to 65 percent for assets with a long recovery period. The third factor, which applies only to firms that do not use the 50-percent profit-split method of income allocation, is a portion of the income taxes paid to the possession government. Included taxes, however, cannot exceed a 9-percent effective tax rate. U.S. and Puerto Rican tax laws for partnerships have several significant differences. With a few exceptions, U.S. partnerships are not taxable entities. Distributions of partnership profits are included on the partner’s individual income tax return and are taxed at personal income tax rates. In contrast, Puerto Rico taxes regular partnerships on their net income at corporate tax rates and also requires partners to include distributed partnership profits as taxable income on their individual income tax returns. “Special partnerships” in Puerto Rico are not taxed at the entity level. Instead, as is the case with U.S. partnerships, partners include on their individual income tax returns their distributable shares of partnership net income. To qualify as a special partnership, 70 percent of the partnership’s gross income must come from Puerto Rican sources. 
Further, not less than 70 percent of such income must be generated from one of several activities, including construction, land development, or manufacturing when it generates substantial employment. The following tables compare the actual federal, state, local, and combined individual income taxes of the 50 states, the District of Columbia, and Puerto Rico. The income tax is measured in per capita terms (table IV.1) and as a percentage of total personal income (table IV.2). In addition, table IV.3 shows the distribution of general revenue sources for Puerto Rico, the 50 states, and the District of Columbia. [Tables IV.1 through IV.3 are not reproduced here; N.T. = No Tax.] Daniel E. Coates, Senior Economist 
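Returning to the possessions tax credit described earlier, the two statutory caps can be sketched as follows. This is an illustrative simplification, not the statutory computation: the function and parameter names are ours, the per-employee wage limit (85 percent of the OASDI wage base) is not modeled, and the depreciation factor is passed in as a precomputed dollar amount rather than derived from recovery periods.

```python
# Percentage limitation: the prior-law credit is scaled by a statutory
# percentage that declines from 60 percent (1994) to 40 percent (1998+).
PERCENTAGE_LIMITS = {1994: 0.60, 1995: 0.55, 1996: 0.50, 1997: 0.45}

def percentage_limitation(prior_law_credit: float, year: int) -> float:
    pct = PERCENTAGE_LIMITS.get(year, 0.40)  # 40 percent for 1998 and later
    return pct * prior_law_credit

# Economic activity limitation: the cap is the sum of three factors --
# 60 percent of possession wages plus allocable fringe benefits, a
# depreciation-based factor (15 to 65 percent by recovery period, here
# supplied as a precomputed amount), and, for firms not using the
# 50-percent profit-split method, a portion of possession income taxes
# (capped at a 9-percent effective rate, not checked here).
def economic_activity_limitation(wages_and_fringes: float,
                                 depreciation_factor_amount: float,
                                 possession_taxes: float = 0.0) -> float:
    return 0.60 * wages_and_fringes + depreciation_factor_amount + possession_taxes

def capped_credit(prior_law_credit: float, year: int,
                  use_percentage_option: bool, **activity) -> float:
    """Credit after the cap chosen by the taxpayer."""
    if use_percentage_option:
        return min(prior_law_credit, percentage_limitation(prior_law_credit, year))
    return min(prior_law_credit, economic_activity_limitation(**activity))

# Example: a $1 million prior-law credit in 1996 under the percentage option.
print(round(capped_credit(1_000_000, 1996, use_percentage_option=True)))  # 500000
```

The taxpayer would presumably choose whichever option yields the larger allowed credit in a given year.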
Pursuant to a congressional request, GAO provided information on the potential effects of extending the Internal Revenue Code (IRC) to residents of Puerto Rico. GAO found that if IRC tax rules are applied to residents of Puerto Rico: (1) the residents would owe around $623 million in federal income tax before taking into account the earned income tax credit (EITC); (2) the aggregate amount of EITC would total $574 million; (3) 59 percent of the population filing individual income tax returns would earn some EITC; (4) 41 percent of the households filing income tax returns would have positive federal income tax liabilities, 53 percent would receive net transfers from the federal government, and 6 percent would have no federal tax liability; (5) more Puerto Rican residents and married couples would file federal tax returns if they qualified for EITC; (6) the average EITC earned by eligible taxpayers would be $1,494; (7) the Puerto Rican government would have to reduce its own individual income tax revenue by 5 percent to keep the aggregate amount of income tax levied on its residents constant; (8) the taxes paid by certain classes of Puerto Ricans would change drastically; (9) the per capita combined individual income tax in Puerto Rico would increase by 5.5 percent; and (10) tax expenditures would total $2.8 billion in 1996 and $3.4 billion in 2000.
According to USDA, the National School Lunch Program and the National School Breakfast Program share the goals of improving children’s nutrition, increasing lower-income children’s access to nutritious meals, and supporting the agricultural economy. USDA’s commodity program contracts for the purchase of food for these programs with manufacturers that it selects through a competitive bidding process. At the state level, state education departments typically administer the meals programs and forward the commodity selections of individual schools to USDA’s commodity program, which purchases and distributes the food selected by schools. In 2009, schools most commonly ordered chicken, mozzarella cheese, potatoes, and ground beef items purchased by the commodity program, in addition to fresh produce purchased for the commodity program by DOD in conjunction with DOD’s large-scale efforts to supply fresh produce to its troops. Overall, USDA provides about 15 to 20 percent of the food served in school meals. Schools purchase the remainder independently using their own procurement practices, either purchasing foods directly from manufacturers or distributors or contracting with food service management companies that procure the food for them. Three agencies within USDA are primarily responsible for the planning, purchase, allocation, and distribution of commodities to states and school districts: the Food and Nutrition Service, the Agricultural Marketing Service, and the Farm Service Agency (referred to collectively in this report as USDA’s commodity program). In addition to administering the National School Lunch Program and the National School Breakfast Program, the Food and Nutrition Service has overall authority to administer USDA’s commodity program and coordinate all commodity orders submitted by states. 
The Agricultural Marketing Service purchases meats, poultry, seafood, fruits, and vegetables, while the Farm Service Agency purchases dairy products, grains, peanut products, and other items. Virtually all food for sale in the United States must comply with federal food safety laws and regulations. Federal food safety efforts include preventing or reducing contamination by bacterial pathogens (such as E. coli O157:H7, a toxin-producing strain of the intestinal bacterium E. coli; Salmonella; and Campylobacter) and monitoring levels of other bacteria, such as generic E. coli and fecal coliforms, which indicate the extent to which food was produced under sanitary conditions. USDA, through its Food Safety and Inspection Service (referred to throughout this report as USDA’s meat and poultry regulatory program), is responsible for ensuring the safety of meat, poultry, and processed egg products, and FDA is responsible for ensuring the safety of virtually all other food products, including grains, nuts, and produce. GAO has reported that federal oversight of food safety remains fragmented in several areas and that this fragmentation has caused inconsistent oversight, ineffective coordination, and inefficient use of resources. Existing statutes give these agencies different regulatory and enforcement authorities. For example, food products under USDA’s jurisdiction must generally be inspected and approved as meeting federal standards before being sold to the public. Under current law, thousands of meat and poultry inspectors are to maintain continuous inspection at slaughter facilities and examine all slaughtered meat and poultry carcasses. They also visit other meat- and poultry-processing facilities at least once each operating day. FDA is responsible for ensuring that all foods it regulates are safe, wholesome, and properly labeled. 
To carry out its responsibilities, FDA has authority to, among other things, conduct examinations and investigations and inspect food facilities. But unlike foods regulated by USDA, food products under FDA’s jurisdiction may be marketed without FDA’s prior approval. For fresh cut fruits and vegetables, FDA has issued guidance, which food manufacturers may voluntarily use to minimize microbial contamination. FDA has also established regulations that serve as the minimum sanitary and processing requirements and may take enforcement actions against firms that do not comply with these requirements. Under the FDA Food Safety Modernization Act, the agency is required to promulgate regulations for produce safety that would establish science-based minimum standards for the safe production and harvesting of certain raw fruits and vegetables for which FDA determines such standards could minimize the risk of serious adverse health consequences or death. While food may be contaminated by many different bacteria, viruses, parasites, toxins, and chemicals, this report focuses on disease-causing, or pathogenic, bacteria. Contamination may take place during any of the many steps in growing, processing, storing, and preparing foods. Some potentially life-threatening pathogens live in soil, water, or the intestinal tracts of healthy birds, domestic animals, and wildlife. As a result, produce may become contaminated if irrigated with tainted water, and the carcasses of livestock and poultry may become contaminated during slaughter if they come into contact with small amounts of intestinal contents. Foods that mingle the products of many individual animals— such as bulk raw milk, pooled raw eggs, or raw ground beef—are particularly susceptible, because a pathogen from any one of the animals may contaminate the entire batch. A single hamburger, for example, may contain meat from hundreds of animals. 
Pathogens can also be introduced later in the process—such as after cooking, but before packaging—or by unsanitary conditions—including contact with infected food handlers or contact with contaminated equipment or surfaces. Still, pathogens are generally destroyed when foods are properly cooked. In addition, the presence of pathogens can be greatly reduced by subjecting food to ionizing radiation, known as food irradiation. On the basis of extensive scientific studies and the opinions of experts, we reported in 2000 that the benefits of food irradiation outweigh the risks. According to the Centers for Disease Control and Prevention (CDC), foodborne disease is a major cause of illness and death in the United States. CDC routinely gathers information from local and state health departments and laboratories and reports information about a range of foodborne illnesses and the foods with which they are associated. In 2011, CDC estimated that approximately 48 million people become sick, 128,000 are hospitalized, and 3,000 die each year from foodborne diseases. CDC attributed about 90 percent of the illnesses, hospitalizations, and deaths with a known cause to eight pathogens, including four bacteria—Salmonella, Campylobacter, E. coli O157:H7, and Listeria monocytogenes—that are included in USDA’s regulatory oversight of meat and poultry and in the purchasing specifications of USDA’s commodity program (see table 1). The four other pathogens are norovirus, Clostridium perfringens, and Staphylococcus aureus—which are most often spread by improper food handling or contamination by infected food handlers—and Toxoplasma gondii, a parasite commonly found in people and the environment that typically does not result in illness. The commodity program requires testing for Staphylococcus aureus as an indicator of poor sanitary handling or preparation conditions in raw ground beef, diced cooked chicken, and baby carrots. 
Information reported to CDC shows hundreds of instances of foodborne outbreaks affecting children in schools during a recent 10-year period. An outbreak occurs when two or more similar illnesses result from the consumption of a common food. According to CDC documents, many clusters of illnesses are not investigated or reported to CDC because of, among other reasons, competing priorities at state and local health agencies, and because only a small proportion of all foodborne illnesses reported each year are identified as associated with outbreaks. Nevertheless, based on CDC’s outbreak data for the 10 years from 1999 through 2008 (the most recent year for which data are available), we identified 478 foodborne outbreaks, affecting at least 10,770 children, that were associated with schools. Although these outbreaks were associated with foods prepared or consumed at schools, they do not all relate to food served as part of school meal programs. For example, the implicated food may have been prepared at home and consumed at school as part of an event. Nevertheless, the number of outbreaks associated with schools represents about 4 percent of the approximately 12,000 foodborne outbreaks reported to CDC during that period by state and local public health agencies. As with foodborne disease outbreaks generally, most outbreaks associated with schools could not be attributed to a single contaminated ingredient, and many outbreaks’ association with a pathogen could not be confirmed by a laboratory. We found that Salmonella was among the most common bacterial pathogens identified as causing outbreaks associated with schools. Moreover, when outbreaks associated with schools could be linked to a specific food, they were most commonly associated with contaminated ingredients such as poultry, fruits, grain and bean products, dairy, beef, leafy vegetables, and pork. 
For seven of the foods it purchases, the commodity program’s specifications related to microbial contamination are more stringent than federal regulations for those foods in the commercial marketplace. Nevertheless, the program’s more-stringent purchasing specifications may not apply to all foods and pathogens of concern. For 7 of the approximately 180 commodity foods offered to schools, USDA’s commodity program has established purchasing specifications with respect to microbial contamination that are more stringent than the federal regulations for the same foods available in the commercial marketplace. For example, the commodity program will not purchase raw ground beef that tests positive for Salmonella. On the other hand, USDA regulations for commercially available raw ground beef tolerate the presence of a certain amount of Salmonella. Specifically, a facility meets regulatory performance standards if, on the basis of USDA’s regulatory inspections, 7.5 percent or less of raw ground beef samples the agency collects test positive for Salmonella. In addition, while the commodity program rejects all raw boneless or ground beef that tests positive for E. coli O157:H7, USDA regulations allow such beef to enter commerce if it is first cooked. Moreover, the commodity program, through its purchasing specifications, rejects ground turkey and diced cooked chicken if microbial testing reveals levels of certain bacteria, which indicate deficiencies in sanitation during production of these foods, are above established limits. Federal regulations, on the other hand, do not require that these same foods destined for the commercial marketplace be tested for these organisms. Table 2 lists the seven foods for which the commodity program’s purchasing specifications related to microbial contamination are more stringent than federal regulations. 
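The contrast between the 7.5-percent regulatory performance standard and the commodity program's zero-tolerance rule for Salmonella can be illustrated with a short sketch; the function names and sample counts below are hypothetical:

```python
# Hedged sketch of the two Salmonella rules described above. Under the
# regulatory performance standard, a facility passes if no more than
# 7.5 percent of its sampled raw ground beef tests positive; the commodity
# program instead rejects any lot that tests positive at all. Sample
# counts used for illustration are hypothetical.


def meets_regulatory_standard(positives, samples, max_rate=0.075):
    # True if the facility's positive rate is within the 7.5-percent
    # performance standard.
    return positives / samples <= max_rate


def acceptable_to_commodity_program(positives):
    # The commodity program will not purchase raw ground beef that tests
    # positive for Salmonella at all.
    return positives == 0
```

A facility with 3 positives out of 53 samples (about 5.7 percent) would meet the regulatory performance standard, yet each of those positive lots would still be unacceptable to the commodity program.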
Officials of USDA’s commodity program told us that more-stringent standards are needed for certain foods in the commodity program because commodity foods go to school-age children as well as populations, such as very young children, who are considered at a higher risk than the general population for serious complications from foodborne illnesses. For the remainder of the 180 commodity foods, the purchasing program requires that suppliers meet existing federal regulations for food in the commercial marketplace. For example, all ready-to-eat meat and poultry must adhere to federal regulatory limits for Listeria monocytogenes. Commodity program officials told us they selected products for more- stringent specifications on the basis of their views of the safety risk associated with different types of food. For example, in their view, raw meat products that are ground present a higher risk than other meat products because they include meat from the surface of carcasses that, if contaminated, could spread contamination throughout a large volume of finished raw ground product. Similarly, one contaminated egg could spread contamination through a large batch of liquid eggs. Also, program officials said that cooked diced chicken requires additional microbial testing because it is handled after cooking and before packaging. While officials of USDA’s commodity program told us they consult with a variety of groups and individuals in developing purchasing specifications related to microbial contamination, they did not document these informal consultations. For example, commodity program officials said some purchasing specifications, such as those for raw ground beef, were based in part on consultations with industry representatives and other agencies within USDA, while other purchasing specifications were based on information that has been gathered over time through informal consultation with internal and external food safety experts. 
Commodity program officials also stated that they consult with USDA’s meat and poultry regulatory program and food safety experts as they change purchasing specifications. In addition, commodity program officials stated that, each year, USDA’s meat and poultry regulatory program and one of USDA’s research agencies review the purchasing specifications for some of the meat, poultry, and liquid egg products to ensure that the specifications meet minimum regulatory requirements. Nevertheless, commodity program officials told us they did not maintain documentation regarding the process by which they developed their purchasing specifications for the seven products that have more-stringent specifications related to microbial contamination. In addition, we have previously reported that when agencies relied on informal coordination mechanisms and relationships with individual officials to ensure effective collaboration, the efforts may not continue once personnel move to their next assignments. While USDA’s commodity program has more-stringent purchasing specifications related to microbial contamination for seven products, it has not developed more-stringent specifications for some commodities it provides to schools that have been associated with foodborne illness and outbreaks. For example, according to data collected by CDC, poultry is among the most common foods associated with foodborne illnesses and outbreaks and has been associated with bacterial pathogens such as Salmonella, Campylobacter, and Clostridium perfringens. While most of the poultry items the commodity program provides to schools are precooked, the program does provide raw, whole chickens cut into eight pieces to schools. Despite food safety concerns about this product, however, the commodity program does not have more-stringent purchasing specifications related to testing and sampling for microbial contamination for it, as it does for other foods that present food safety risks. 
Nevertheless, according to program officials, other specifications for this product—such as holding it within certain temperatures and processing it within 7 calendar days after slaughter—are designed to control microbial contamination. In addition, USDA’s commodity program has more-stringent purchasing specifications for one of the ready-to-eat meat and poultry products it provides to schools—diced cooked chicken—but not for others. The commodity program provides schools several ready-to-eat meat and poultry products, including cubed ham and smoked turkey breasts. These products, like all ready-to-eat meat and poultry products, must not test positive for Listeria monocytogenes, in accordance with federal regulatory requirements. The commodity program, in its purchasing specifications, does not require testing for any additional pathogens or other bacteria for these food products, as it does for the cooked diced chicken it purchases. Program officials explained that they believe most of the ready-to-eat meat and poultry products they purchase present less of a contamination risk because they are placed in sterile sealed packages for cooking and shipping, but others have raised concerns about these types of products. For example, representatives of a large food distributor we interviewed stated that ready-to-eat meat and poultry products are their biggest food safety concern after raw meat and poultry. One food industry safety expert told us he thought that all of the commodity program’s ready- to-eat meat products should have more-stringent specifications related to microbial contamination. One large urban school district we interviewed required its commercial suppliers to test all ready-to-eat meat and poultry products for a variety of pathogens and other bacteria, including Clostridium perfringens, Shigella, and Staphylococcus aureus, in addition to Salmonella and Listeria monocytogenes. 
Finally, according to active surveillance conducted by CDC, the incidence of Listeria monocytogenes in 2009 was at its highest rate since 1999. Similarly, USDA’s commodity program has more-stringent purchasing specifications related to microbial contamination for some of the fresh produce items it provides to schools but not others that have been associated with foodborne illness and outbreaks. Currently, the commodity program applies purchasing specifications related to microbial contamination to minimally processed fresh produce items—sliced apples and baby carrots—but not to other fresh produce items. However, these two commodities are only offered on a trial basis to a limited number of schools. Most of the fresh produce—including most of the minimally processed items such as sliced apples and baby carrots—that schools obtain through the commodity program is purchased by DOD. The agreement between the commodity program and DOD does not require DOD to use the same purchasing specifications related to microbial testing that the commodity program uses for the produce it purchases. DOD officials told us the agency relies on federal regulations to ensure food safety but may occasionally test fresh produce items for microbial contamination. In contrast, the commodity program requires its suppliers to test for pathogens and other bacteria on an ongoing basis. Therefore, baby carrots and sliced apples purchased by the commodity program undergo more-stringent microbial testing than the baby carrots and sliced apples purchased for schools by DOD. Because commodity program specifications are more stringent than DOD specifications for these products, the commodity program initiated conversations with DOD officials in 2010 to explore having DOD use the more-stringent standards, according to commodity program officials. 
DOD purchases most of the other fresh produce distributed to schools in the commodity program and relies on current federal regulations that do not require microbial testing for produce in the commercial marketplace. DOD officials told us they do not have any more-stringent purchasing specifications related to microbial contamination for any of these produce items. While the commodity program purchases and distributes to schools a few fresh produce items—whole apples, oranges, pears, and potatoes— in addition to baby carrots and sliced apples, DOD purchases and distributes to schools several times the amount of fresh and minimally processed produce purchased by the commodity program and a wider variety of produce items, including grapes, lettuce, celery, broccoli, and spinach. In recent years, many foodborne disease outbreaks and illnesses have been associated with fresh produce, including items like those that DOD purchases for schools. For example, in 2006, bagged spinach contaminated with E. coli O157:H7 sickened an estimated 238 people, killed 5 people, and cost the industry an estimated $80 million in lost sales. As a result, the company most closely linked to this outbreak now routinely tests its spinach and other leafy greens for E. coli O157:H7. While DOD did not purchase this contaminated bagged spinach item or distribute it through the commodity program, according to DOD and USDA officials, DOD does purchase other bagged spinach products and provides them to schools. In addition, in the past year, chopped celery contaminated with Listeria monocytogenes was linked to an outbreak in one state that resulted in 5 deaths, and alfalfa sprouts contaminated with Salmonella sickened an estimated 140 people in 26 states and the District of Columbia. 
Officials we interviewed in a midsize urban school district said they do not serve what they called “high-risk” raw produce items, such as spinach and bean sprouts, because children are at a higher risk of complications from foodborne illness. Recently recognized pathogens have been associated with a variety of foods, including meat and fresh produce, that are not addressed either by the commodity program’s purchasing specifications or by federal regulations. Specifically, public health officials have shown that at least six strains of E. coli other than E. coli O157:H7 produce the same potentially deadly toxins and life-threatening illness. CDC has estimated that these strains cause approximately 113,000 illnesses and 300 hospitalizations annually in the United States. Outbreaks associated with these six strains of E. coli have involved lettuce, raw ground beef, and berries, among other foods, according to CDC. For example, in 2010, two students in New York state developed a disease with complications, such as kidney failure and anemia, after consuming romaine lettuce contaminated with one of these strains, which the school district purchased commercially. Officials in this district told us that, as a result of the outbreak, the district reduced the amount of lettuce it served and stopped purchasing the particular bagged lettuce product associated with the outbreak. Although USDA’s commodity program has not developed any purchasing specifications related to microbial contamination to address the risks from these non-O157 strains of E. coli, federal regulatory agencies have considered taking action to address them, and some food companies have begun to test their products for these strains. In October 2007, USDA, FDA, and CDC cosponsored a public meeting to consider the public health significance of non-O157 E. coli in the U.S. food supply. 
As of February 2011, USDA’s meat and poultry regulatory program is considering conducting routine testing for the presence of six non-O157 strains of E. coli in certain raw beef products. In addition, some companies in the food industry have developed their own tests and are currently using these methods to determine whether the food they produce is contaminated with strains of non-O157 E. coli. For example, we visited one produce company that routinely tests its leafy greens for these strains. In addition, USDA’s meat and poultry regulatory program has collaborated with industry to develop tests that could rapidly detect six such strains in raw ground beef. As of February 2011, officials for USDA’s meat and poultry regulatory program said that the department had developed standardized tests to detect all six strains. While virtually all food for sale in the commercial marketplace must meet federal regulatory requirements, federal agencies and others may apply more-stringent purchasing specifications in the contracts they use to purchase food. USDA’s commodity program has several purchasing specifications related to microbial contamination for raw ground beef production, process oversight, and testing. Like the commodity program, some other large purchasers of raw ground beef that we interviewed have purchasing specifications in similar areas, although the specifications differ in certain details. In response to a request from the commodity program, the National Research Council found that the scientific basis for the program’s purchasing specifications for raw ground beef, which were revised in 2010, is unclear. 
The purchasing specifications for raw ground beef set by USDA’s commodity program in 2010, which are more stringent than federal regulatory requirements for foods in the commercial marketplace, are designed to prevent, reduce, or eliminate microbial contamination through (1) steps taken when cattle are slaughtered, (2) oversight of the suppliers’ slaughter and grinding processes, and (3) microbial testing of the raw ground beef at different points in the production process from slaughter through grinding. The commodity program’s purchasing specifications include the following: Steps when cattle are slaughtered: The slaughter processes used by beef suppliers must include at least two actions—known as antimicrobial interventions—designed to reduce the level of pathogens on the beef carcasses. One of these interventions must occur at a critical point in the production process where such interventions are likely to effectively reduce pathogen levels. For example, beef suppliers may use interventions to control contamination of the carcass from the hide during skinning or from the gastrointestinal tract during evisceration, or to control the growth of pathogens when the carcass is chilled or when the finished product is stored. Suppliers may use such interventions as organic acids, hot water, or steam applied to the carcass; physical actions; or a combination of interventions in sequence. For example, a slaughter facility might combine a physical intervention, such as trimming away visible contamination on the carcass with a knife, with other antimicrobial interventions, such as spraying the carcass with very hot water, to improve the microbial safety of the beef carcass after slaughter, skinning, and evisceration. In addition, beef suppliers must validate—either through existing agency guidance or studies they conduct—that the interventions they use reduce the level of harmful pathogens on carcasses by at least 99.9 percent. 
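The 99.9-percent validation target described above is commonly expressed in validation studies as a "log reduction." A minimal conversion, with illustrative targets:

```python
import math

# Converts a percentage reduction in pathogen load to the log-reduction
# figure used in intervention validation studies. A 99.9 percent reduction
# (the validation target described above) corresponds to a 3-log
# reduction; the other targets shown are illustrative.


def log_reduction(percent_reduction):
    surviving_fraction = 1 - percent_reduction / 100.0
    return -math.log10(surviving_fraction)
```

Thus a two-intervention sequence validated at 99 percent (a 2-log reduction) would fall short of the commodity program's 99.9 percent (3-log) requirement.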
Oversight of suppliers’ slaughter and grinding processes: Before purchasing raw ground beef from a supplier, commodity program officials visit the supplier’s facilities to evaluate, among other things, its quality control programs, equipment, and documentation that the supplier’s product complies with the program’s specifications. After purchases have begun, commodity program officials periodically inspect the supplier’s facilities, processes, and documentation at a frequency dictated by the size of the purchases. For example, these inspections occur monthly for suppliers with multiple, ongoing contracts, and they occur at least once during each contract period for suppliers with intermittent contracts. If deficiencies are discovered, these inspections may occur more often. Finally, when raw ground beef is being produced, commodity program officials must be present to monitor the supplier’s performance, verify compliance with the program’s specifications, and obtain samples of raw ground beef for microbial testing, among other things. Microbial testing of raw ground beef at different points during production: Beef suppliers must send samples of raw boneless beef before and after it is ground to a laboratory, accredited by the commodity program, where the samples are tested for the full range of microbes detailed in the commodity program’s purchasing specifications. Under the current specifications, samples must be taken from each 2,000-pound lot of raw boneless beef to be ground and each 10,000-pound lot of finished raw ground beef. Samples of finished raw ground beef are selected at 15- minute intervals during grinding. Suppliers may not distribute the raw ground beef to schools until the test results are known. In the event that test results reveal the presence of Salmonella or E. coli O157:H7, the supplier must notify both the commodity program and USDA’s meat and poultry regulatory program. 
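The lot-based sampling cadence above implies a straightforward count of laboratory samples per production run. A hedged sketch with hypothetical volumes (the program's actual sample compositing and timing rules are more detailed than shown):

```python
import math

# Estimates the laboratory samples implied by the lot sizes in the 2010
# specifications described above: one sample per 2,000-lb lot of raw
# boneless beef and one per 10,000-lb lot of finished raw ground beef.
# Production volumes are hypothetical, and actual compositing rules
# may differ.


def samples_per_run(boneless_lbs, ground_lbs):
    boneless_samples = math.ceil(boneless_lbs / 2_000)
    ground_samples = math.ceil(ground_lbs / 10_000)
    return boneless_samples, ground_samples
```

Under these assumptions, a run of 50,000 lb of boneless beef ground into 50,000 lb of finished product would generate samples from 25 boneless-beef lots and 5 finished-product lots, all of which must clear testing before the beef can be distributed to schools.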
The commodity program rejects raw ground beef contaminated with these two pathogens. The commodity program uses test results of other bacteria to help ensure that the raw ground beef it distributes to schools is produced under sanitary conditions. If the levels of these bacteria exceed certain thresholds, the commodity program rejects the affected lot of raw boneless beef or ground beef. Suppliers that fail to maintain sanitary conditions are barred from producing raw boneless beef or ground beef for the commodity program until they take corrective action to restore sanitary conditions. The seven large purchasers of raw ground beef we interviewed (six large private-sector purchasers—including grocery store chains and quick- service restaurants—and one large federal purchaser) relied on purchasing specifications related to microbial contamination for raw ground beef production, process oversight, and testing that were the same or substantially similar to those used by USDA’s commodity program, with variation in such things as the number or placement of required antimicrobial interventions designed to reduce microbial contamination. The specifications used by these purchasers, like those used by the commodity program, call for more-stringent testing for microbial contamination than do federal regulations for the same foods in the commercial marketplace. Officials at a meatpacking plant we visited said that both the commodity program’s specifications and those of its large, private-sector customers include high standards with only slight differences. In addition, two large purchasers pointed out that specifications may vary depending on the intended use of the raw ground beef. For example, a quick-service restaurant chain that maintains strict control over its cooking processes may have specifications that differ from those of the commodity program and grocery store chains, which have no control over how the raw ground beef they purchase is cooked. 
The purchasing specifications shared by the seven purchasers we interviewed are generally as follows: Steps when cattle are slaughtered: All but two of the large purchasers told us they require suppliers to apply interventions on beef carcasses to reduce the level of pathogens and other bacteria, as the commodity program does. These purchasing specifications are more stringent than federal regulatory requirements. The specifications used by these purchasers differ in terms of the number of interventions to apply, where in the production process to apply the interventions, and the target level for the reduction of pathogens. Number of interventions: Although three of these purchasers, like the commodity program, require two interventions, one required three, one required seven, and another purchaser did not dictate the number of interventions, as long as its suppliers achieved a given reduction in the levels of pathogens. Where to apply interventions: Some of these purchasers specify where interventions should be applied. For example, like the commodity program, one purchaser requires that at least one intervention be applied at a critical point in the production process where such interventions are likely to effectively reduce pathogen levels. Another purchaser stipulates that both interventions it requires be applied at such critical points. Target levels for pathogen reduction: Specifications for the level of pathogen reduction ranged from removing 99 percent of pathogens to removing 99.9 percent. One purchaser did not specify a target for reduction of pathogens but requires its boneless beef suppliers to demonstrate that their processes will reduce E. coli O157:H7 to nondetectable levels. The purchaser that did not include additional measures to reduce the level of pathogens and other bacteria on beef carcasses in its purchasing specifications told us it relied on federal regulatory requirements that were designed to ensure the safety of raw ground beef. 
This purchaser also said, however, that some of its suppliers may apply interventions or other measures that are more stringent than federal regulations as part of their routine business practices.

Oversight of suppliers' slaughter and grinding processes: All the purchasers we interviewed use one or more of the following measures to oversee the performance of their raw boneless beef and ground beef suppliers: initial approval of suppliers, periodic inspections, and on-site presence during grinding. But they differ in their specifications for who must conduct the inspections and how frequently the inspections must occur, as follows:

Like the commodity program, most of the purchasers require initial approval of potential suppliers and purchase raw boneless and ground beef only from approved suppliers. For example, one purchaser said it requires that both its suppliers and grinders certify that they can meet its quality specifications before it contracts with them.

All of the purchasers told us they require periodic inspections of their beef suppliers or grinders; most use both their own employees and third parties to conduct these inspections. For example, one purchaser uses its own employees and those of its grinders to inspect its suppliers of boneless beef at least once annually. This purchaser also requires both its raw boneless beef and its raw ground beef suppliers to undergo at least one annual audit by a third party.

One purchaser had its own employees on site when its beef was being ground—as the commodity program does—because all its raw ground beef is produced either at a large company-owned facility or in its own stores.

Microbial testing of raw ground beef at different points during production: Most of the purchasers we interviewed told us they require their suppliers to sample beef before and after it is ground, to test these samples for pathogens, and to meet specified thresholds related to those pathogens.
Their specifications differed, however, in how they sampled raw boneless beef and ground beef and in the microbial testing they require, as follows:

One purchaser said it requires that samples be gathered twice from each 2,000-pound lot of boneless beef, once before the lot leaves the meatpacking plant and once when it arrives at the grinder.

Another purchaser, like the commodity program, requires samples of finished raw ground beef to be taken every 15 minutes during grinding, and one requires samples to be taken about every 9 minutes.

Like the commodity program, most of these purchasers require that their suppliers retain control of the raw ground beef until the test results are known. These purchasers reject raw boneless or ground beef contaminated with E. coli O157:H7.

One purchaser also requires suppliers to test boneless beef for bacteria that indicate whether it was produced under sanitary conditions. This purchaser said it used the results of such tests, along with other information, to evaluate the performance of its suppliers, as the commodity program does.

The one purchaser that had not developed specifications for the sampling and testing of raw boneless or ground beef relied on federal regulatory requirements, which include limits for E. coli O157:H7 and Salmonella. While it lacked such specifications for its suppliers, this purchaser may occasionally test its raw ground beef for microbial contamination.

In 2010, an expert committee convened by the National Research Council at the request of USDA's commodity program found that the scientific basis of the program's 2010 revisions to its purchasing specifications for raw ground beef was unclear. In its report, the committee noted that some specifications were based on industry practices, but it could not determine the scientific basis of those practices.
Further, it noted that other specifications appeared to have been based on information gathered through informal, ad hoc expert consultation, a method the committee deemed to be the least preferred form of evidence for developing specifications. Nevertheless, the committee found that a lack of reported outbreaks in recent years caused by either Salmonella or E. coli O157:H7 associated with raw ground beef purchased by the commodity program strongly suggested that the program’s purchasing specifications have been protective of public health. The committee did, however, recommend that the commodity program develop a systematic, transparent, and auditable system for modifying, reviewing, updating, and justifying science-based purchasing specifications for raw ground beef. The committee was also asked by USDA to compare the commodity program’s purchasing specifications to those used by other large purchasers of raw ground beef. Accordingly, the committee reviewed the purchasing specifications for raw ground beef used by 24 large corporate purchasers and found considerable variation with regard to acceptable levels of microbes. Specifically, the committee found substantial differences among the 24 purchasers in their criteria for bacteria that indicate the extent to which production conditions are sanitary, such as generic E. coli, as well as for Salmonella, Listeria monocytogenes, and E. coli O157:H7. The committee attributed the variations, in part, to the intended use of the raw ground beef. For example, specifications for raw ground beef distributed in frozen form may need to differ from purchasing specifications designed to improve the shelf life of fresh ground beef. According to its report, because the committee lacked information on the scientific basis for the corporate purchasing specifications, it could not directly compare the commodity program’s specifications with those of the corporate purchasers. 
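The purchasers' pathogen-reduction targets discussed earlier (removing 99 to 99.9 percent of pathogens) are usually expressed by food safety practitioners as "log reductions": 99 percent corresponds to a 2-log reduction and 99.9 percent to a 3-log reduction. As an illustrative aside not drawn from the report itself, a minimal Python sketch of the conversion:

```python
import math

def percent_to_log_reduction(percent_removed: float) -> float:
    """Convert a percentage of pathogens removed (e.g., 99.9)
    to the equivalent log10 reduction (e.g., 3.0)."""
    surviving_fraction = 1.0 - percent_removed / 100.0
    return -math.log10(surviving_fraction)

def log_to_percent_reduction(log_reduction: float) -> float:
    """Convert a log10 reduction back to the percent of pathogens removed."""
    return (1.0 - 10.0 ** -log_reduction) * 100.0

# The range of targets cited in the purchasers' specifications:
low = percent_to_log_reduction(99.0)    # 2-log reduction
high = percent_to_log_reduction(99.9)   # 3-log reduction
```

The log scale makes clear why the range in the specifications is larger than it looks: a 99.9 percent target leaves ten times fewer surviving pathogens than a 99 percent target.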
The commodity program revised its purchasing specifications for raw ground beef in 2010 in response to concerns expressed in the media that the program's existing specifications were not as stringent as those of large-scale purchasers of raw ground beef in the corporate sector, such as quick-service restaurants.

While all school districts must follow certain food safety practices to participate in federally funded school meal programs, the school districts we interviewed have also implemented a number of additional food safety practices. For example, some of these school districts have established purchasing specifications related to microbial contamination and have limited the kinds of foods purchased because of food safety concerns related to staff training and the adequacy of their facilities.

To participate in federally funded school meal programs, federal regulations require all school districts to, among other things, develop written food safety plans and obtain food safety inspections of their schools. Specifically, each school district must implement a food safety plan that complies with USDA regulations. USDA publishes guidance to help schools develop plans that identify and mitigate food safety hazards related to preparing, storing, and serving school meals. These plans address such things as employee hand washing, proper heating and cooling methods, documentation of food temperatures, quality assurance steps, corrective actions, and record keeping. During reviews occurring every 5 years, state officials, in collaboration with USDA regional officials, are responsible for verifying school districts' compliance with this requirement. Although USDA officials said they believe compliance is high, information on compliance with this requirement is collected at the state level but not at the national level.
These officials added that USDA and state officials work with school districts not in compliance to correct any deficiencies. All 18 school districts we interviewed provided us documentation of their food safety plans. (For a list of the school districts in our sample, see app. I.)

In addition, to help schools identify and correct immediate or persistent food safety problems, schools in each district must be inspected by relevant state or local health officials at least twice during each school year. According to the most recent data available from USDA, about 77 percent of schools in the United States met or exceeded this requirement during the 2009-2010 school year. The percentage of schools meeting the requirement for two annual inspections has increased from about 58 percent in the 2005-2006 school year, when two inspections were first required. Nevertheless, according to USDA data, about one in five schools still does not meet the requirement. Although USDA officials reported that they stress the importance of the inspections and encourage states to provide them, schools that do not meet the requirement are not penalized.

In three of the school districts in our sample, all schools had received the required two inspections during the 2009-2010 school year; the level of compliance with the requirement varied among the other school districts. Overall, 60 percent of the schools in the 18 school districts in our sample received two or more inspections during the 2009-2010 school year. However, in one large urban school district, fewer than 1 percent of the schools received two inspections. When that district is excluded from the calculation, 77 percent of schools in the remaining 17 districts met or exceeded the requirement for two annual inspections.
According to USDA data, reasons cited by schools for not meeting the requirement include insufficient staff or funding resources at state and local health departments to conduct the inspections, the need for these departments to conduct higher priority work, and the lack of inspectors in small towns and rural areas. Although a few of the school districts we interviewed mentioned similar reasons, officials in nine districts pointed to two additional issues. First, in five of the districts, at least some of the schools that did not receive two inspections were sites without kitchens, where food is delivered from kitchens at other schools. Such sites had no kitchen facilities for the local health department to inspect. According to USDA officials, the agency reminds states each year that inspections are required for food preparation and service areas in schools. Despite these reminders, we found that state officials take different approaches to these sites in their annual reporting of school inspections to USDA. For example, officials in one state count such sites as not having received the required inspections, while officials in another state exempt these schools from inspections and do not include them in their annual report to USDA. While federal regulations state that schools must obtain a minimum of two food safety inspections during each school year, they do not make a distinction between schools with and without kitchen facilities. Furthermore, USDA has not issued guidance to states and school districts that specifically addresses whether sites that do not prepare food are subject to the inspection requirement and whether states may exempt such sites from inspections.
Second, seven school districts we interviewed, including three of those that did not receive inspections at some sites that lacked kitchens, said that they had to pay local health departments for inspections, which they said diverts funds from other parts of their food service budgets. Officials in one of these districts said that, although their schools are entitled to receive one inspection per year free of charge, the district would have to pay the county for a second inspection; as a result, most of the schools in this district had received only one inspection. Fees paid by school districts for the two annual inspections ranged from $75 to $618 per school site. Officials in one large urban district estimated they spent approximately $65,000 on inspection fees in the 2009-2010 school year.

In addition to the steps school districts take to meet federal requirements, all of the school districts we contacted had implemented other steps to help ensure the safety of the meals they served. These steps include establishing purchasing specifications related to microbial contamination and food safety, considering food safety in deciding which foods to order, and other practices related to inspections and use of technology. We selected our nonprobability sample of 18 school districts to include districts more likely to have developed purchasing specifications and other food safety practices because of their size, prior experience with foodborne illnesses, and other factors. Several of the school districts in our sample have established their own microbial purchasing specifications for the food items they purchase in the commercial marketplace that are more stringent than current federal regulatory requirements. Overall, 10 of the 18 school districts we interviewed had developed purchasing specifications related to microbial contamination or, more generally, food safety.
These districts included 6 large urban school districts and 4 smaller urban and suburban districts; 2 of these districts participate in food-buying cooperatives with other districts. Five districts' purchasing specifications identify specific pathogens for which the districts ask their suppliers to test, along with acceptable limits for each. For example, 1 large urban school district requires that all frozen fully cooked meat and poultry and all ready-to-eat meat and poultry products it buys commercially be tested for certain pathogens, including Clostridium perfringens, Listeria, Salmonella, Shigella, and Staphylococcus aureus. The district rejects any products that exceed its thresholds for the presence of these and other microbes. The other 5 school districts have implemented purchasing specifications related more broadly to food safety. For example, 4 of these districts' specifications require their suppliers to have in place plans designed to reduce or eliminate microbial contamination. In addition, 5 of these 10 districts' purchasing specifications describe the districts' right to send suppliers' products for additional microbial testing, although these clauses often list neither specific pathogens to be tested for nor thresholds.

Despite some districts having taken such additional steps, none of the state officials and few of the district officials we interviewed were aware that, for seven products, the commodity program's purchasing specifications related to microbial contamination are more stringent than federal regulatory requirements for the same foods in the commercial marketplace. Among the officials in the four school districts that had some awareness of these differences, officials in two districts said they learned of the differences through media stories about the commodity program's specifications for raw ground beef.
Officials in nine of the school districts we interviewed said that greater knowledge of these differences might affect their future purchasing decisions. More specifically, they said that they could use this knowledge to make more informed choices about which foods to purchase from the commodity program and which to purchase from the commercial marketplace. For example, one district official said the information would have an impact, although it would have to be presented in context and in a way that district officials could easily understand. In 2003, we recommended that USDA's commodity program highlight on its Web page the more-stringent product safety specifications it uses when purchasing foods it provides to schools, since this would help schools ensure that the food they purchase is safe. USDA has not implemented this recommendation. While USDA has set up a Web site that includes links to online copies of the commodity program's purchasing specifications and related documents, USDA has not made clear that its purchasing specifications related to microbial contamination for seven commodity foods are more stringent than federal regulatory requirements for the same foods in the commercial marketplace.

Although factors such as cost, nutrition, and quality also influence their purchasing decisions, officials for several school districts we interviewed limit the kinds of meat and produce they buy because of concerns about microbial contamination and food safety, including concerns about their own staff's training and the adequacy of their facilities. Specifically, 9 of the 18 school districts in our sample have discontinued buying raw meat—such as ground beef, chicken, or turkey—for their school meals. Each of these districts said they purchase only precooked or processed meat products, whether through the commodity program or in the commercial marketplace.
For example, 3 large urban school districts do not purchase raw meat because they cannot ensure that the kitchen staff at the many sites in their districts can handle raw meat safely and cook it to an internal temperature that would kill pathogens. All of the school districts we interviewed reported that they trained food service staff on food safety. Nevertheless, officials in 8 of the 9 districts that no longer purchase raw meat attributed that decision, at least in part, to concerns about their staff, including staff turnover and qualifications. In addition to factors related to staff, officials in 5 districts cited concerns about the adequacy of kitchen facilities as a reason to eliminate the purchase of raw meat. For example, officials in a large urban district said that some of its schools were over 100 years old and therefore lacked modern cooking facilities; in some of its schools, the “kitchen” may be an old ball closet with ovens in it. Without adequate staff and facilities, officials in these districts said it was safer to purchase cooked or processed meat. Although half the districts we interviewed do not buy raw meat, the other half do. Officials in many of these nine school districts told us they buy raw meat because it costs less than precooked products, and their staff and facilities are adequate and able to handle it. For example, the director of one midsize urban school district’s food service department indicated that the district has tended to buy more raw meat in recent years, because it is less expensive than precooked products, and the district has the facilities to cook and cool these products safely. While these nine districts buy raw meat, four of them limit its handling in some way, such as handling it only in a small number of appropriately equipped facilities. 
For example, one small urban school district receives raw ground beef at only one of its kitchen facilities, where it is cooked in one location in that kitchen by two staff members who have been specifically trained to handle and prepare it safely. Moreover, we found that about 30 percent (39 million pounds) of all ground beef sent to schools by USDA's commodity program in the 2009-2010 school year was uncooked. Schools in every state that receives food from the commodity program received this raw ground beef. The remainder of the ground beef from the commodity program was cooked before being sent to schools.

In addition, none of the school districts we contacted reported purchasing irradiated food, such as ground beef. School officials largely said they did not buy irradiated food because parents did not want it served to their children. Officials of USDA's commodity program said that, while the program continues to offer irradiated beef products, school districts have not ordered any such products in several years. We have reported that irradiation kills 99.9 percent of the pathogens on food.

Many of the officials in the school districts we interviewed raised concerns about the safety of fresh produce that, in some cases, were similar to those raised about raw meat. While all 18 of the districts in our sample reported buying fresh produce, officials in 12 districts raised concerns about its safety. For example, 1 suburban school district stopped purchasing bagged lettuce after some of its students were sickened by it in 2010 during a multistate outbreak of foodborne illness. While the district now purchases heads of lettuce and has its own staff wash and chop it, its food service director acknowledged that the lettuce is now vulnerable to mishandling by the district's own staff. Officials in another school district said that handling fresh produce safely is a concern because of the difficulty of maintaining it at or below 41 degrees Fahrenheit in its facilities.
These officials said that if the district cannot maintain produce at a safe temperature, it might have to throw away any leftover salad, which could make fresh salads too expensive to serve. Nevertheless, 8 of the school districts in our sample indicated that the recent trend in their district has been toward buying more fresh produce. For example, 1 large urban school district indicated that it was expanding its purchases of fresh produce and the number of salad bars in its schools. In addition, 10 of the school districts we interviewed said they obtained at least some produce through the commodity program from DOD. While the remaining 8 school districts said they purchase all of their fresh produce in the commercial marketplace, none attributed this practice to concerns about the safety of produce from DOD.

In addition to establishing purchasing specifications related to microbial contamination and limiting the kinds of foods they purchase, school districts employ a variety of other practices to help ensure the safety of the food they purchase, including:

Internal inspections: Ten school districts reported that the district's own officials, usually managers, inspect individual schools' kitchen facilities on a periodic basis. For example, one large urban district reported that its officials had been trained by county health inspectors to conduct kitchen inspections, and these officials did so throughout the district.

Visiting vendors' facilities: Ten school districts reported that the districts' own officials visited food vendors' facilities before or during contract periods to learn more about the vendors' food safety procedures, among other things. For example, one district's food service director reported visiting the facilities of two of its suppliers, which helped the director understand the vendors' food production processes and their standards.
Technological procedures: Two school districts reported using technology to help monitor or improve food safety in school kitchens. For example, officials in one district centrally monitored the temperatures in all of the district's walk-in freezers and coolers, as well as the temperature of food as it was being prepared in the district's kitchens.

For seven of the commodity foods it provides to schools, USDA's commodity program has developed purchasing specifications related to microbial contamination that are more stringent than USDA's and FDA's regulatory requirements for these same foods in the commercial marketplace. The commodity program has developed such specifications because it serves populations at increased risk of foodborne illnesses and their more serious complications. Nevertheless, questions remain regarding whether the program has identified the foods and pathogens that present the highest risks to the populations the program serves. Recent outbreaks involving, among other things, various fresh produce items and non-O157 strains of toxin-producing E. coli have revealed risks not addressed by the commodity program's specifications. More broadly, questions remain regarding whether the process by which the commodity program develops these specifications is sufficiently systematic and transparent. Program officials told us they selected products for more-stringent specifications for the seven commodity foods based on their views of the safety risk associated with different types of food; that they developed these specifications through informal consultation with a variety of groups and individuals; and that they did not document this process. Moreover, although the commodity program undertook a very public revision of its purchasing specifications for ground beef in 2010, a committee of the National Research Council found that the new specifications were developed through informal, ad hoc consultations and that their scientific basis was unclear.
Development of specifications for foods offered by the program other than ground beef has not undergone a similar level of review. In addition, although all 18 of the school districts we interviewed considered food safety as part of their purchasing decisions, few were aware of the commodity program's more-stringent specifications related to microbial contamination for the seven foods. As a result, district officials lack information that could help them make more informed decisions about whether to purchase food from the commodity program or the commercial marketplace. Furthermore, without more specific guidance from the commodity program as to how states and school districts should count schools that do not obtain required health inspections because they do not prepare food on site, the program may not have accurate information on the extent to which kitchens that prepare school meals meet state and local food safety requirements.

To strengthen USDA's oversight of the safety of food purchased by its commodity program and served in federal school meal programs, we recommend that the Secretary of Agriculture instruct the commodity program to take the following three actions:

develop a systematic and transparent process to determine whether foods offered by the program require more-stringent specifications related to microbial contamination, including steps to identify pathogens, strains of pathogens, or other foods that merit more-stringent specifications; document the scientific basis used to develop the specifications; and review the specifications on a periodic basis;

share information with school districts in a more explicit form regarding the foods covered by more-stringent purchasing specifications related to microbial contamination to enable districts to make more informed choices; and

issue more specific guidance to states and school districts regarding the applicability of the regulatory requirement for food safety inspections to schools that do not prepare food.

We provided a draft of this report to USDA, the Department of Health and Human Services (HHS), and DOD for review and comment. The departments did not provide official written comments to include in our report. However, in an e-mail received April 7, 2011, the USDA liaison stated that USDA generally agreed with all of our recommendations. USDA and HHS also provided technical comments. We incorporated these technical comments into the report, as appropriate. DOD did not have any comments on the report.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Defense, and Health and Human Services; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

The overall objective of this review was to assess the U.S. Department of Agriculture's (USDA) standards and procedures to ensure the safety of food in school meal programs. Specifically, we assessed (1) the extent to which federal purchasing specifications related to microbial contamination for food in the commodity program differ from federal regulations for the same foods available in the commercial marketplace; (2) the extent to which the commodity program's purchasing specifications related to microbial contamination for raw ground beef differ from those imposed by large federal and private-sector purchasers; and (3) examples of standards and practices that exist at the state and school district level to help ensure that food procured by schools is not contaminated by pathogens.
To address the extent to which federal purchasing specifications related to microbial contamination for food in the commodity program differ from federal regulations for the same foods available in the commercial marketplace, we reviewed applicable laws and regulations. We also interviewed officials in both USDA’s commodity program and its meat and poultry regulatory program, and gathered documentation related to purchasing specifications and regulatory requirements. To determine the purchasing specifications applied by the Department of Defense (DOD) to the fresh produce it purchases for distribution to school districts through the commodity program, we interviewed DOD officials and gathered related documentation. We also gathered information on regulatory requirements for fresh produce and other foods not regulated by USDA through discussions with officials from the Food and Drug Administration (FDA). FDA officials also provided us related documentation, including agency guidance for good agricultural, manufacturing, and handling practices. We then compared the purchasing specifications used by the commodity program and by DOD with federal regulatory requirements for food sold in the commercial marketplace. In addition, we discussed these specifications and regulatory requirements with knowledgeable groups and individuals—including representatives of industry associations and consumer groups. To learn more about the extent to which outbreaks of foodborne illness are associated with schools, we analyzed information from the Centers for Disease Control and Prevention’s (CDC) Foodborne Disease Outbreak Surveillance System, which collects information reported to CDC by state and local health departments on outbreaks of foodborne illness. 
Because this information system relies on voluntarily reported outbreaks, and reporting varies greatly across states, it is not an adequate way to determine the total number of foodborne illnesses or the actual extent of outbreaks associated with schools. CDC defines such an outbreak as two or more similar illnesses that result from the consumption of a common food. We took a number of steps to assess the reliability of these data, including interviewing CDC officials regarding how the data are collected and entered, as well as electronic testing of the data. As a result of these steps, we determined that the data were sufficiently reliable for the purposes of our review.

To assess the extent to which the commodity program's purchasing specifications related to microbial contamination for raw ground beef differ from those imposed by other large federal and private-sector purchasers, we analyzed the commodity program's purchasing specifications for raw boneless beef and ground beef. We also conducted site visits to three beef slaughter and processing facilities to gather information on the slaughter and grinding process for ground beef, as well as on these suppliers' perspectives on the differences in the specifications used by the commodity program and private-sector purchasers. To gather information on the specifications used by other large purchasers of raw ground beef, we selected a nonprobability sample of private-sector companies based on input from interviews with federal officials, industry representatives, and consumer advocates. Our sample included two quick-service restaurant chains, two chains of food retailers, one food distributor, and one food service management company. We also selected DOD as a large federal purchaser of ground beef. We interviewed officials from each of these purchasers and gathered documentation regarding their purchasing specifications for boneless beef and ground beef.
In some cases, officials for private-sector companies declined to provide detailed information on one or more aspects of their specifications. We then compared the specifications related to microbial contamination of these seven large purchasers with those of the commodity program. Specifically, we compared purchasers’ specifications related to the slaughter process, their oversight of beef suppliers and grinders, and their microbial testing practices. Additionally, to gather information on the scientific basis of the commodity program’s purchasing specifications for ground beef, we reviewed the findings of a National Research Council report issued in November 2010. To identify examples of standards and practices used at the state and school district level to help ensure that food procured by schools is not contaminated by pathogens, we selected a nonprobability sample of five states and 18 school districts to review. We selected this nonprobability sample of school districts to include districts more likely to have developed purchasing specifications and other food safety practices, based on input from state and school district officials. To select this sample, we searched media reports of foodborne outbreaks involving schools in selected states over the past 10 years. We also considered factors such as geographic dispersion and differences in the state agency responsible for the commodity program. Based on these and other factors, we selected five states: California, Nebraska, New York, Texas, and Virginia. We then selected a nonprobability sample of school districts in each state. In addition to input from state officials, we considered each district’s size, indications of a prior experience with foodborne illnesses, and other factors, including whether a district used a food service management company or participated in a food-buying cooperative. 
We either visited or interviewed by phone officials in 18 school districts across the five states, including three that had been tied to foodborne outbreaks by media reports, four that were operated by or consulted with food service management companies, and six that participated in food-buying cooperatives. We selected school districts for the following localities: in California, Berkeley, Burbank, Los Angeles, San Diego, San Jose, San Marcos, Solana Beach, and Vallejo; in Nebraska, Elkhorn, Lincoln, and Omaha; in New York, Dix Hills, New York, and Wappingers Falls; in Texas, Dallas and Houston; and in Virginia, Alexandria and Arlington. We also gathered documentation from these states and school districts, including copies of food safety plans and purchasing specifications, among other things. We used the interviews and documentation to identify food safety practices used by school districts, including the extent to which their activities were consistent with federal regulatory requirements and practices the districts themselves had developed. The results from these states and districts cannot be generalized to other states and districts. We conducted this performance audit between February 2010 and May 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Cheryl A. Williams, Assistant Director; Kevin Bray; Ellen Chu; G. Michael Mikota; Justin L. Monroe; Nico Sloss; and Amy Ward-Meier made key contributions to this report. Also contributing to this report were Mitchell Karpman and Anne Rhodes-Kline.

Through its commodity program, the U.S.
Department of Agriculture (USDA) provides commodity foods at no cost to schools taking part in the national school meals programs. Commodities include raw ground beef, cheese, poultry, and fresh produce. Like federal food safety agencies, the commodity program has taken steps designed to reduce microbial contamination that can result in severe illness. GAO was asked to review (1) the extent to which the program's purchasing specifications related to microbial contamination differ from federal regulations, (2) the extent to which specifications for raw ground beef differ from those imposed by some other large purchasers, and (3) examples of schools' practices to help ensure that food is not contaminated. GAO compared the program's purchasing specifications to federal regulations for food sold commercially, gathered information from seven large purchasers of ground beef, and interviewed officials in 18 school districts in five states, selected in part because of their purchasing practices. For 7 of the approximately 180 commodity foods offered to schools, USDA's commodity program has established purchasing specifications with respect to microbial contamination that are more stringent than the federal regulations for the same foods in the commercial marketplace. For example, the commodity program will not purchase ground beef that tests positive for Salmonella bacteria, while federal regulations for commercially available ground beef tolerate the presence of a certain amount of Salmonella. Program officials told GAO that more-stringent specifications are needed for certain foods they purchase because they go to populations, such as very young children, at a higher risk for serious complications from foodborne illnesses. However, the program has not developed more-stringent specifications for some pathogens and foods that have been associated with foodborne illness, such as raw, whole chickens cut into eight pieces that the program provides to schools. 
Program officials told GAO they selected products for more-stringent specifications based on their views of the safety risk associated with different types of food; developed these specifications through informal consultation with a variety of groups; and did not document the process they used. The commodity program's purchasing specifications related to microbial contamination for raw ground beef at various processing stages are generally similar to those of some other large purchasers. The specifications used by both the commodity program and these large purchasers are more stringent than federal regulations. USDA's commodity program has several purchasing specifications related to microbial contamination for raw ground beef production, process oversight, and testing. For example, the program requires beef suppliers to take actions to reduce the level of pathogens at least twice while beef carcasses are processed. Some large purchasers of raw ground beef have purchasing specifications similar to the commodity program, although they differ in certain details. For example, of the seven large purchasers that GAO interviewed, five said they require their beef suppliers to take between two and seven actions to reduce pathogen levels on beef carcasses. While all school districts must follow certain food safety practices to participate in federally funded school meal programs, school districts that GAO interviewed have also implemented a number of additional food safety practices. Federal regulations require school districts to develop written food safety plans and to obtain food safety inspections of their schools, among other things. In addition, some of the school districts GAO interviewed have established purchasing specifications related to microbial contamination or food safety for food they purchase in the commercial marketplace, among other things. 
Nevertheless, few of the district officials GAO interviewed were aware that the commodity program's purchasing specifications for seven products are more stringent than federal regulatory requirements. Officials from half of the districts GAO interviewed said that greater knowledge of these differences would affect their future purchasing decisions by enabling them to make more informed choices. GAO recommends, among other things, that USDA strengthen its oversight of food purchased by its commodity program, by establishing a more systematic and transparent process to determine whether additional specifications should be developed related to microbial contamination. USDA generally agreed with GAO's recommendations and provided technical comments.
For fiscal year 2002, Inspectors General and their contract auditors reported that the systems for 19 of the 24 CFO Act agencies did not comply substantially with at least one of the FFMIA requirements—federal financial management systems requirements, applicable federal accounting standards, or the SGL. Auditors’ assessments of financial systems’ compliance with FFMIA for three agencies—the Department of Labor (DOL), Environmental Protection Agency (EPA), and the National Science Foundation (NSF)—changed from fiscal years 2001 to 2002. For fiscal year 2002, the auditors for DOL concluded that its systems were not in substantial compliance with the managerial cost standard and thus were not in compliance with FFMIA. Auditors for EPA and NSF found the agencies’ respective systems to be in substantial compliance, a change from the fiscal year 2001 assessments. As we have testified previously, while the number of agencies receiving clean opinions increased over the past 6 years from 11 in fiscal year 1997 to 21 for fiscal year 2002, the number of agencies reported to have systems that lacked substantial compliance with FFMIA has remained steady. While the increase in unqualified opinions is noteworthy, a more important barometer of financial systems’ capability and reliability is that the number of agencies for which auditors provided negative assurance of FFMIA compliance has remained relatively constant throughout this same period. In our view, this has led to an expectation gap. When more agencies receive clean opinions, expectations are raised that the government has sound financial management and can produce reliable, useful, and timely information on demand throughout the year, whereas FFMIA assessments offer a different perspective. For agencies equipped with modern, fully integrated financial management systems, preparation of financial statements would be more routine and much less costly.
Auditors for the remaining five agencies—the Department of Energy, EPA, the General Services Administration (GSA), NSF, and the Social Security Administration (SSA)—provided negative assurance in reporting on FFMIA compliance for fiscal year 2002. In their respective reports, they included language stating that while they did not opine as to FFMIA compliance, nothing came to their attention during the course of their planned procedures indicating that these agencies’ financial management systems did not meet FFMIA requirements. If readers do not understand the concept of negative assurance, they may have gained an incorrect impression that these systems have been fully tested by the auditors and found to be substantially compliant. Because the act requires auditors to “report whether” agency systems are substantially compliant, we believe the auditor needs to provide positive assurance, which would be a definitive statement as to whether agency financial management systems substantially comply with FFMIA, as required under the statute. This is what we will do for the financial statement audits we perform when reporting that an entity’s financial management systems were in substantial compliance. To provide positive assurance, auditors need to consider many aspects of financial management systems beyond those applicable to rendering an opinion on the financial statements. Based on our review of the fiscal year 2002 audit reports for the 19 agencies reported to have systems not in substantial compliance with one or more of FFMIA’s three requirements, we identified six primary problems affecting FFMIA noncompliance: nonintegrated financial management systems, inadequate reconciliation procedures, lack of accurate and timely recording of financial information, noncompliance with the SGL, lack of adherence to federal accounting standards, and weak security controls over information systems.
The relative frequency of these problems at the 19 agencies reported as having noncompliant systems is shown in figure 1. In addition, we caution that the occurrence of problems in a particular category may be even greater than auditors’ reports of FFMIA noncompliance would suggest because auditors may not have included all problems in their reports. FFMIA testing may not be comprehensive and other problems may exist that were not identified and reported. For example, at some agencies, the problems are so serious and well known that the auditor can readily determine that the systems are not substantially compliant without examining every facet of FFMIA compliance. The CFO Act calls for agencies to develop and maintain an integrated accounting and financial management system that complies with federal systems requirements and provides for (1) complete, reliable, consistent, and timely information that is responsive to the financial information needs of the agency and facilitates the systematic measurement of performance, (2) the development and reporting of cost management information, and (3) the integration of accounting and budgeting information. In this regard, OMB Circular A-127, Financial Management Systems, requires agencies to establish and maintain a single integrated financial management system that conforms with functional requirements published by JFMIP. An integrated financial system coordinates a number of functions to improve overall efficiency and control. For example, integrated financial management systems are designed to avoid unnecessary duplication of transaction entry and greatly lessen reconciliation issues. With integrated systems, transactions are entered only once and are available for multiple purposes or functions. Moreover, with an integrated financial management system, an agency is more likely to have reliable, useful, and timely financial information for day-to-day decision making as well as external reporting.
Agencies that do not have integrated financial management systems typically must expend major effort and resources, including in some cases hiring external consultants, to develop information that their systems should be able to provide on a daily or recurring basis. In addition, opportunities for errors are increased when agencies’ systems are not integrated. Agencies with nonintegrated financial systems are more likely to be required to devote more resources to collecting information than those with integrated systems. Auditors frequently mentioned the lack of modern, integrated financial management systems in their fiscal year 2002 audit reports. As shown in figure 1, auditors for 12 of the 19 agencies with noncompliant systems reported this as a problem. For example, auditors for the Department of Transportation (DOT) reported that its major agencies still use the Departmental Accounting and Financial Information System (DAFIS), the existing departmentwide accounting system, and cannot produce auditable financial statements based on the information in DAFIS. For example, DOT’s IG reported that DOT made about 860 adjustments outside of DAFIS totaling $51 billion in order to prepare the financial statements. DOT’s IG also reported that there were problems linking some information between DAFIS and the Federal Highway Administration’s Fiscal Management Information System (FMIS). DOT uses FMIS to record initial obligations for federal aid grants to states. However, due to problems resulting from upgrades and changes made to FMIS, not all obligations are electronically transferred from FMIS to DAFIS. As of September 30, 2002, valid obligations of about $388 million were understated.
Moreover, problems linking information also existed between Delphi, DOT’s new financial management system, and the Federal Transit Administration’s (FTA) financial feeder systems that prevented FTA from electronically processing about $350 million in payments related to its Electronic Clearing House Operation. These transactions had to be manually processed into Delphi. What is important here is that the information developed to prepare auditable annual financial statements is not available on an ongoing basis for day-to-day management of DOT’s programs and operations. As we have reported, cultural resistance to change, military service parochialism, and stovepiped operations have played a significant role in impeding previous attempts to implement broad-based reforms at the Department of Defense (DOD). The department’s stovepiped approach is most evident in its current financial management systems environment, which DOD recently estimated to include approximately 2,300 systems and systems development projects—many of which were developed in piecemeal fashion and evolved to accommodate different organizations, each with its own policies and procedures. As DOD management has acknowledged, the department’s current financial environment is comprised of many discrete systems characterized by poor integration and minimal data standardization and prevents managers from making more timely and cost-effective decisions. A reconciliation process, even if performed manually, is a valuable part of a sound financial management system. In fact, the less integrated the financial management system, the greater the need for adequate reconciliations because data are accumulated from various sources. For example, the Department of Health and Human Services (HHS) IG reported that the department’s lack of an integrated financial management system continues to impair the ability of certain operating divisions to prepare timely information. 
Moreover, certain reconciliation processes were not adequately performed to ensure that differences were properly identified, researched, and resolved in a timely manner and that account balances were complete and accurate. Reconciliations are needed to ensure that data have been recorded properly between the various systems and manual records. The Comptroller General’s Standards for Internal Control in the Federal Government highlights reconciliation as a key control activity. As shown in figure 1, auditors for 11 of the 19 agencies with noncompliant systems reported that the agencies had reconciliation problems, including difficulty reconciling their fund balance with Treasury accounts with Treasury’s records. Treasury policy requires agencies to reconcile their accounting records with Treasury records monthly, which is comparable to individuals reconciling their checkbooks to their monthly bank statements. As we recently testified, DOD had at least $7.5 billion in unexplained differences between Treasury and DOD fund activity records. Many of these differences represent disbursements made and reported to Treasury that had not yet been properly matched to obligations and recorded in DOD accounting records. In addition to these unreconciled amounts, DOD identified and reported an additional $3.6 billion in payment recording errors. These include disbursements that DOD has specifically identified as containing erroneous or missing information and that cannot be properly recorded and charged against the correct, valid fund account. DOD records many of these payment problems in suspense accounts. While DOD made $1.6 billion in unsupported adjustments to its fund balances at the end of fiscal year 2002 to account for a portion of these payment recording errors, these adjustments did not resolve the related errors. 
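The monthly reconciliation that the report compares to balancing a checkbook can be illustrated with a short sketch. The document numbers, amounts, and matching rule below are hypothetical; actual agency and Treasury fund balance records involve far more detail:

```python
# Hypothetical disbursement records: (document_number, amount).
agency_ledger = [("D001", 1500.00), ("D002", 275.50), ("D003", 980.00)]
treasury_stmt = [("D001", 1500.00), ("D002", 257.50), ("D004", 410.00)]

def reconcile(agency, treasury):
    """Match records by document number and return unexplained differences."""
    a, t = dict(agency), dict(treasury)
    differences = []
    for doc in sorted(set(a) | set(t)):
        if doc not in t:
            differences.append((doc, "recorded by agency, not reported to Treasury"))
        elif doc not in a:
            differences.append((doc, "reported to Treasury, not recorded by agency"))
        elif a[doc] != t[doc]:
            differences.append((doc, f"amount mismatch: {a[doc]:.2f} vs {t[doc]:.2f}"))
    return differences

# Each difference must be researched and resolved against the correct fund
# account, not parked in a suspense account or cleared with an unsupported adjustment.
for doc, issue in reconcile(agency_ledger, treasury_stmt):
    print(doc, issue)
```

The sketch also shows why nonintegrated systems raise the stakes: the more independent sources feed the records on each side, the more differences a monthly reconciliation must research and resolve.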
Inadequate reconciliation procedures also complicate the identification and elimination of intragovernmental activity and balances, which is one of the principal reasons we continue to disclaim an opinion on the government’s consolidated financial statements. As we testified in April 2003, agencies had not reconciled intragovernmental activity and balances with their trading partners and, as a result, information reported to Treasury is not reliable. For several years, OMB and Treasury have required CFO Act agencies to reconcile selected intragovernmental activity and balances with their trading partners. However, a substantial number of CFO Act agencies did not perform such reconciliations for fiscal years 2002 and 2001, citing such reasons as (1) trading partners not providing needed data, (2) limitations and incompatibility of agency and trading partner systems, and (3) human resource issues. For both of these years, amounts reported for federal trading partners for certain intragovernmental accounts were significantly out of balance. Actions are being taken governmentwide under OMB’s leadership to address problems associated with intragovernmental activity and balances. Auditors for 17 agencies reported the lack of accurate and timely recording of financial information for fiscal year 2002 compared to the 14 agencies for which auditors noted similar problems in their 2001 reports. Accurate and timely recording of financial information is key to successful financial management. Timely recording of transactions can facilitate accurate reporting in agencies’ financial reports and other management reports that are used to guide managerial decision making. The Comptroller General’s Standards for Internal Control in the Federal Government states that transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions.
Untimely recording of transactions during the fiscal year can result in agencies undertaking extensive manual financial statement preparation efforts at fiscal year-end that are susceptible to error and increase the risk of misstatements. Gathering financial data only at year-end does not provide adequate time to analyze transactions or account balances. Further, it impedes management’s ability throughout the year to have timely and useful information for decision making. For example, auditors reported that, for fiscal year 2002, Department of Justice (Justice) components did not adjust the status of obligations on a quarterly basis as required, and as a result, extensive manual efforts had to be performed at year-end to correct the status of obligation records. This process of reviewing the status of obligations only at the end of the year increases the risk that errors will go undetected, does not provide managers with accurate information during the year for decision making, and results in misstatements in the financial statements. Implementing the SGL at the transaction level is one of the specific requirements of FFMIA. However, as shown in figure 1, auditors for 9 of the 19 noncompliant agencies reported that the agencies’ systems did not comply with SGL requirements. The SGL promotes consistency in financial transaction processing and reporting by providing a uniform chart of accounts and pro forma transactions. Use of the SGL also provides a basis for comparison at agency and governmentwide levels. These defined accounts and pro forma transactions are used to standardize the accumulation of agency financial information, as well as enhance financial control and support financial statement preparation and other external reporting. By not implementing the SGL, agencies are challenged to provide consistent financial information across their components and functions.
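The consistency benefit of a uniform chart of accounts can be sketched as a simple crosswalk. The local account codes, four-digit standard accounts, and account names below are illustrative only; actual SGL accounts and pro forma transactions are prescribed by Treasury:

```python
# Hypothetical crosswalk from component-specific account codes to a
# standard general ledger account (real SGL numbers are set by Treasury).
CROSSWALK = {
    "CASH-OPS": "1010",      # e.g., Fund Balance With Treasury
    "AP-VENDOR": "2110",     # e.g., Accounts Payable
    "EXP-TRAVEL": "6100",    # e.g., Operating Expenses
}

def to_standard(component_entries):
    """Restate component postings in standard accounts so balances aggregate consistently."""
    totals = {}
    for local_code, amount in component_entries:
        sgl = CROSSWALK[local_code]  # fails loudly on unmapped local accounts
        totals[sgl] = totals.get(sgl, 0.0) + amount
    return totals

# Two components using different local bookkeeping post to the same standard accounts.
component_a = [("EXP-TRAVEL", 1200.0), ("CASH-OPS", -1200.0)]
component_b = [("EXP-TRAVEL", 800.0), ("AP-VENDOR", -800.0)]
print(to_standard(component_a + component_b))
```

Because both components post to the same standard accounts, their balances can be combined and compared at the agency and governmentwide levels without the manual conversion steps described for FHA below.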
As in previous years, the Department of Housing and Urban Development’s (HUD) auditors reported that the Federal Housing Administration’s (FHA) systems were noncompliant with the SGL for fiscal year 2002 because FHA must use several manual processing steps to convert its commercial accounts to SGL accounts. FHA’s 19 legacy insurance systems, which fed transactions to its commercial general ledger system, lacked the capabilities to process transactions in the SGL format. Therefore, FHA provided only consolidated summary-level data to HUD’s Central Accounting and Program System (HUDCAPS). As we reported, FHA used several manual processing steps to provide summary-level data, including the use of personal-computer-based software to convert the summary-level commercial accounts to the government SGL and to transfer the balances to HUDCAPS. This process did not comply with JFMIP requirements that the core financial system provide for automated month- and year-end closing of SGL accounts and the roll-over of the SGL account balances. One of FFMIA’s requirements is that agencies’ financial management systems account for transactions in accordance with federal accounting standards. Agencies face significant challenges implementing these standards. As shown in figure 1, auditors for 13 of the 19 agencies with noncompliant systems reported that these agencies had problems complying with one or more federal accounting standards. Auditors reported that agencies are having problems implementing standards that have been in effect for some time, as well as standards that have been promulgated in the last few years. For example, auditors for three agencies—DOD, Justice, and the Federal Emergency Management Agency (FEMA)—reported weaknesses in compliance with Statement of Federal Financial Accounting Standards (SFFAS) No. 6, Accounting for Property, Plant, and Equipment, which became effective for fiscal year 1998.
Auditors for DOD reported that DOD did not capture the correct acquisition date and cost of its property, plant, and equipment, due to system limitations. Therefore, DOD could not provide reliable information for reporting account balances and computing depreciation. Auditors for two agencies—HUD and Justice—reported weaknesses in compliance with SFFAS No. 7, Revenue and Other Financing Sources, which also became effective for fiscal year 1998. For example, auditors reported a material weakness for FHA’s budget execution and fund control. According to the auditors, FHA’s financial systems and processes are not capable of fully monitoring and controlling budgetary resources. Finally, auditors for three agencies—the Agency for International Development (AID), the National Aeronautics and Space Administration (NASA), and the Nuclear Regulatory Commission (NRC)—reported problems implementing SFFAS No. 10, Accounting for Internal Use Software, which became effective at the beginning of fiscal year 2001. For example, auditors reported that NASA’s policies and procedures do not specifically address purchasing software as part of a package of products and services. In their testing, NASA’s auditors identified costs that were originally recorded as expenses but should have been capitalized as assets. Managerial cost information is required by the CFO Act of 1990, and since 1998 by a federal accounting standard. Auditors for five agencies reported problems implementing SFFAS No. 4, Managerial Cost Accounting Concepts and Standards. For example, auditors for DOL reported that the department has not developed the capability to routinely report the cost of outputs used to manage program operations at the operating program and activity levels. Moreover, DOL does not use managerial cost information for purposes of performance measurement, planning, budgeting, or forecasting.
At DOT, auditors stated that its agencies, other than the Federal Aviation Administration (FAA) and the U.S. Coast Guard, have begun to identify requirements for implementing cost accounting systems. DOT’s existing accounting system, DAFIS, does not have the capability to capture full costs, including direct and indirect costs assigned to DOT programs. The Secretary recently advised OMB that as the remaining DOT agencies migrate to Delphi, DOT’s new core financial system, Delphi will provide them with enhanced cost accounting capabilities. Managerial cost information is critical for implementing the PMA. According to the PMA, the accomplishment of the other four crosscutting initiatives will matter little without the integration of agency budgets with performance. Although the lack of a consistent information and reporting framework for performance, budgeting, and accounting may obscure how well government programs are performing as well as inhibit comparisons, no one presentation can meet all users’ needs. Any framework should support an understanding of the links between performance, budgeting, and accounting information measured and reported for different purposes. However, even the most meaningful links between performance results and resources consumed are only as good as the underlying data. Moreover, this link between resources consumed and performance results is necessary to make public-private competition decisions as part of competitive sourcing. Therefore, agencies must address long-standing problems within their financial systems. As agencies implement and upgrade their financial management systems, opportunities exist for developing cost management information as an integral part of these systems to provide important information that is timely, reliable, and useful. As we recently reported, DOD’s continuing inability to capture and report the full cost of its programs represents one of the most significant impediments facing the department.
DOD does not have the systems and processes in place to capture the required cost information from the hundreds of millions of transactions it processes each year. Lacking complete and accurate overall life-cycle cost information for weapons systems impairs DOD’s and congressional decisionmakers’ ability to make fully informed decisions about which weapons, or how many, to buy. DOD has acknowledged that the lack of a cost accounting system is its largest impediment to controlling and managing weapon systems costs. Information security weaknesses are one of the frequently cited reasons for noncompliance with FFMIA and are a major concern for federal agencies and the general public. These weaknesses are placing enormous amounts of government assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. Auditors for all 19 of the agencies reported as noncompliant with FFMIA identified weaknesses in security controls over information systems. Unresolved information security weaknesses could adversely affect the ability of agencies to produce accurate data for decision making and financial reporting because such weaknesses could compromise the reliability and availability of data that are recorded in or transmitted by an agency’s financial management system. General controls are the policies, procedures, and technical controls that apply to all or a large segment of an entity’s information systems and help ensure their proper operation. 
The six major areas are (1) security program management, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented, (2) access controls, which ensure that only authorized individuals can read, alter, or delete data, (3) software development and change controls, which ensure that only authorized software programs are implemented, (4) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection, (5) operating systems controls, which protect sensitive programs that support multiple applications from tampering and misuse, and (6) service continuity, which ensures that computer-dependent operations experience no significant disruption. As we discussed in our April 2003 testimony, our analyses of audit reports issued from October 2001 through October 2002 for 24 of the largest federal agencies continued to show significant weaknesses in federal computer systems that put critical operations and assets at risk. Weaknesses continued to be reported in each of the 24 agencies included in our review, and they covered all six major areas of general controls. Although our analyses showed that most agencies had significant weaknesses in these six control areas, weaknesses were most often cited for access controls and security program management. Since 1997, GAO has considered information security a governmentwide high-risk area. As shown by our work and work performed by the IGs, security program management continues to be a widespread problem. Concerned with reports of significant weaknesses in federal computer systems that make them vulnerable to attack, the Congress enacted Government Information Security Reform provisions (commonly known as GISRA) to reduce these risks and provide more effective oversight of federal information security.
GISRA required agencies to implement an information security program that is founded on a continuing risk management cycle and largely incorporates existing security policies found in OMB Circular A-130, Management of Federal Information Resources. GISRA provided an overall framework for managing information security and established new annual review, independent evaluation, and reporting requirements to help ensure agency implementation and both OMB and congressional oversight. In its required fiscal year 2002 GISRA report to the Congress, OMB stated that the federal government had made significant strides in addressing serious and pervasive information technology security problems, but that more needed to be done, particularly to address both the governmentwide weaknesses identified in its fiscal year 2001 report to the Congress and new challenges. Also, OMB reported significant progress in agencies’ information technology security performance, primarily as indicated by quantitative governmentwide performance measures that OMB required agencies to disclose beginning with their fiscal year 2002 reports. These include measures such as the number of systems that have been assessed for risk, have an up-to-date security plan, and for which security controls have been tested. As discussed in our June 2003 testimony, the governmentwide weaknesses identified by OMB, as well as the limited progress in implementing key information security requirements, continue to emphasize that, overall, agencies are not effectively implementing and managing their information security programs. For example, of the 24 large federal agencies we reviewed, 11 reported that they had assessed risk for 90 to 100 percent of their systems for fiscal year 2002, but 8 reported that they had assessed risk for less than half of their systems. 
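As a rough illustration of how such quantitative measures can be tabulated, the sketch below computes the share of an agency's systems meeting each requirement. The system inventory and flag names are hypothetical, for illustration only; this is not OMB's actual methodology or data.

```python
# Illustrative tabulation of GISRA-style performance measures, such as the
# percentage of an agency's systems that have been assessed for risk.
# The inventory below is hypothetical.

def percent_meeting(systems, criterion):
    """Return the percentage of systems satisfying a given criterion."""
    if not systems:
        return 0.0
    met = sum(1 for s in systems if s.get(criterion, False))
    return round(100.0 * met / len(systems), 1)

# Hypothetical inventory: each record flags which requirements a system meets.
inventory = [
    {"name": "payroll",  "risk_assessed": True,  "security_plan": True,  "controls_tested": False},
    {"name": "grants",   "risk_assessed": True,  "security_plan": False, "controls_tested": False},
    {"name": "ledger",   "risk_assessed": False, "security_plan": True,  "controls_tested": True},
    {"name": "benefits", "risk_assessed": True,  "security_plan": True,  "controls_tested": True},
]

for measure in ("risk_assessed", "security_plan", "controls_tested"):
    print(measure, percent_meeting(inventory, measure))
# risk_assessed 75.0, security_plan 75.0, controls_tested 50.0
```

An agency reporting a value of 90 to 100 for a measure would fall into the higher-performing group described above, while a value below 50 would fall into the lagging group.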
The information security program, evaluation, and reporting requirements established by GISRA have been permanently authorized and strengthened through the recently enacted Federal Information Security Management Act of 2002 (FISMA). In addition, FISMA provisions establish additional requirements that can assist the agencies in implementing effective information security programs, help ensure that agency systems incorporate appropriate controls, and provide information for administration and congressional oversight. These requirements include the designation and establishment of specific responsibilities for an agency senior information security officer, implementation of minimum information security requirements for agency information and information systems, and required agency reporting to the Congress. Agencies’ fiscal year 2003 FISMA reports, due to OMB in September 2003, should provide additional information on the status of agencies’ efforts to implement federal information security requirements. In addition, FISMA requires each agency to report any significant deficiency in an information security policy, procedure, or practice relating to financial management systems as an instance of a lack of substantial compliance under FFMIA. The continuing trend of noncompliance with FFMIA indicates the overall long-standing poor condition of agency financial systems. Correcting the systems problems is a difficult challenge for agencies because of the age and poor condition of their critical financial systems. Some of the federal government’s computer systems were originally designed and developed years ago and do not meet current systems requirements. These legacy systems cannot provide reliable financial information for key governmentwide initiatives, such as integrating budget and performance information. Across government, agencies have many efforts underway to implement or upgrade financial systems to alleviate long-standing weaknesses in financial management. 
As we recently reported, as of September 30, 2002, 17 agencies advised us that they were planning to or were in the process of implementing a new core financial system. Of these 17 agencies, 11 had selected certified software. The other 6 agencies have not reached the software selection phase of their acquisition process. Implementing a core financial system that has been certified does not guarantee that these agencies will have financial systems that are compliant with FFMIA. Certification of core financial systems and testing vendor COTS packages help ensure that financial management system requirements and the vendor software remain aligned. One critical factor affecting FFMIA compliance is the integration of the core financial system with the agency’s administrative and programmatic systems and the validity and completeness of data from these systems. Other factors affecting a COTS core financial system’s ability to comply with FFMIA include how the software package works in the agency’s environment, whether any modifications or customizations have been made to the software, and the success of converting data from legacy systems to new systems. As of September 30, 2002, target implementation dates for 16 of the 17 agencies planning to implement new core financial systems ranged from fiscal years 2003 to 2008. One agency—DOD—had not yet determined its target date for full implementation. As shown in figure 2, 3 of the 16 agencies—Agriculture, GSA, and NASA—planned to complete implementation in fiscal year 2003. Three other agencies—SSA, Commerce, and DOT—planned to complete their implementations in fiscal year 2004. The Department of Energy established fiscal year 2005 as its target implementation date and 3 agencies—the departments of State and Veterans Affairs and AID—have targeted fiscal year 2006 for completion. Moreover, as shown in figure 2, 4 agencies—DOL, HHS, EPA, and HUD—have set fiscal year 2007 as their implementation target date. 
Finally, 2 agencies—the Departments of the Interior and Justice—projected fiscal year 2008 for completion of their core financial systems implementation. The remaining 7 of the 24 CFO Act agencies that advised us that they had no plans to implement a new system had either recently implemented a new core financial system in the last several years or were not planning to implement an agencywide core financial system. Five of the 7 agencies—the Department of Education, NSF, NRC, the Small Business Administration (SBA), and OPM—had fully implemented new core financial systems since the beginning of fiscal year 2001. FEMA had implemented a new system prior to fiscal year 2001. The remaining agency, Treasury, is not planning to implement an agencywide core financial system, but several of its subcomponent agencies—including the Internal Revenue Service and the Office of the Comptroller of the Currency—are in the process of implementing core financial system software packages. In their performance and accountability reports, management for some agencies stated that full implementation of these new systems will address their systems’ substantial noncompliance with FFMIA. However, as previously mentioned, implementation of a new core financial system may not resolve all of an agency’s financial management weaknesses because of the myriad of problems affecting agencies beyond their core financial systems. Nevertheless, it is imperative that agencies adopt leading practices to help ensure successful systems implementation. Implementing new financial management systems provides a foundation for improved financial management, including enhanced financial reporting capabilities that will help financial managers meet OMB’s accelerated reporting deadlines and make better financial management decisions due to more timely information. 
Successful implementation of financial management systems has been a continuous challenge for both federal agencies and private sector entities. In the past, federal agencies have experienced setbacks and delays in their implementation processes. These delays were caused by various factors, including a lack of executive- level involvement, poor communication between managers and users, and inadequate project planning. For example, our work at NASA has shown the need for consistent executive support, communication with all stakeholders, full identification of user requirements, and adequate planning. Recent work at NASA illustrates some of the specific problems agencies are encountering in implementing JFMIP-certified financial systems. In April 2000, NASA began its Integrated Financial Management Program (IFMP), its third attempt in recent years at modernizing financial processes and systems. NASA’s previous two efforts were eventually abandoned after a total of 12 years and a reported $180 million in spending. As part of this third effort, NASA recently implemented a new core financial module that was expected to provide financial and program managers with timely, consistent, and reliable cost and performance information for management decisions. However, earlier this year we reported that NASA’s core financial module was not being implemented to accommodate the information needed by program managers, cost estimators, and the Congress. The need for ongoing communication between project managers and systems users is crucial to any successful systems implementation project. Project managers need to understand the basic requirements of users, while users should be involved in the project’s planning process. NASA’s program officials chose to defer the development of some functions and related user requirements in order to expedite the systems implementation process. 
As a result, the new system will not meet the needs of some key users who will continue to rely on information from nonintegrated programs outside of the core financial module, or use other labor-intensive means, to capture the data they need to manage programs. NASA has also not followed certain other best practices for acquiring and implementing its new financial management system. NASA’s implementation plan calls for the system to be constructed using commercial components; however, NASA has not analyzed the interdependencies of the various subsystems. When constructing a system from commercial components, it is essential to understand the features and characteristics of each component in order to select compatible systems that can be integrated without having to build and maintain expensive interfaces. By acquiring components without first understanding their relationships, NASA has increased its risks of implementing a system that will not optimize mission performance, and that will cost more and take longer to implement than necessary. Private sector entities have also encountered a number of challenges and setbacks when implementing new systems. These challenges have included competition between internal organizational units, user resistance to the new systems, and frequent changes in management and to underlying corporate strategy. Entities are overcoming these challenges because better tools have been created to monitor and control progress, and because skilled project managers with better management processes are being used. The Standish Group International, Inc. (Standish Group) has reported that the number of successful systems implementation projects in the private sector is increasing. From 1994 to 2000, successful projects increased from 28,000 to 78,000. The Standish Group, through its research, has identified 10 project success factors. 
These factors include user involvement, executive support, experienced project managers, firm basic requirements, clear business objectives, minimized scope, standard software infrastructure, formal methodology, reliable estimates, and other factors. Also, according to the Standish Group, although no project requires all 10 factors to be successful, the more factors that are present in the project strategy, the higher the chance of a successful implementation. As discussed above, many of these factors have been challenges for both private sector and federal entities. By its very nature, the implementation of a new financial management system is a risky proposition. Therefore, it is crucial that federal departments and agencies follow accepted best practices and embrace as many of the key characteristics for successful implementation projects as possible to help minimize the risk of failed projects and to produce systems that provide the necessary data for management’s needs. Our executive guide on creating value through world-class financial management describes 11 practices critical for establishing and maintaining sound financial operations. These practices include reengineering processes in conjunction with new technology. As a result, using commercial components such as COTS packages may require significant changes in the way federal departments conduct their business. According to the leading finance organizations that formed the basis for our executive guide, a key to successful implementation of COTS systems is reengineering business processes to fit the new software applications that are based on best practices. Moreover, OMB’s former Associate Director for Information Technology and e-Government has stated that “IT will not solve management problems—re-engineering processes will.” The conversion of data from an old system to a new system is also critical. In December 2002, JFMIP issued its White Paper: Financial Systems Data Conversion – Considerations. 
The purpose of this JFMIP document is to raise awareness of financial systems data conversion considerations to be addressed by financial management executives and project managers when planning or implementing a new financial management system. The JFMIP paper addresses (1) key considerations regarding data conversion and cutover to the new system, (2) best approaches for completing the data conversion and cutover, and (3) ways to reduce the risks associated with these approaches. As we have discussed, the goal of FFMIA is for agencies to have timely, reliable, and accurate information with which to make informed decisions and to ensure accountability on an ongoing basis. Figure 3 shows the three levels of the pyramid that result in the end goal, accountability and useful management information. The bottom level of the pyramid is the legislative framework that underpins the improvement of the general and financial management of the federal government. The second level shows the drivers that build on the legislative requirements and influence agency actions to meet these requirements. The three drivers are (1) congressional and other oversight, (2) the activities of the JFMIP Principals, and (3) the PMA. The third level of the pyramid represents the key success factors for accountability and meaningful management information—integrating core and feeder financial systems, producing reliable financial and performance data for reporting, and ensuring effective internal control. The result of these three levels, as shown at the top of the pyramid, is accountability and meaningful management information needed to assess and improve the government’s effectiveness, financial condition, and operating performance. The leadership demonstrated by the Congress has been an important catalyst to reforming financial management in the federal government. 
As previously discussed, the legislative framework provided by the CFO Act and FFMIA, among others, produced a solid foundation to stimulate needed change. For example, in November 2002, the Congress enacted the Accountability of Tax Dollars Act of 2002 to extend the financial statement audit requirements for CFO Act agencies to most executive branch agencies. In addition, there is value in sustained congressional interest in these issues, as demonstrated by hearings on federal financial management and reform held over the past several years. It will be key that the appropriations, budget, authorizing, and oversight committees hold agency top management accountable for resolving these problems and that they support improvement efforts. The continued attention by the Congress to these issues will be critical to sustaining momentum for financial management reform. Starting in August 2001, the JFMIP Principals have been meeting regularly to deliberate and reach agreements focused on financial management reform issues including (1) defining success measures for financial performance that go far beyond an unqualified audit opinion, (2) significantly accelerating financial statement reporting to improve timeliness for decision making, and (3) addressing difficult accounting and reporting issues, including impediments to an audit opinion on the federal government’s consolidated financial statements. This forum has provided an opportunity to reach decisions on key issues and undertake strategic activities that reinforce the effectiveness of groups such as the CFO Council in making progress toward improved federal financial management. In fiscal year 2002, the JFMIP Principals continued the series of these deliberative meetings. Continued personal involvement of the JFMIP Principals is critical to the full and successful implementation of federal financial management reform and to providing greater transparency and accountability in managing federal programs and resources. 
The PMA, being implemented by the administration as an agenda for improving the management and performance of the federal government, targets the most apparent deficiencies where the opportunity to improve performance is the greatest. While FFMIA implementation relates directly to the improved financial performance initiative, development and maintenance of FFMIA-compliant systems will also affect the implementation of the other four initiatives. Furthermore, the modernization of agency financial management systems, as envisioned by FFMIA, is critical to the success of all of these initiatives. Notably, OMB is developing a federal enterprise architecture that will affect the government’s ability to make significant progress across the PMA. For example, as part of the e-gov initiative, the number of federal payroll providers is being consolidated. Numerous agencies had targeted their payroll operations for costly modernization efforts. According to OMB, millions of dollars will be saved through shared resources and processes and by modernizing on a cross-agency and governmentwide basis. The administration’s implementation of its Program Assessment Rating Tool (PART) relates specifically to the PMA initiative of integration of budget and performance information. Reliable cost data, so crucial to effective FFMIA implementation, is critical not only for the improved financial performance and budget and performance integration initiatives, but also for competitive sourcing. For effective management, this cost information must not only be timely and reliable, but also both useful and used. The administration is using the Executive Branch Management Scorecard, based on governmentwide standards for success, to highlight agencies’ progress in achieving the improvements embodied in the PMA. OMB uses a grading system of red, yellow, and green to indicate agencies’ status in achieving the standards for success for each of the five crosscutting initiatives. 
It also assesses and reports progress using a similar “stoplight” system. The focus that the administration’s scorecard approach brings to improving management and performance, including financial management performance, is certainly a step in the right direction. The value of the scorecard is not in the scoring per se, but the degree to which the scores lead to sustained focus and demonstrable improvements. This will depend on continued efforts to assess progress and maintain accountability to ensure that the agencies are able to, in fact, improve their performance. It will be important that there be continuous rigor in the scoring process for this approach to be credible and effective in providing incentives that produce lasting results. Also, it is important to recognize that many of the challenges the federal government faces, such as improving financial management, are long-standing and complex, and will require sustained attention. The primary purpose of FFMIA is to ensure that agency financial management systems routinely provide reliable, useful, and timely financial information so that government leaders will be better positioned to invest resources, reduce costs, oversee programs, and hold agency managers accountable for the way they run programs. While many agencies are receiving unqualified opinions on their financial statements, auditor determinations of FFMIA compliance are lagging behind. To achieve the financial management improvements envisioned by the CFO Act, FFMIA, and more recently, the President’s Management Agenda, agencies need to modernize their financial systems to generate reliable, useful, and timely financial information throughout the year and at year-end. However, as we have discussed today, agencies are facing significant challenges in implementing new financial management systems. 
We are seeing a strong commitment from the President, the JFMIP Principals, and the Secretaries of major departments to ensure that these needed modernizations come to fruition. This commitment is critical to the success of the efforts under way as well as those still in a formative stage, and must be sustained. Finally, Mr. Chairman, the leadership demonstrated by you and the members of this Subcommittee is an important catalyst to reforming financial management in the federal government. Continued attention to these issues will be critical to sustaining momentum on financial management reforms. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further information about this statement, please contact Kay L. Daly at (202) 512-9312. Other key contributors to this testimony include Sandra S. Silzer and Bridget A. Skjoldal.

The Federal Financial Management Improvement Act of 1996 (FFMIA) requires Chief Financial Officers (CFO) Act agencies to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) federal accounting standards, and (3) the U.S. Government Standard General Ledger. Most federal agencies face long-standing challenges, which are discussed in greater detail in our mandated September 2003 report, Sustained Efforts Needed to Achieve FFMIA Accountability (GAO-03-1062). In light of these circumstances, Congress asked GAO to testify about recurring financial management systems problems and agencies' efforts to upgrade their systems. The results of the fiscal year 2002 FFMIA assessments performed by agency inspectors general or their contract auditors again show that the same types of problems continue to plague the financial management systems used by the CFO Act agencies. 
While much more severe at some agencies than others, the nature and severity of the problems indicate that overall, agency management lacks the full range of information needed for accountability, performance reporting, and decision making. Audit reports highlight six recurring problems that have been consistently reported for those agencies whose auditors reported noncompliant systems. Agencies have recognized the seriousness of the financial systems weaknesses, and have many efforts underway to implement or upgrade financial systems to alleviate long-standing problems. As of September 30, 2002, 17 CFO Act agencies advised us they were planning to or were in the process of implementing a new core financial system. It is imperative that agencies adopt leading practices, such as top management commitment and business process reengineering, to ensure successful systems implementation and to avoid complicating factors, such as poor communication and inadequate project planning, that have hampered some agencies' efforts in the past. Congressional oversight, the Joint Financial Management Improvement Program Principals, and the President's Management Agenda are driving forces behind several governmentwide efforts now underway to improve federal financial management. Continued attention by these key drivers is critical to sustaining agencies' efforts to improve their financial management systems. 
The passage of PRWORA significantly limited the conditions under which legal permanent residents (LPR) are eligible for federal means-tested public benefits. LPRs are noncitizens who are legally permitted to live permanently in the United States and include those who obtain this status through the sponsorship of a family member, through an employer-based preference, or through a grant of asylum or refugee status. Benefits affected by the law include TANF, which provides time-limited cash assistance and other support services; Medicaid, which provides health care assistance; SNAP, which provides food assistance; and SSI, which provides cash assistance to the aged, blind, and disabled. One particular subgroup of LPRs affected by the PRWORA changes to eligibility requirements for TANF, Medicaid, SNAP, and SSI includes those who obtained their LPR status through sponsorship by a relative who is also an LPR or U.S. citizen. Specifically, sponsored noncitizens who entered the United States on or after the passage of PRWORA on August 22, 1996, are generally only eligible for TANF, Medicaid, and SNAP after they have been in the United States for 5 years and eligible for SSI after they have been credited with 40 quarters of work. Certain veterans, active duty military, and their spouses and children are exempted from these time- and work-related criteria, as are children under the age of 18 who apply for SNAP benefits. As with all benefit applicants, after sponsored noncitizens are determined to be qualified for benefits based on their noncitizen status, their eligibility is assessed on other criteria. For example, because these benefits are means-tested, applicants’ income and asset information is reviewed to determine whether they fall below the established financial eligibility threshold for each benefit. 
At the same time that it limited the conditions under which sponsored noncitizens are eligible for benefits, PRWORA, along with IIRIRA, strengthened the requirement that sponsors demonstrate their ability to provide financial support, if needed, to immigrating noncitizens. Consequently, since December 19, 1997, a sponsor must sign a legally binding affidavit of support as part of each sponsored noncitizen’s immigration application, demonstrating that the sponsor’s income is at least 125 percent of the federal poverty guidelines. The affidavit, as a formal contract between the sponsor and the noncitizen, also specifies that the sponsor will provide necessary support to maintain the noncitizen at an annual income of no less than 125 percent of the federal poverty guidelines while the affidavit is enforceable. The goal of the strengthened affidavit, as stated in PRWORA, is to ensure that sponsored noncitizens do not become public charges. Sponsor deeming is the treatment of the income and resources of the noncitizen’s sponsor (and those of the sponsor’s spouse, if any) as the applicant’s own when determining benefit eligibility and benefit amounts. Specifically, when the sponsor’s income is deemed, it is added to that of the applicant, and that sum is compared with the benefit’s financial eligibility threshold. Therefore, if sponsor deeming occurs, benefits are only granted to sponsored noncitizens when both they and their sponsors are sufficiently low-income. Policies requiring sponsor deeming for noncitizen benefit applicants existed prior to 1996. However, PRWORA strengthened these deeming policies by generally extending the deeming period from 3 years to when the noncitizen naturalizes or has been credited with 40 quarters of work in the United States, or when the sponsor dies. Medicaid was also later added as a benefit subject to sponsor deeming, in addition to TANF, SSI, and SNAP. 
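The deeming comparison described above can be sketched as follows. This is a minimal illustration only: the income figures and threshold are placeholder values (actual thresholds vary by benefit and household size), and real deeming rules include benefit-specific exclusions and allocations not modeled here.

```python
# Simplified sketch of sponsor deeming: the sponsor's income (and that of
# the sponsor's spouse, if any) is added to the applicant's income, and the
# sum is compared with the benefit's financial eligibility threshold.
# All dollar amounts below are illustrative placeholders.

def deemed_income(applicant_income, sponsor_income, sponsor_spouse_income=0.0):
    """Attribute the sponsor's (and sponsor's spouse's) income to the applicant."""
    return applicant_income + sponsor_income + sponsor_spouse_income

def income_eligible(applicant_income, sponsor_income, threshold,
                    sponsor_spouse_income=0.0):
    """Eligible only if the combined (deemed) income falls below the threshold."""
    total = deemed_income(applicant_income, sponsor_income, sponsor_spouse_income)
    return total < threshold

# A low-income applicant alone would fall under the (illustrative) threshold,
# but deeming the sponsor's income pushes the total above it.
print(income_eligible(800.0, 0.0, threshold=1500.0))     # True
print(income_eligible(800.0, 2500.0, threshold=1500.0))  # False
```

This makes concrete why, as the text notes, deemed applicants are granted benefits only when both they and their sponsors are sufficiently low-income.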
IIRIRA, which was passed shortly after PRWORA, specified two exceptions to sponsor deeming for battery and indigence. A battery exception to sponsor deeming can be made by a benefit agency if a sponsored noncitizen benefit applicant is a battered spouse, child, or parent or child of a battered person. An indigence exception to sponsor deeming can be made by a benefit agency if a sponsored noncitizen benefit applicant is unable to obtain food and shelter despite any assistance provided by the sponsor or others. These cases may include instances in which the sponsor has abandoned the noncitizen or is otherwise unable to provide the noncitizen with sufficient financial support. For any case in which the indigence exception is applied, the administering agency processing the application must send details related to the case, including the name of the sponsored noncitizen and the sponsor, to DHS. Both exceptions may be granted for 1 year, and agencies may extend the exceptions in certain circumstances after reassessing the case at that time. Sponsor repayment is the collection of benefit costs paid to a sponsored noncitizen by a benefit administering agency from that person’s sponsor. This provision, which originated with PRWORA, is designed to legally enforce the affidavit of support, which pledges a sponsor’s continuous financial support of the noncitizen. Accordingly, under the law, whenever a sponsored noncitizen receives federal means-tested public benefits, the administering agency that provided the benefit must request repayment of the benefit costs from the sponsor. If sponsors do not repay benefits after the agency’s request, the agency may also pursue repayment from the sponsor through court action, if it so chooses. These sponsor repayment provisions are separate from the traditional benefit recovery provisions used in cases of benefit fraud or payment error. 
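The repayment sequence reduces to a simple decision rule: the agency issues a written request, and under DHS regulations on affidavits of support, litigation becomes an option if the sponsor has neither repaid nor indicated a willingness to pay within 45 days of that request. The sketch below is illustrative only; the function and status strings are assumptions, not drawn from any actual agency system.

```python
# Hedged sketch of the sponsor-repayment decision rule: after the written
# request, the agency waits up to 45 days for the sponsor to repay or
# indicate willingness to pay before litigation becomes an option.

from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=45)

def next_repayment_action(request_sent, today, sponsor_responded):
    """Determine the agency's next step in pursuing sponsor repayment."""
    if sponsor_responded:
        return "process repayment"
    if today - request_sent <= RESPONSE_WINDOW:
        return "await response"
    return "may initiate litigation"

print(next_repayment_action(date(2009, 1, 1), date(2009, 1, 30), False))
# await response (29 days elapsed)
print(next_repayment_action(date(2009, 1, 1), date(2009, 3, 1), False))
# may initiate litigation (59 days elapsed)
```

Note that litigation is permissive, not mandatory: per the text, the agency "may" pursue court action if it so chooses.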
Federal and state agencies have different roles in overseeing and administering the four federal means-tested public benefits. SSI is overseen and administered by federal SSA staff across the country. In contrast, TANF, Medicaid, and SNAP are generally overseen by the relevant federal agencies and administered by states, although specific roles vary by program (see table 1). For all four benefits, administering agencies are responsible for deeming sponsor income during the application process and pursuing sponsor repayment of benefits received by sponsored noncitizens. In the case of SSI, these administering agencies are SSA regional and field offices. For TANF, Medicaid, and SNAP, the administering agencies are state and local benefit offices. Sponsor deeming is implemented by staff at the administering agency as part of the benefit application process. While agencies vary to some degree in the methods used to process applications, the method used for noncitizen applicants for whom deeming applies generally involves several steps (see fig. 1). First, eligibility workers processing these cases typically review an applicant’s documents, such as an LPR or green card, to determine noncitizen status and verify the information they contain using DHS’s U.S. Citizenship and Immigration Services (USCIS) automated SAVE system. Workers may then determine whether an applicant has a sponsor by obtaining verification from USCIS or by obtaining proof from the noncitizen. Once workers determine that the noncitizen applicant is sponsored, they assess the applicant’s benefit eligibility based on the benefit-specific criteria. To assess income eligibility, the worker requests proof of the applicant’s income and asset information, as well as that of the sponsor, such as tax forms or other financial documents. 
Upon receipt of that documentation, the worker performs the deeming step to determine whether the applicant’s income, when coupled with the sponsor’s, is below the benefit-specific financial eligibility thresholds. Sponsor repayment is also implemented by staff at the administering agency, though after benefits are received by the sponsored noncitizen. Federal law and DHS regulations on affidavits of support define several steps that agencies must follow when pursuing sponsor repayment. Specifically, once benefits have been received by a sponsored noncitizen, the administering agency must contact the sponsor in writing and request repayment of the costs associated with those benefits. The written request must include several elements, such as the name and address of the noncitizen, dates benefits were provided, and amount of the benefits. If, after 45 days, the sponsor has not responded to the written request by either repaying the benefit costs or indicating a willingness to pay, the agency may initiate litigation to recover benefit costs from the sponsor. The number of sponsored noncitizens potentially affected by sponsor deeming is unknown; however, factors such as the restrictions on their eligibility for TANF, Medicaid, SNAP, and SSI, as well as the deeming process itself, likely limit the number affected. Overall, approximately 12.8 million legal noncitizens, including sponsored and nonsponsored, were permanently residing in the United States as of January 1, 2007, according to the most recent DHS estimates available. We estimate that around 4.2 million of those individuals obtained their legal noncitizen status via an executed affidavit of support, whereby the sponsor assumed financial responsibility for the noncitizen. Because demographic data for this population, such as income and length of U.S. 
residency, are unavailable, it is unknown how many of these individuals are eligible for TANF, Medicaid, SNAP, or SSI. The total number of sponsored noncitizens that apply for these benefits is also unknown, in part because most federal benefit agencies do not collect this data. However, during 2007, TANF, Medicaid, and SNAP administering agency staff used DHS’s SAVE system to verify the noncitizen status of approximately 473,000 applicants who were sponsored—a step typically taken when noncitizens apply for benefits. This is approximately 11 percent of the estimated sponsored noncitizen population in the United States as of January 1, 2007, and it is the best proxy available for the number of sponsored noncitizen applicants for these three programs. In addition, approximately 29,000 (0.7 percent) sponsored noncitizens applied for SSI benefits in that year, according to SSA data. Local benefit agency staff we spoke with during our five site visits reported that sponsored noncitizens currently constitute a small proportion of the people they encounter applying for benefits. For example, staff in many of the local offices we visited said that they encounter only a few sponsored noncitizen applicants each month, though staff in other offices said they saw these applicants more frequently. Some staff noted that the number of sponsored noncitizens seeking TANF, Medicaid, or SNAP benefits noticeably dropped, and has remained low, since PRWORA became effective. Specifically, staff cited the PRWORA requirement that sponsored noncitizens live in the United States for 5 years before they become eligible for those benefits as a significant contributor to this change. Likewise, SSA officials we spoke to reported that eligibility restrictions imposed by PRWORA caused a similar drop in the number of sponsored noncitizens pursuing SSI benefits. 
The requirement that most sponsored noncitizens obtain credit for at least 40 quarters of work to be eligible for SSI benefits was frequently cited by staff at regional SSA offices as causing this decrease. Staff at all regional offices reported that they continue to encounter few sponsored noncitizens applying for SSI. When sponsored noncitizens do apply for benefits, staff at most of the local offices we visited during our site visits told us that very few of these applications for TANF, Medicaid, and SNAP progress to the point where local staff deem sponsor income. The perceived low incidence of deeming was also supported by state administering agency officials through our survey, as 69 percent indicated that cases involving sponsor deeming had seldom or never occurred in their states during the past 2 years. Local officials in several offices indicated that applicants often withdraw their applications after they are made aware of the sponsor deeming rules. State and local staff also cited the reluctance or inability of the sponsored noncitizen applicant to obtain sponsor income information as a reason that the application process frequently ends before deeming occurs. Staff reported the following examples of situations: some applicants withdraw their applications because they do not wish to bother their sponsor with a request for income documentation; some withdraw because they are concerned about how their pursuit of benefits will affect their sponsors’ ability to remain in the United States or naturalize, if their sponsors are legal noncitizens themselves; some withdraw after being told their sponsor may be asked to repay the benefits in the future, as specified by federal law; some applicants’ sponsors cannot be located, resulting in denial of the application due to lack of sponsor income information; and some applicants’ sponsors refuse to provide income information, also resulting in denial of the application.
Officials from SSA also reported a low incidence of sponsor deeming during the processing of SSI benefits. For example, officials from all 10 SSA regional offices reported that deeming has occurred either rarely or never since PRWORA became effective. Specifically, because the sponsor deeming policy does not apply to sponsored noncitizens credited with 40 quarters of work, and most sponsored noncitizens are only eligible for SSI if they have satisfied the 40-quarter work eligibility requirement, deeming is inevitably rare. As a result, only sponsored noncitizens who apply for SSI and are exempted from the 40-quarter work eligibility criteria, such as those with military connections, are subject to sponsor deeming. While local staff from our five site visit states reported that sponsor deeming has been applied in a limited number of cases at their offices, the extent to which deeming has affected whether sponsored noncitizens receive benefits or the amount of benefits they receive is unknown. However, selected federal and state data that we were able to obtain provide some insight into the proportion of benefit recipients that are sponsored noncitizens. For instance, less than 0.4 percent of SSI recipients were sponsored noncitizens during the years 2004 through 2007, according to SSA. Similarly, Florida, which has a relatively large noncitizen population, reported that, in December 2008, less than 0.05 percent of TANF, Medicaid, or SNAP recipients were sponsored noncitizens. Utah and Minnesota also have few sponsored noncitizens receiving TANF and SNAP, with their proportions ranging from zero to 0.9 percent of each benefit’s total recipients. Figure 2 shows the progression of sponsored noncitizens through the benefit application and sponsor deeming processes to become recipients. 
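The deeming step within this progression ultimately reduces to a threshold comparison. The following is a minimal sketch under stated assumptions: the function name, inputs, and dollar values are illustrative, and in practice each program applies its own exclusions and allocations to the sponsor's income and assets before the comparison.

```python
# Illustrative sketch only -- not any agency's actual eligibility logic.
# Real deeming applies program-specific exclusions and allocations to the
# sponsor's income before the comparison; this sketch elides those steps.

def income_eligible(applicant_income: float,
                    sponsor_income: float,
                    threshold: float) -> bool:
    """Deem the sponsor's income to the applicant and test the combined
    amount against the benefit-specific financial eligibility threshold."""
    deemed_income = applicant_income + sponsor_income
    return deemed_income < threshold

# Hypothetical monthly figures: an applicant with $500 of income whose
# sponsor earns $600 would fail a $1,000 threshold after deeming.
print(income_eligible(500.0, 600.0, 1000.0))  # prints False
```

The sketch illustrates why deeming can end an application that would otherwise succeed: an applicant whose own income is below the threshold may exceed it once the sponsor's income is counted.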
Although benefit administering agencies have generally established sponsor deeming policies for TANF, Medicaid, SNAP, and SSI, based on federal regulations and federal guidance, inaction by CMS has stalled some states’ implementation of sponsor deeming in Medicaid. SSA disseminated SSI guidance to staff nationwide in 2000 addressing the sponsor deeming provisions under the 1996 legislation. As a result, field office staff nationwide have implemented sponsor deeming in SSI by following procedures and using automated systems established by SSA headquarters. Similarly, USDA and HHS issued guidance to state benefit administering agencies in 2003 on sponsor deeming for SNAP and TANF, respectively. Accordingly, all state agencies administering SNAP reported in our survey that they have established sponsor deeming policies, and administering agencies in all but five states reported having established sponsor deeming policies for TANF. In contrast, CMS has not issued formal guidance regarding sponsor deeming for Medicaid, and CMS officials stated that the agency does not currently have plans to do so. Thus, fewer administering agencies (31) have established policies for Medicaid, and officials from a few states cited the lack of guidance from CMS as a reason for their unwillingness to establish sponsor deeming policies in their Medicaid programs. Officials in one state referred to the issue’s political sensitivity and the high cost of reworking automated eligibility systems to include sponsor deeming as reasons they would not act without clear federal guidance. (See fig. 3.) Although most administering agencies have established sponsor deeming policies, agency officials reported that additional guidance in certain areas would be helpful. Specifically, between 60 and 70 percent of state administering agencies with sponsor deeming policies for TANF, Medicaid, or SNAP expressed some desire for more guidance on various aspects of deeming (see fig. 4). 
For example, about 70 percent reported that clarification on how to handle cases where information on the sponsor’s income is determined to be unobtainable would be moderately, very, or extremely useful. In addition, many reported that additional federal guidance on areas related to the indigence exception to deeming would be useful. Guidance in these areas is particularly important because how administering agencies handle applicants who are unable to obtain sponsor income information can have implications for how agencies apply the indigence exception, as well as whether these applicants are determined to be eligible for benefits. SSI guidance, as well as recently issued SNAP guidance, indicates that when sponsor income information is unobtainable, an indigence exception is to be considered; however, federal TANF guidance does not clearly address this issue. Similarly, about 62 percent of state administering agencies reported that clarification on how to define indigence and who qualifies for the indigence exception would be moderately, very, or extremely useful. Although SSI guidance and recently issued SNAP guidance indicate at what point in the eligibility screening process indigence should be considered, and whether those who qualify for indigence must provide full sponsor income and asset information, federal guidance for TANF again does not clearly address this issue. During our site visits to five states, we found that states and localities sometimes proceed differently, which can directly affect who is determined eligible for benefits. In some cases, staff are directed to consider whether an applicant qualifies for an indigence exception before deeming occurs.
Accordingly, if the agency’s indigence criteria are met, the applicant can qualify for benefits regardless of the applicant’s ability or willingness to provide full sponsor income information and without deeming the sponsor’s income. In these cases, the eligibility worker counts only the actual amount of income or assistance the applicant receives from the sponsor, rather than deeming all or most of the sponsor’s income and assets. One local official stated that eligibility staff in his locality assess an applicant’s eligibility for the indigence exception before deeming and, though staff do not serve many sponsored noncitizens, those that do apply typically qualify for benefits this way. In contrast, the policy in some other localities is to require applicants to provide sponsor income information before considering an exception. For these agencies, applicants who are unable or unwilling to provide sponsor income information are not considered for an indigence exception. One local official expressed concern that this requirement results in applicants whose sponsors have essentially abandoned them being denied benefits. (See fig. 5 for one example of how this process could vary.) This official and other local officials we interviewed, however, said that some sponsored noncitizens who may qualify for benefits through the indigence exception ultimately withdraw their application when they are told that their names and those of their sponsors must be reported to the federal government, as required by law. Administering agency officials reported that additional federal assistance on accessing DHS information needed to determine who is sponsored would also help them implement sponsor deeming policies.
Specifically, state or local officials in each of the five states we visited reported difficulties accessing DHS information needed to determine whether a noncitizen applicant has a sponsor, and 65 percent of agencies administering these benefits nationwide reported that more specific policies on using SAVE in determining sponsorship would be useful. Agencies’ difficulties in using SAVE could leave them vulnerable to fraud or improper payments because SAVE is a key mechanism for verifying eligibility and sponsorship status. DHS officials stated that the agency provided SAVE users with technical assistance focused on using the automated system to obtain sponsorship information when this feature became available in 2005, and it continues to offer SAVE technical assistance through user-directed online tools and instructor-led seminars or webinars upon request. However, 30 to 40 percent of administering agencies for TANF, Medicaid, and SNAP reported that they had not received any technical assistance or communication from DHS on determining if a noncitizen applicant has a sponsor. As a result, benefit agencies report that staff commonly use SAVE to verify noncitizen information, but not all staff are aware of how to use SAVE to obtain sponsorship information. As an initial step, staff in all local offices we visited use DHS’s automated SAVE system to verify the applicant’s basic noncitizen information, such as name and admission code. Some local officials said SAVE provides this information quickly and easily. However, not all local offices we visited were aware that, by taking an additional step, the automated SAVE system can provide information on whether that person is sponsored. Specifically, the system can provide the sponsor’s name and address to administering agencies, usually within a few days. 
Instead, staff in some local offices used methods to verify sponsorship that either took longer or were less reliable, sometimes because they were unaware of the option to use SAVE. For example, some local officials said eligibility staff manually submit paper request forms to DHS, which usually results in a response within a few weeks but may take several months. In other local offices, staff use the noncitizen’s admission code provided by SAVE to determine whether the applicant has a sponsor. However, many different codes indicate that a noncitizen is sponsored, and DHS has not provided administering agencies with an official list specifying which codes indicate sponsorship. Thus, eligibility staff sometimes rely on lists of codes developed in their own offices, which may not be fully accurate. Some state and local officials said that maintaining such lists is challenging because admission codes are numerous and can change. Although federal law states that benefit granting agencies must administratively pursue sponsor repayment, DHS regulations on affidavits of support, as well as some federal benefit agency guidance, suggest that administrative pursuit of sponsor repayment is optional. Specifically, the law states that benefit administering agencies that have granted a means-tested public benefit to a sponsored noncitizen “shall request reimbursement by the sponsor.” The law also states that if the sponsor does not respond to this request within 45 days, “an action may be brought against the sponsor” in court to enforce the affidavit of support. The DHS regulations on affidavits of support, however, simply describe the process a benefit agency must go through if the agency “wants to seek reimbursement” from a sponsor. In addition, the Federal Register notice accompanying the issuance of the DHS regulations states that “the agency may seek reimbursement” if a sponsored noncitizen receives a means-tested benefit.
When asked about this apparent discrepancy between the language in the statute and that in the regulations and Federal Register notice, a USCIS Associate Counsel explained that the regulations are intended to describe the process agencies must use when they pursue sponsor repayment rather than address whether they are required to do so. As stated in the Federal Register notice, “the request for reimbursement is a prerequisite to suit,” but the act “does not require the agency to sue.” Accordingly, DHS concluded that a request for reimbursement does not have to be made if the agency has no intention to sue. Agencies generally have enforcement discretion in carrying out laws, and the USCIS Associate Counsel noted that the decision of whether to pursue sponsor repayment involves an exercise of the federal benefit agencies’ discretion. He added that federal benefit agencies may require their benefit administering agencies to pursue sponsor repayment by issuing their own pertinent regulations. While the federal benefit agencies have not issued related regulations, some have issued federal guidance that also suggests pursuit of sponsor repayment is optional. The importance of this issue was noted by state administering agency officials. Specifically, over two-thirds of the TANF, Medicaid, and SNAP state agency officials responding to our survey reported that clarification on whether it is mandatory or optional for agencies to administratively pursue sponsor repayment would be moderately, very, or extremely useful. A few benefit administering agency officials we spoke with noted that their state policies indicate locals “may” pursue sponsor repayment and, in those states, local staff are not pursuing repayment. Nationwide, most state benefit administering agencies reported that they have not established policies on sponsor repayment for TANF, Medicaid, or SNAP.
In contrast, SSA has established a sponsor repayment policy for SSI benefits, which applies nationwide. (See fig. 6.) However, even in states with sponsor repayment policies formally in place, state and local staff do not always understand the unique characteristics of the sponsor repayment provisions. For example, a few staff we spoke to during one of our site visits thought that the 1996 sponsor repayment provisions apply only when sponsored noncitizens receive benefits erroneously or in greater amounts than they are eligible for, as is the case with traditional benefit recovery provisions. A few others believed that the provisions require the federal government, rather than state or local administering agencies, to pursue sponsor repayment of benefits. In addition, some of the state and local officials we interviewed in one state were unaware of the sponsor repayment provisions and surprised to see that they were included in their state’s policy manuals. Officials from only two states reported in our survey that they have pursued sponsor repayment of TANF, Medicaid, or SNAP benefits. Similarly, SSA officials told us that neither its regional nor field offices have pursued sponsor repayment of SSI benefits. With the exception of SSA, federal benefit agency officials we interviewed did not know if their administering agencies had pursued sponsor repayment, as neither federal law nor regulations require that these efforts be monitored. In addition, neither federal law nor federal regulations impose penalties on administering agencies that do not pursue sponsor repayment. Of the two states that reported pursuing sponsor repayment, neither was doing so for all cases. Specifically, Connecticut and New York state officials reported that some of their local offices have pursued sponsor repayment of TANF, Medicaid, or SNAP benefits received by sponsored noncitizens.
In addition, these states have pursued repayment in different ways, and the full extent of their implementation is unknown. Connecticut’s sponsor repayment policy requires that after local staff grant a sponsored noncitizen TANF, Medicaid, or SNAP benefits, they refer the case to local resource recovery staff. According to officials, the resource recovery staff are expected to initiate an investigation, which involves obtaining a copy of the affidavit of support and assessing the amount of benefits paid to the noncitizen, and then send a letter to the sponsor requesting repayment. If the sponsor does not respond to this request, local staff are to submit the case to the Connecticut Attorney General. However, while Connecticut officials consider this to be the state’s sponsor repayment policy, it is still in draft, and they report that not all offices have pursued repayment. In addition, sponsor repayment in Connecticut has been on hold statewide since March 2007, when several legal services organizations questioned the legality of this policy in a complaint letter. Connecticut state officials did not know how many cases were pursued prior to that time or how many benefits were repaid by sponsors. In New York, a state official reported that sponsor repayment is to be pursued when noncitizens receive TANF or SNAP benefits after qualifying for the indigence exception to deeming. In these cases, local staff are to send a request for repayment to the sponsor. The official noted that New York’s policies for TANF and SNAP indicate that counties should pursue sponsor repayment of benefits paid to sponsored noncitizens; however, individual counties have some discretion in determining whether to pursue sponsor repayment in each specific case. He indicated that repayment has been pursued administratively by several counties, but he did not know for how many cases. 
During our site visits, some state and local TANF, Medicaid, and SNAP officials suggested that pursuing sponsor repayment has clear costs and unclear financial benefits. Specifically, if a sponsored noncitizen qualifies to receive benefits after sponsor deeming occurs, it is because both the noncitizen and the sponsor are low-income. Agencies that pursue repayment of benefits from those sponsors may, therefore, expend more in administrative and potential court costs than they are able to recover from the sponsors. Because of these high relative costs, several officials we spoke to indicated that sponsor repayment, as it is currently defined in federal law, does not make sense for administering agencies to pursue. Some state and local TANF, Medicaid, and SNAP officials told us that the staff who pursue sponsor repayment sometimes have competing priorities that also discourage the pursuit of sponsors for repayment. Local recovery staff we spoke to in one state noted that their office is significantly understaffed for the investigations it is required to perform for cases of benefit fraud and overpayment. While benefit agencies have access to DHS information on sponsors, including their names and addresses, the local recovery staff we spoke to said that pursuing sponsor repayment is particularly labor intensive because there is no national- or state-established infrastructure to efficiently and effectively track down and bill sponsors for repayment. Other state officials we spoke to reported that administering agencies would like to pursue sponsor repayment through the courts, but the state or county attorneys who need to litigate these cases have other higher priority cases to pursue. Benefit administering agencies also reported that additional federal guidance on sponsor repayment is needed to assist implementation.
For example, approximately two-thirds of the TANF, Medicaid, and SNAP state benefit administering agency officials responding to our survey reported that specific federal guidance on how to pursue sponsor repayment administratively, and when to pursue sponsor repayment administratively or in the courts, would be moderately, very, or extremely useful. While HHS guidance for TANF administering agencies notes that states may want to consider the sponsor’s particular circumstances, such as financial status, and other feasibility factors in determining whether to pursue sponsor repayment of TANF benefits, other federal benefit agencies’ guidance does not address this issue. In addition, during one of our site visits, state administering agency officials indicated that they would benefit from additional federal guidance on pursuing sponsor repayment for cases involving a battery exception to deeming. In these cases, the batterer may be the sponsor. However, DHS regulations require that agencies pursuing sponsor repayment send a letter to the sponsor containing several elements, including the noncitizen benefit recipient’s address, which in these cases could reveal the recipient’s location to the batterer. The PRWORA and IIRIRA changes to sponsor deeming and repayment in TANF, Medicaid, SNAP, and SSI, coupled with simultaneous changes to noncitizen eligibility for these benefits, were intended in part to “assure that aliens be self-reliant in accordance with national immigration policy.” While the laws’ restrictions on noncitizen benefit eligibility directly work toward this goal, sponsor deeming and repayment efforts have not been as effective, in part because benefit administering agencies do not have the information and guidance they need to implement these provisions as intended. Specifically, agencies lack clear federal guidance on implementing deeming in Medicaid and also struggle to obtain federal information on a noncitizen’s sponsorship status from DHS.
As a consequence, certain sponsored noncitizens may receive means-tested public benefits based on their income and assets alone, even though they have sponsors who signed legally enforceable affidavits agreeing to support them when they became permanent residents in this country. In addition, agencies that are unaware of the most efficient and reliable method of accessing a noncitizen’s sponsorship status from DHS are vulnerable to benefit fraud and improper payments. Medicaid and TANF agencies’ efforts to implement sponsor deeming have also been affected by a lack of clear federal guidance on applying the indigence exception to deeming. Because some low-income sponsored noncitizens have sponsors who choose not to provide them financial assistance, or are unable to, inconsistencies in implementation of the law’s indigence exception to sponsor deeming may cause these noncitizens unintended harm if they are prevented from obtaining benefits. In addition, while very few administering agencies currently are pursuing sponsor repayment, agencies seem to be considering reasonable factors when deciding whether to pursue repayment. Specifically, because the sponsored noncitizens who receive means-tested benefits after deeming occurs are those that have low-income sponsors, full implementation of repayment may yield less than would be expended to achieve this outcome. While the costs of pursuing repayment may be a significant deterrent to agencies in most cases, there may be some cases in which the monetary benefits of pursuing repayment are substantial enough to outweigh the costs. For example, when sponsored noncitizens receive benefits after qualifying for the indigence exception to sponsor deeming because their sponsors have sufficient financial means but choose not to support them, agencies may be able to recover the costs of benefits from these sponsors. 
Although current law, federal regulations, and federal guidance appear inconsistent as to whether pursuit of sponsor repayment is required, nothing in them bars an administering agency from pursuing sponsor repayment. Because of the potentially high relative costs associated with pursuing sponsor repayment, efforts to improve benefit administering agencies’ implementation of sponsor deeming may yield greater results. While the limited available data suggest that sponsor deeming currently affects relatively few people, ensuring that agencies have the federal guidance needed to administer these provisions is of increasing importance in the current economic environment. Specifically, it is possible that the number of sponsored noncitizens, like citizens, applying for and receiving these benefits will increase in the near term. If such an increase occurs at the same time that federal and state budgets are stretched, it will be even more important for the government to ensure that the sponsor deeming provisions, in particular, are being implemented in the way they were intended. If administering agencies are better able to determine which benefit applicants are sponsored and appropriately deem their sponsors’ income, or grant related exceptions, agencies will be less likely to either issue improper payments or unintentionally harm noncitizens who have been abandoned by their sponsors. To help ensure sponsor deeming is implemented for Medicaid, we recommend that the Administrator of CMS issue guidance to help administering agencies implement the law in this area. To improve consistency of benefit administering agencies’ application of the indigence exception to sponsor deeming, we recommend that the Secretary of Health and Human Services clarify in the guidance for TANF a suggested process for determining an applicant’s eligibility for that exception.
To help benefit administering agencies access information on sponsored noncitizens, we recommend that the Secretary of Homeland Security take the following two actions: Improve information on sponsorship status of noncitizens provided through the automated SAVE system. For example, a list of class-of-admission codes that indicate sponsored noncitizens could be added to the SAVE technical assistance tools or effectively distributed to SAVE users. Provide guidance to SAVE users that improves their understanding of how to request sponsor information through the automated SAVE system rather than through manual submission of paper request forms. We provided a draft of this report to HHS, USDA, SSA, and DHS for review and comment. HHS and DHS provided written comments, which appear in appendixes II and III, respectively, of the report. SSA provided no comments. In oral comments, HHS concurred with our recommendation that the Administrator of CMS issue guidance on sponsor deeming for Medicaid. However, in its written comments, HHS disagreed with our recommendation that the Secretary of HHS clarify in the TANF guidance the process for determining indigence exceptions to sponsor deeming. HHS stated that, unless there is an express law to the contrary, states have flexibility in determining TANF eligibility procedures and also asserted that it already addresses this issue in its guidance. We agree with HHS’s comments, in part, and revised the recommendation to add the word “suggested” before “process” to clarify that states have flexibility in establishing their own TANF processes for determining eligibility for the indigence exception. However, we continue to believe that additional federal guidance is needed, as over 60 percent of state administering agencies reported through our survey that federal clarification on who qualifies for the indigence exception would be useful.
In addition, procedures for determining indigence exceptions varied in the states we visited, including whether states require applicants to provide full sponsor income and asset information before assessing their eligibility for this exception. While a federal HHS official previously stated that the law suggests full sponsor income and asset information does not need to be provided to determine an applicant’s eligibility for the indigence exception, the TANF guidance does not clearly state this. We believe adding this clarification to the TANF guidance would help address the inconsistencies among states and prevent unintended harm to noncitizens who lack the support of their sponsors. DHS concurred with both our recommendations to help benefit administering agencies access information on sponsored noncitizens and indicated that it plans to take actions to address these in the coming months. HHS, USDA, and DHS also provided technical comments, which we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to relevant congressional committees; the Secretaries of Agriculture, Health and Human Services, and Homeland Security; the Administrator of the Centers for Medicare and Medicaid Services; the Commissioner of SSA; and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
To obtain information on agency implementation of sponsor deeming and repayment, as well as the population affected, we reviewed available federal data on noncitizens, as well as available federal and state data on benefit applicants and recipients, to develop estimates of the sponsored noncitizen population and noncitizen applicants and recipients; conducted a nationwide survey of states regarding Temporary Assistance for Needy Families (TANF), Medicaid, and the Supplemental Nutrition Assistance Program (SNAP); visited five states and selected localities within each state and interviewed officials administering TANF, Medicaid, and SNAP; interviewed officials from all 10 regional offices of the Social Security Administration (SSA) regarding relevant Supplemental Security Income (SSI) policies, implementation processes and challenges, and the frequency with which officials had encountered related cases; interviewed officials from relevant federal agencies and reviewed pertinent federal laws, regulations, and agency guidance; and interviewed researchers knowledgeable in immigrant and public benefit issues. We conducted this performance audit from March 2008 to May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In determining the size of the sponsored noncitizen population, we reviewed potential data sources from the Department of Homeland Security (DHS) and the U.S. Census Bureau. We concluded that no single federal data source contained the relevant information necessary to generate a precise measure of this population.
However, by combining data from several DHS sources, we were able to produce an estimate of the number of sponsored noncitizens in the United States as of January 1, 2007. Beginning with the overall noncitizen population, the DHS Office of Immigration Statistics (OIS) reports annually on the estimated number of legal permanent residents (LPR) residing in the United States. Recently, OIS reported that approximately 12.8 million LPRs were residing in the United States as of January 1, 2007. Upon our request, OIS officials calculated that approximately 6.7 million of those LPRs entered the United States after the legally enforceable sponsor affidavit of support became effective in December 1997. To estimate how many of the 6.7 million LPRs were sponsored by a family member, we used two additional sources of DHS data. First, we requested from DHS a list of all admission codes denoting sponsorship. While DHS does not have a list of all admission codes ever issued for this group, we worked with subject matter experts in DHS to compile a list of the codes that applied to noncitizens sponsored by a family member during 2006 and 2007. We determined that the codes from these 2 years would be sufficient for our analysis, as the DHS experts attested that the resulting list would include most of the codes applied to the sponsored noncitizens relevant to our analysis. OIS then matched this list of codes with its records of noncitizens who obtained LPR status in 2006 and 2007 to estimate the percentage that were sponsored. OIS estimated that 62.5 percent of LPRs obtaining this status in 2006 and 2007 were sponsored. By applying OIS’s sponsored LPR percentage of 62.5 percent to its estimate of 6.7 million potentially sponsored LPRs, we estimate that 4.2 million sponsored noncitizens were residing in the United States as of January 1, 2007.
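The arithmetic behind this estimate can be checked directly. The sketch below uses only the OIS-derived inputs described above; the variable names and the rounding convention are our own:

```python
# Sketch of the sponsored-noncitizen estimate described above, using the
# OIS-derived inputs from the text. Only the two inputs are sourced; the
# rounding to one decimal place is our own convention.

lprs_since_affidavit = 6_700_000  # LPRs entering after December 1997, when the
                                  # enforceable affidavit of support took effect
sponsored_share = 0.625           # share of 2006-2007 new LPRs who were sponsored

sponsored_estimate = lprs_since_affidavit * sponsored_share
print(f"{sponsored_estimate / 1_000_000:.1f} million")  # prints "4.2 million"
```

The unrounded product is 4,187,500, which the report rounds to 4.2 million.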
The data used in this estimate are limited by assumptions made about emigration, mortality, and naturalization, which are discussed in OIS’s report, “Estimates of the Legal Permanent Resident Population in 2007.” The estimate is also limited by our assumption that the sponsored LPR percentage for new LPRs in 2006 and 2007 reasonably reflects that of past years. Despite these limitations, we determined, in collaboration with an OIS official knowledgeable about the data, that our methodology and the data used are sufficiently reliable for establishing a rough estimate of the size of the sponsored noncitizen population. In determining the number of sponsored noncitizens applying for TANF, Medicaid, and SNAP benefits during 2007, we reviewed federal data from the U.S. Department of Health and Human Services (HHS) and the U.S. Department of Agriculture (USDA) and found no source that contained information specific to sponsored noncitizen applicants. We instead determined, and United States Citizenship and Immigration Services (USCIS) officials confirmed, that DHS’s Systematic Alien Verification for Entitlements (SAVE) system database is the best available proxy for noncitizen applicants. Benefit administering agency staff from all states typically access SAVE to obtain and verify the immigration status of noncitizen TANF, Medicaid, and SNAP applicants. Each time that a benefit agency staff member accesses SAVE to obtain noncitizen information, the system captures several pieces of data, including the date of the query and the admission code of the applicant. Therefore, to estimate the number of sponsored noncitizen applicants, USCIS examined data on SAVE usage by benefit agency staff in 2007. Specifically, USCIS matched the SAVE database with the list of admission codes applied to noncitizens sponsored by a family member during 2006 and 2007.
Through this analysis, USCIS estimated that benefit agency staff obtained information from SAVE on approximately 473,000 sponsored noncitizens in 2007. As noted previously, because SAVE is typically accessed by benefit agency staff when they are assessing noncitizen applicants, 473,000 is our proxy for sponsored noncitizen TANF, Medicaid, and SNAP applicants. Limitations on this estimate include the possibility that SAVE is not used by benefit agency staff 100 percent of the time, which would make this an underestimate. Conversely, benefit agency staff may access SAVE multiple times for the same applicant, which would make this an overestimate. While we are not able to estimate how often either of these situations occurs, we determined that the SAVE data are sufficiently reliable to use as a proxy for sponsored noncitizen applicants of TANF, Medicaid, and SNAP nationwide. To provide estimates of the number of noncitizens, both sponsored and nonsponsored, receiving TANF, Medicaid, and SNAP benefits, we relied on data we were able to obtain from the limited number of states that maintain relevant recipient information for those benefits. Florida officials, after analyzing state benefit data for December 2008, reported that approximately 5.20, 8.12, and 9.50 percent of TANF, Medicaid, and SNAP benefit recipients, respectively, were noncitizens in that month. Further, they reported that the corresponding percentages for sponsored noncitizens were approximately 0.01, 0.04, and 0.03 percent. Utah and Minnesota officials were also able to provide us with information on TANF and SNAP recipients in their states, based on analysis of state benefit data for fiscal year 2008. Utah officials reported that approximately 0.33 percent and 0.87 percent, respectively, of TANF and SNAP recipients were sponsored noncitizens. Minnesota reported that no sponsored noncitizen received TANF or SNAP benefits in that state during fiscal year 2008.
Based on our conversations with officials in each state that provided the data, we determined that the data are sufficiently reliable for the purposes of this report. SSA is the only federal benefit agency to collect and maintain applicant and recipient data on sponsored noncitizens. To estimate the number of sponsored noncitizen SSI applicants and recipients during 2007, SSA officials analyzed agency data on SSI applicants for that year. Specifically, officials analyzed citizenship and sponsorship codes for each SSI applicant in order to identify noncitizen applicants. Based on this analysis, SSA officials reported that approximately 5.5 percent of SSI applicants were noncitizens in 2007, and approximately 1.1 percent were sponsored noncitizens. Similar analysis on recipient data found that approximately 9 percent of SSI recipients were noncitizens in that year, and approximately 0.3 percent were sponsored noncitizens. However, these are overestimates of our target group of sponsored noncitizens because they include all sponsored noncitizens, both those who entered the United States before the legally enforceable affidavit of support was required and those who entered after. Because SSA does not maintain data on each noncitizen applicant’s or recipient’s date of entry into the United States, we were unable to isolate our target group of sponsored noncitizens. Despite this limitation, and based on our conversations with the SSA official who provided the data, we determined that the data are sufficiently reliable for the purposes of this report. To better understand state implementation of sponsor deeming and repayment for TANF, Medicaid, and SNAP, we conducted a Web-based survey of state administrators of each of these benefits in all 50 states and the District of Columbia. The survey was conducted between August and October 2008 with 100 percent of state administrators responding (a total of 153). 
The survey included questions about the extent to which states have encountered benefit cases involving sponsor deeming, state policies on sponsor deeming and repayment, state efforts to pursue repayment administratively and through the courts, the availability of relevant data on sponsored noncitizen applicants and recipients, implementation challenges, and areas of assistance from federal agencies that have been or may be useful. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pretesting draft instruments and using a Web-based administration system. Specifically, during survey development, we pretested draft instruments with TANF, Medicaid, and SNAP administrators from six states (Arizona, Connecticut, Florida, Mississippi, South Dakota, and Texas) between June and July 2008. We selected the pretest states to provide variation in the proportion of state residents that are LPRs, as well as geographic location. In the pretests, we were generally interested in the clarity, precision, and objectivity of the questions, as well as the flow and layout of the survey. For example, we wanted to ensure that definitions used in the surveys were clear and known to the respondents, that categories provided in closed-ended questions were complete and exclusive, and that the ordering of survey sections and the questions within each section was appropriate. We revised the final survey based on pretest results. Another step we took to minimize nonsampling errors was using a Web-based survey.
By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for, and the errors associated with, a manual data entry process. To further minimize errors, programs used to analyze the survey data and make estimations were independently verified to ensure the accuracy of this work. While we did not fully validate specific information that states reported through our survey, we took several steps to ensure that the information was sufficiently reliable for the purposes of this report. For example, we reviewed the responses and identified those that required further clarification and, subsequently, conducted follow-up interviews with those respondents to ensure the information they provided was reasonable and reliable. In our review of the data, we also identified and logically fixed skip pattern errors (questions that respondents should have skipped but did not). On the basis of these checks, we believe our survey data are sufficiently reliable for the purposes of our work. To better understand administering agency implementation of sponsor deeming and repayment in TANF, Medicaid, and SNAP, we conducted site visits to five states, and selected localities in those states, between May and September 2008. The states and localities visited included California—Los Angeles and Orange Counties; Connecticut—Cities of Bridgeport and Stamford; Florida—Broward and Miami-Dade Counties; Georgia—DeKalb, Fulton, and Gwinnett Counties; and Minnesota—Hennepin and Ramsey Counties. The five states were selected primarily because they had significant populations of LPRs and low-income foreign-born residents, and they provided geographic variation. In addition, the states selected varied in the proportion of their cash assistance recipients that were noncitizens and in their provision of state-funded assistance programs for noncitizens.
The localities were selected because they were within a region of the relevant state that had experienced significant growth in its LPR population in recent years or had a historically large base LPR population in the state. We visited two to five local benefit agency offices in each state. We cannot generalize our findings beyond the states and localities we visited. During the site visits, we interviewed state and local administering agency officials. Through interviews with state officials, we collected information on states’ sponsor deeming and repayment policies, possible variation in local office implementation, challenges to implementation, and areas in which officials thought federal assistance has been or would be useful. Through interviews with local officials, we gathered information on the processes staff use to determine eligibility for noncitizens, the frequency with which they have encountered cases involving sponsored noncitizens and sponsor deeming, implementation of sponsor repayment policies, implementation challenges, and areas in which staff indicated they would like additional assistance. During some site visits, we observed the systems and tools eligibility staff use to process an application. During our interviews with both state and local officials, we also inquired about the availability of data on sponsored noncitizen benefit applicants and recipients. Heather McCallum Hahn, Assistant Director; Rachel Frisk, Analyst-in-Charge; Theresa Lo; David Perkins; Heather Whitehead; Jean McSween; Cathy Hurley; Kirsten Lauber; Doreen Feldman; Alexander Galuten; Susan Bernstein; and Mimi Nguyen also made significant contributions to this report.

Federal law restricts noncitizens’ access to public benefits, including Temporary Assistance for Needy Families (TANF), Medicaid, the Supplemental Nutrition Assistance Program (SNAP), and Supplemental Security Income (SSI).
Further, when noncitizens who legally reside in this country through the sponsorship of a family member apply for these benefits, they are subject to sponsor deeming, which requires benefit agencies to combine noncitizens’ incomes with those of their sponsors to determine eligibility. Sponsors are also financially liable for benefits paid to the noncitizen, and benefit agencies must seek repayment for these costs. GAO was asked to analyze (1) what is known about the size of the noncitizen population potentially affected by the sponsor deeming requirements for TANF, Medicaid, SNAP, and SSI; (2) the extent to which agencies have implemented sponsor deeming; and (3) the extent to which agencies have implemented sponsor repayment. To address these objectives, GAO analyzed federal data, surveyed states, and interviewed federal, state, and local officials. The number of sponsored noncitizens potentially affected by sponsor deeming is unknown; however, federal restrictions on their eligibility for TANF, Medicaid, SNAP, and SSI, as well as other factors, likely limit the number affected. The most recent data available suggest that 11 percent (473,000) of sponsored noncitizens applied for TANF, Medicaid, or SNAP during the course of 2007, and less than 1 percent (29,000) applied for SSI. In addition to federal restrictions, benefit agency officials reported that applicants’ reluctance or inability to obtain sponsor income information further reduces instances of deeming. Nationwide, most benefit administering agencies have established sponsor deeming policies for TANF, SNAP, and SSI. However, agencies in 20 states have not done so for Medicaid, due in part to the lack of federal guidance for Medicaid on this requirement. Yet, even among administering agencies that have established policies, many expressed the desire for more federal guidance on various aspects of deeming.
For example, over 60 percent of state officials reported that additional clarification on applying an exception to deeming when noncitizen applicants are indigent would be useful. Local officials also reported difficulties accessing information from the Department of Homeland Security needed to determine whether an applicant is sponsored, an essential part of the deeming process. Few agencies have taken steps to implement sponsor repayment of TANF, Medicaid, SNAP, and SSI, due in part to inconsistent federal guidance. While the law requires that agencies administratively pursue repayment, federal regulations and guidance suggest it is optional. In total, only two states have pursued sponsor repayment. Benefit agency officials reported that several factors discourage pursuit of repayment. Specifically, they reported that the process involves high relative costs, since noncitizens who receive benefits after deeming qualify only because both they and their sponsors have very low incomes. Officials also reported that local staff who pursue repayment for these benefits sometimes have competing priorities.
Iraq has had a long history of displacement due to wars and the policies of the Saddam Hussein regime. That regime instituted “Arabization” policies to force many non-Arabs out of Kirkuk and the surrounding areas and replace them with Arab citizens to strengthen the regime’s political control over the areas’ oil fields and fertile lands. Displacement occurred during the Iran-Iraq war in the 1980s; the campaign against the Kurds, which intensified after the war in 1988; the draining of the marshes in southern Iraq during the war and again after the first Gulf War in 1991; and the 2003 fall of the Saddam Hussein regime. UNHCR reported in December 2009 that an estimated 2.76 million individuals were displaced in Iraq, 1.2 million of whom had been displaced prior to 2006. The latest wave of large-scale displacement occurred after the February 2006 bombing of the Al-Askari Mosque in Samarra, which triggered a rise in sectarian violence. According to State and UN reports, insurgents, death squads, militias, and terrorists increased their attacks against civilians in 2006. According to UNHCR and IOM, there was a sharp increase in the number of Iraqis abandoning their homes for other locations in Iraq and abroad as a result of the sectarian intimidation and violence that erupted during this period. IOM reported that the majority of the Iraqi displacement occurred in 2006 and 2007. According to IOM, as of September 2008, about 90 percent of the post-2006 IDPs in Iraq originated from Baghdad, Diyala, and Ninewa governorates (see fig. 1). According to IOM, 4 years after the Al-Askari bombing, displaced families are returning and new displacement is rare; however, the number of those displaced who had returned (returnees) remains well below the estimated number of those who remain displaced. As of the end of 2009, UNHCR estimated that of those who were displaced before and after the Al-Askari bombing, 745,630 IDPs and 433,696 refugees had returned.
IOM reported in February 2010 that, of those who were displaced after the 2006 Al-Askari bombing, IOM returnee field monitors had identified an estimated 374,166 returnees. Additionally, the number of returnees varies by governorate, with Baghdad experiencing the largest share of IDP and refugee returns, according to UNHCR. The majority of those who initially returned were IDPs rather than refugees, which is a pattern that has been seen in other displacement situations worldwide, according to IOM and UNHCR officials. IOM reported in February 2010 that its assessments of an estimated 1.3 million IDPs identified by its field monitors showed that 49 percent of all post-Al-Askari bombing IDPs want to return to their places of origin, 29 percent want to remain and integrate into their current places of displacement, 19 percent want to resettle in a third location, and 3 percent are waiting to make a decision. According to UNHCR officials, displaced Iraqis tend to be educated and come from urban, middle-class backgrounds, in sharp contrast to displaced communities in other nations. UNHCR also reported that the displaced Iraqi population comprises Sunnis, Shias, Christians, and other groups that were forced to relocate to areas where they constitute the dominant groups. According to IOM, 58 percent of the 1.3 million IDPs that it had assessed reported being Shia Muslim and 33 percent reported being Sunni Muslim, as of February 2010 (see fig. 2); however, religious affiliations and ethnicity varied by governorate. According to UNHCR, 21 percent of the Iraqi refugees who were actively registered in neighboring countries at the end of 2008 identified themselves as Shia Muslims, and 56 percent identified themselves as Sunni Muslims (see fig. 3).
The UN’s Guiding Principles on Internal Displacement defines an internally displaced person as one who has been forced or obliged to leave his or her home as a result of armed conflict, generalized violence, violations of human rights, or disaster, but has not crossed an international border. A refugee, as defined by the 1951 UN Convention Relating to the Status of Refugees and its 1967 protocol, is a person who “owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group, or political opinion, is outside the country of his nationality, and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country.” According to UNHCR’s Handbook for Repatriation and Reintegration Activities, reintegration is a process that should result in the disappearance of differences in legal rights and duties between returnees and their compatriots and the equal access of returnees to services, productive assets, and opportunities. UNHCR’s Handbook also states that such a process assumes that refugees return to societies that are more or less stable, and, when this is not the case, returnees and communities in areas of return should benefit equally from improved access to productive assets and social services. According to UNHCR, voluntary repatriation and reintegration is the preferred durable solution for refugees. Iraqi and U.S. government entities, international organizations, and NGOs play significant roles in addressing Iraqi displacement in Iraq and the region. For information on the key responsible entities and their respective roles, see appendix II. Problems in securing a safe environment, property and shelter, income, essential services, and government capacity and commitment may impede large numbers of returns and the reintegration of displaced Iraqis, according to U.S. government, UNHCR, and IOM officials.
UNHCR considers that the basic conditions necessary to encourage and sustain large-scale returns to Iraq have not been established. UNHCR had predicted large-scale returns for 2009 after security conditions improved in the latter half of 2007 and in 2008, but these returns did not materialize, according to U.S. government, UNHCR, and IOM officials. UN, IOM, and U.S. government officials agree that the decision to return and the ability to reintegrate involve a complex set of factors that may vary by location and individual circumstance. Moreover, according to the UN, IOM, and NGOs, many of these factors also negatively affect vulnerable Iraqis in the communities that host IDPs and Iraqis who did not have the means to flee the conflict or the ensuing economic hardships. Although the overall security situation in Iraq has improved since 2006, the actual and perceived threat across governorates and neighborhoods continues to impede Iraqi returns and reintegration, according to U.S. government, UNHCR, and IOM officials. According to the UN, voluntary return is the preferred solution, but Iraqis should not be encouraged to return until the security situation allows for large-scale return and sufficient monitoring of returns. According to DOD, overall violence in Iraq, after peaking in 2007, remains at its lowest level in 5 years. However, the level and nature of violence have varied by governorate. DOD reported that from December 2009 to February 2010, about 73 percent of the attacks occurred in 4 of the 18 governorates—Baghdad, Diyala, Ninewa, and Salah al-Din. The first three of these governorates account for 89 percent of the displacement occurring after the February 2006 Samarra Al-Askari Mosque bombing, according to the UN. In contrast, the Kurdistan Region, with its relatively homogeneous population and the presence of the Kurdish security forces, remained relatively safe and stable, according to DOD.
Many displaced Iraqis may be afraid to return for fear of violent reprisals from militants and members of opposing sects, according to USAID and UNHCR officials. IOM reported in 2008 that returnees were threatened, shot at, or killed after returning home. An MODM official reported that one of the initial families that had returned to a Baghdad neighborhood was killed as a warning to others not to return. UNHCR and IOM officials stated that some displaced Iraqis, particularly those from targeted minority groups, have no plans to return out of fear of persecution. According to the UN, although a decrease in violence in Iraq has been observed, grave and systematic human rights violations persist and remain largely unreported. The UN also reported that violence against professionals, women, and members of minority communities occurs often and is rarely punished. Moreover, many displaced Iraqis and returnees have had difficulties in accessing services, including those provided by humanitarian organizations, because of obstacles such as curfews, checkpoints, and areas affected by intense fighting, according to UNHCR, IOM, other UN agencies, and NGOs. In addition, according to UNHCR, the precarious security situation is requiring UNHCR to increase investments in the security of staff and may continue to limit UNHCR’s mobility inside Iraq (see fig. 4). The UN, UNHCR, IOM, and the International Committee of the Red Cross (ICRC) cautioned that while access may be improving overall, the security situation could deteriorate again, which could limit their access to the population. Problems in securing property restitution or compensation and shelter have made it difficult for displaced Iraqis to return and reintegrate or integrate elsewhere in Iraq, according to UNHCR and IOM officials.
According to a 2009 United States Institute of Peace (USIP) report, the lack of policies addressing displacement-related property issues is a major obstacle to returns and may prolong instability, hinder reconciliation, and nurture grievances along ethnic or sectarian lines. In November 2009, IOM reported that about one-third of surveyed returnees found their homes in bad condition. In February 2009, IOM reported that 43 percent of the post-2006 Samarra bombing IDPs surveyed did not have access to their homes, primarily because the property was occupied or destroyed (see fig. 5), and that 38 percent did not know the status of their property, often because they could not safely access it. According to the 2009 USIP report and IOM, hundreds of thousands of displaced families are estimated to have homes that are occupied or used by strangers, such as militants, squatters, other displaced Iraqis, or, in rare cases, Iraqi Army or other government officials, sometimes resulting in multiple scenarios of competing claims. Many displaced Iraqis have also lost personal property, business stock, usage rights for farmland, and farming equipment, according to the report. Moreover, a number of returnees with leases to apartments have had difficulties in reclaiming their accommodations because, in some cases, landlords took advantage of their tenants’ absence to re-lease the properties at higher rents, according to the 2009 USIP report. Further complicating property restitution and compensation are the Iraqi government’s policies that distinguish between Iraqis who were displaced before and after the U.S.-led invasion in March 2003. The implementation of these policies has yet to be proven effective for either group. For pre-March 2003 cases, the Commission for the Resolution of Real Property Disputes was established in 2006 to address property issues resulting from the Ba’athist regime’s policies of forced displacement, according to the 2009 USIP report.
According to the report, the commission’s “quasijudicial” system is not well adapted to the nature and number of cases and thus is cumbersome and prone to delays. As of January 2009, the commission had decided about 67,000 of the approximately 150,000 cases filed since March 2004. However, due to appeals and re-reviews, only about 30,000 decisions were deemed final and enforceable, and compensation was paid in only about 1,000 cases. Moreover, USIP reported that data are not available regarding the number of claimants—with decisions in their favor—able to reoccupy their houses or land. An Iraqi government official stated that many IDPs typically require more assistance than what the government provides to replace lost properties and rebuild or repair damaged homes. For post-March 2003 cases, the Iraqi government initially deemed that property violations were the fault of terrorists and criminals and thus were a law enforcement problem that could be resolved in the courts, if needed. According to the 2009 USIP report, the existing legal framework may have been inadequate to fairly resolve complex displacement cases and to effectively handle the potentially large caseload. In 2008, according to the U.S. government and IOM, the Iraqi government recognized the need to further address property issues and thus initiated changes to its policies and efforts. The property restitution and compensation problem is further exacerbated by the reported lack of adequate shelter. UN-HABITAT reported in July 2009 that Iraq had a housing shortage of at least 1.5 million units, and demand was increasing. According to UN-HABITAT, just over 70 percent of Iraqis lived in urban areas; more than 10 percent of the houses in these areas had more than 10 occupants, and more than 35 percent had 3 or more people per room. According to IOM, displaced families continue to have difficulty in finding adequate housing in their places of displacement, even several years after leaving home.
IOM reported that IDPs’ shelter arrangements include renting, moving in with friends and relatives, occupying empty public buildings, establishing collective settlements, and other arrangements (see fig. 6). However, these arrangements may not be sustainable because they impose costs on both the displaced and their host communities. For example, IOM reported that the majority of internally displaced Iraqis are living in rental accommodations, but, as time passes, rent prices increase and their ability to pay decreases. Friends and relatives, already struggling to provide for themselves, are additionally burdened by housing the displaced, according to UN-HABITAT. Moreover, IDPs living in settlements or public buildings may often be at risk of eviction by local authorities or private owners. Less than 1 percent of displaced Iraqis live in tent camps. According to USAID and international organization officials, without employment or other income-generating opportunities, displaced Iraqis may not return to their former communities or may have difficulty in reintegrating. In November 2009, IOM reported that 34 percent of the heads of returning households that it had surveyed stated that they could not find employment, even though they were able to work. IOM also found that employment rates were higher in certain governorates, such as Baghdad. Employment for IDPs has also been scarce and has varied across the governorates, according to USAID. According to IOM, 31.7 percent of the IDP families assessed had at least one employed family member as of December 2009. In general, employment in Iraq is scarce, according to USAID and UNHCR officials. The UN reported in January 2009 that the unemployment rate was 18 percent. In addition, the UN and IOM estimate that over 50 percent of the active population is unemployed or underemployed and that over 55 percent may face difficulties in covering basic living costs.
Underemployment and poverty pose a significant risk to the reconciliation and stability of the country, according to the UN. Moreover, IOM officials said that regaining former employment is difficult for displaced Iraqis. According to an international organization, the largest employer in Iraq is the government, but, according to IOM, returnees have difficulty in regaining prior government employment due to either discrimination or corruption. In March 2010, State reported allegations of employment discrimination by several ministries based on religious, ethnic, and political affiliations. The agricultural sector is the second-largest contributor to the economy, according to the UN. IOM reported the need to provide returnees and IDPs in rural areas who want to farm with the necessary means, such as land, seeds, fertilizers, tools, poultry, and cattle. In addition, according to IOM officials, many skilled professionals became displaced, and the longer they are displaced, the greater the likelihood that their skills will become outdated. Furthermore, MODM reported that Iraq lacks procedures to recognize professional certificates and diplomas acquired abroad. Officials from the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) and UNHCR are concerned that Iraqi refugees from professional or middle-class backgrounds may be reluctant to return for low-skilled and low-paying jobs, which could potentially affect government capacity and economic growth in Iraq. The lack of access to food and nonfood items is a deterrent to returns and reintegration, according to UNHCR officials. The top three priority needs identified by returnee families assessed by IOM were food (over 60 percent), fuel (over 40 percent), and other nonfood items (over 40 percent), according to a November 2009 IOM report. According to the UN, most Iraqis, including IDPs and returnees, receive monthly food rations from the Public Distribution System.
According to a UN report, although the Public Distribution System largely shields Iraqis from rising global food costs, local prices have climbed higher than global prices. According to IOM, IDP families have also reported having no or only partial access to the Public Distribution System. The World Food Program (WFP) reported in 2008 that distribution across the country had been uneven due to the conflict. Many IDP families have had difficulties in obtaining the proper documents to register for the Public Distribution System in their new locations, which is required to obtain rations, according to UNHCR officials. According to USAID officials, the re-registration of Public Distribution System cards was improving as of January 2010. Additionally, the rise in fuel prices and the difficulties of obtaining fuel have placed a considerable burden on many Iraqis, including IDPs and returnees, according to IOM and the ICRC. IOM further reported that the returnee families it assessed listed fuel as one of the highest priority needs. The UN reported in December 2008 that about 40 percent of Iraqis continued to suffer from poor water quality and sanitation services due to dysfunctional systems, network breakdowns, aging infrastructure, and frequent power supply interruptions (see fig. 7). According to IOM, access to potable water is a major concern of IDPs, returnees, and Iraqis in general. Although more than 80 percent of IOM-assessed returnees in 2009 had access to municipal water networks, the water may not have been potable, according to IOM. A 2007 survey of Iraqi households also found that although 81.3 percent of individuals lived in dwellings connected to public water networks, only 12.5 percent of these individuals reported that their supply of water was constant. According to the UN, sewage is also a common sight in many neighborhoods, and solid waste management at the family level is a serious problem (see fig. 8).
For example, IOM reported in May 2009 that because of blocked sanitation networks, several houses in Baghdad had been damaged by water and left structurally compromised or had collapsed. ICRC reported that a number of water treatment plants in Iraq had either shut down or reduced their operating capacity as a result of the electricity supply problems. According to DOD, the electricity supply for many Iraqis is still intermittent and unpredictable, although the gap between demand and supply has narrowed. UN-HABITAT reported in October 2009 that Iraqis experienced, on average, 16 hours of power interruption per day. Displaced and vulnerable Iraqis may also face challenges in obtaining access to health care. In November 2009, IOM reported that more than one-third of assessed families reported having no access to health care, but that this figure was higher for certain governorates (e.g., just over one-half of the assessed families in Baghdad). Lack of access is most often due to the distance to the nearest health care center or a lack of equipment and staff. ICRC reported that of the 34,000 doctors registered in 1990, at least 20,000 have left the country, and 2,200 doctors and nurses have been killed since 2003. In addition, ICRC reported that hospitals and other health facilities often lack drugs and other essential items. In 2009, OCHA reported that mental health issues were also a concern, because many Iraqis had been affected by conflict and displacement. In addition, IDPs may not have the money to secure transportation to public health facilities or purchase medication and services that are not available through the public system. Furthermore, according to the World Health Organization (WHO) and Internal Displacement Monitoring Centre officials, discrimination based on sectarian grounds and fear of traveling to health facilities could also limit access to health care. According to WHO officials, there is little data on the health needs of displaced Iraqis.
Because Iraq’s public health system does not generate up-to-date information, WHO officials said that they have had to rely on surveys conducted in 2006 and 2007 for much of their information. These officials also said that without sufficient health data, decision makers will not have the information to identify vulnerable populations, such as displaced Iraqis, and develop strategies to meet Iraq’s health needs. According to IOM, just under 10 percent of the returnee families with school-age children in Iraq reported having no access to education; however, this figure varies greatly across the country. For example, almost two-thirds of returnee families in the Babylon governorate reported having no access to schooling. According to USAID, schools have been damaged and looted (see fig. 9). According to OCHA, military interventions in Baghdad during March and April 2008 caused the closing of 22 schools, 11 of which sustained major damage. During this period, curfews were imposed and attendance dropped to 30 percent. In addition, IOM reported in April 2009 that many Christian IDPs from Mosul were unable to enroll in school because they lacked documentation. Furthermore, according to the UN Children’s Fund (UNICEF), a number of Iraqi schools are overcrowded and lack proper sanitation facilities, which would make it difficult for these facilities to absorb returning displaced children. Also, UNICEF reported in January 2009 that a number of students who were returning to their homes after being displaced may not have registered with the government to receive standard school supplies. IOM and UNHCR officials said that shortfalls in the Iraqi government’s capacity and commitment have limited the potential for reintegrating displaced Iraqis. According to U.S. and international assessments and officials, years of neglect, a highly centralized decision-making system under the former regime, and looting in 2003 decimated Iraq’s government ministries (see fig. 10).
In March 2009, GAO reported that Iraqi ministries had significant shortages of personnel who could formulate budgets, procure goods and services, and perform other ministry tasks. GAO also reported that violence and sectarian strife; the exodus of skilled labor from Iraq; and the weakness in Iraqi procurement, budgeting, and accounting procedures limited the Iraqi government’s ability to spend its capital project budget. According to U.S. and UNHCR officials, although there has been some progress, the Iraqi government appears noncommittal in addressing displacement issues. For example, the Iraqi Prime Minister has appointed a senior official to coordinate IDP and refugee issues, but the Iraqi government does not appear ready to direct significant resources to assist refugees. DOD also reported in 2009 that, given other priorities, engaging Syria and Jordan on the return of a largely Sunni refugee population remained a low priority for the Iraqi government. Furthermore, U.S. and international organization officials said that the Iraqi government has not given MODM the authority and capacity to direct or coordinate government efforts to address displacement issues. In addition, IOM officials stated that many of the Sunni Iraqi refugees will not return until they see true national reconciliation in Iraq because they do not trust the current Iraqi government to protect them. Although international and nongovernmental organizations and the U.S. and Iraqi governments have taken action to address the impediments to return and reintegration that Iraqi IDPs and refugees face, the extent to which these efforts will result in reintegration (i.e., sustainable returns) is unknown. The extent to which these projects specifically target and affect reintegration is not consistently measured or reported in the aggregate against international goals for reintegration. U.S.
goals and outcomes for these efforts were classified or considered sensitive, and thus an unclassified assessment of progress made toward U.S. goals could not be provided. Moreover, the Iraqi government has made limited progress due to the lack of uniform government support and capacity, according to international community officials. A March 2010 report stated that the rates of return of IDPs and refugees had not increased in the last year. International and nongovernmental organizations, supported by U.S. and other donor funding, have initiated many projects to address impediments to returns and reintegration of displaced Iraqis. However, according to international organization and U.S. government officials, the extent to which these projects specifically target and affect reintegration is not consistently measured or reported in the aggregate against measurable goals and objectives for reintegration. According to the UN, international efforts focus on all vulnerable Iraqis. Thus, these projects target a mix of vulnerable populations in Iraq, including IDPs, returning refugees, non-Iraqi refugees, other conflict victims, and the communities that host them (see table 1). According to international and U.S. government officials, host communities are often equally vulnerable, and including them reduces the likelihood of resentment toward providing assistance to IDPs and returning refugees. Through its 2009 Consolidated Appeal for Iraq and the Region, the UN coordinated most international organization efforts and funding to meet humanitarian needs in Iraq and for Iraqi refugees and the communities that host them in neighboring countries. In June 2009, the UN revised its consolidated appeal by increasing the amount requested from $547.3 million to $650.4 million. According to OCHA, the U.S. government contributed about 71 percent of new contributions to the 2009 appeal and funded at least 56 percent of all reported 2009 assistance to Iraq and the region.
The UN did not issue a consolidated appeal for Iraq and the region for 2010. Instead, UN assistance requests are primarily found in three documents. The Iraq 2010 Humanitarian Action Plan, developed by 9 UN agencies, IOM, and 12 NGOs operating in the country, focuses on overall humanitarian assistance for Iraq, including efforts that also target IDPs and returnees, and requests about $193.6 million. The UNHCR Global Appeal 2010-2011 for Iraq focuses on IDPs, returning refugees, and other refugees and stateless people inside Iraq, and requests about $264.3 million for 2010, of which about $31.1 million is targeted for reintegration projects that include returnees and host communities. The Regional Response Plan for Iraqi Refugees focuses on Iraqi refugees and the host communities in 12 neighboring and other countries and requests about $364.2 million. In a February 2009 speech, President Obama stated that diplomacy and assistance were required to help displaced Iraqis. This speech established a policy that National Security Council (NSC), State, USAID, and DOD officials follow in finding durable solutions for displaced Iraqis, including reintegrating voluntary returns within and to Iraq. While the following information provides examples of U.S. diplomatic efforts and assistance, we note that overall U.S. goals, objectives, and outcomes for these efforts were classified or considered sensitive information during our review. Accordingly, the information in this section is descriptive and provides no assessment of the extent to which U.S. assistance is achieving its intended goals. Subsequent to concluding our field work and our exit meetings with U.S. agencies and the NSC, the NSC provided GAO with an unclassified summary, which had not been made public, of a classified May 2009 U.S. government strategy regarding support for returning Iraqi refugees and IDPs. The stated U.S.
goal was to create conditions inside key areas of Iraq that will allow the maximum number of voluntary returns to be sustainable. Objectives were provided for fiscal years 2009-2011. However, the NSC also noted that the summary prepared for GAO in July 2010 was based on a “historical document,” should be viewed in that context, and had not been updated to reflect the current situation. In August 2009, the White House announced that the NSC’s Senior Director for Multilateral Affairs and Human Rights would serve as its coordinator on Iraqi refugees and IDPs. In November 2009, the Senior Director and the Assistant Secretary of State for Population, Refugees, and Migration met with the Iraqi government’s refugees and IDP coordinator and the head of the Iraqi Prime Minister’s Implementation and Follow-up Committee on National Reconciliation to discuss the challenges related to the return and reintegration of displaced Iraqis. The officials subsequently issued a joint statement that described steps agreed upon by both the U.S. and Iraqi governments to assist Iraq’s displaced population and support national unity. One of the agreed-upon steps was to promote cooperation with other nations to broaden international support efforts and thereby make returns more sustainable. In August 2009, the White House also assigned a senior Foreign Service Officer to take up the post of Senior Coordinator for Iraqi Refugees and Displaced Persons at the U.S. embassy. The Senior Coordinator is responsible for coordinating the U.S. government’s work in Iraq on refugees and IDPs and representing the United States on Iraqi displacement issues with the Iraqi government, the international community, and NGOs. To provide humanitarian and developmental assistance, the U.S. government primarily contributes funds to UN appeals and provides bilateral assistance through its implementing partners. U.S.
funding does not solely target returnees; it supports programs that include assistance for both returnees and other vulnerable populations. As of September 30, 2009, State and USAID had obligated about $1.9 billion and expended about $1.5 billion in fiscal years 2003 through 2009 for all Iraq-related humanitarian assistance in Iraq and the region. This total included funds to assist Iraqi refugees and the communities that host them in neighboring countries (see app. III for funding and intended beneficiaries). Table 2 provides State’s Bureau of Population, Refugees, and Migration’s (PRM) implementing partners, activities, country locations, and funding obligated for fiscal year 2009. Of these activities, about $45 million of the $303.4 million obligated in fiscal year 2009 was allocated to IOM and UNHCR for projects under State’s new “returns program” in Iraq. These two programs are targeted to benefit returnees and other vulnerable Iraqis. USAID’s Office of U.S. Foreign Disaster Assistance (OFDA) funds and oversees a wide range of humanitarian assistance activities that are implemented by a number of NGO and UN partners who provide programs for IDPs and other vulnerable Iraqis. Table 3 provides USAID/OFDA’s implementing partners, activities, locations, and funding obligated for fiscal year 2009. Of the approximately $83.4 million obligated, $60 million was for programs intended to provide direct assistance to returning families; support to communities with significant numbers of current or anticipated returnees; and general assistance to vulnerable populations, regardless of displacement status. According to USAID/OFDA, since Iraq is transitioning from an emergency to a development phase, OFDA plans to conclude its work in Iraq in 2011. In addition, USAID’s Middle East Bureau/Office of Iraq Reconstruction (ME/IR) and USAID/Iraq at the U.S. embassy in Baghdad continue to support programs focusing on development assistance.
Although not directly tied to current reintegration efforts, development assistance could improve conditions in Iraq that could increase the number of returns and foster reintegration, according to U.S. and international organization officials. As of September 30, 2009, USAID had obligated about $6.4 billion and expended about $5.6 billion in fiscal years 2003 through 2009 for development assistance projects in Iraq (see app. IV for funding by source and funding by implementing partner). For example, USAID’s Community Stabilization Program, completed in October 2009, offered employment activities, vocational training, small grants, and small infrastructure projects in communities affected by insurgent violence. USAID’s development assistance also supported programs focusing on building capacity for all levels of the government and other organizations. For example, in July 2006, USAID implemented the National Capacity Development Program, known in Arabic as Tatweer. The aim of this program is to increase the effectiveness of ministries by reforming internal operational systems and instituting best practices and international standards. The program is expected to end in January 2011. Tatweer works with 10 ministries, including MODM, and 7 executive offices. For MODM, Tatweer is providing assistance on capacity-building activities, including improvements to the information technology infrastructure and the management of relief supplies, according to USAID. Finally, DOD provides assistance in Iraq through its Commander’s Emergency Response Program. This program enables local commanders to respond to urgent humanitarian relief and reconstruction requirements within their areas of responsibility by carrying out programs that immediately assist the local population. According to DOD officials, although the program is not targeted to returns and reintegration, in some cases, relief and reconstruction are carried out in areas heavily populated by IDPs. 
DOD had obligated about $3.6 billion in fiscal years 2004 through 2009 for projects under the program in Iraq, including water and sanitation, health care, and other projects, according to DOD officials. The Iraqi government’s efforts to encourage returns and reintegration have been limited by insufficient political commitment and capacity within the government, according to international organization and U.S. government officials. The Iraqi government has developed policies and taken initial steps to assist IDPs and encourage voluntary returns and reintegration. MODM issued a National Policy on Displacement and the government issued a decree and orders that allow for financial stipends and assistance in safely recovering property. However, the international community has reported that MODM was not able to implement its policy, and that bureaucratic challenges, based on lack of capacity and political commitment at various levels of the Iraqi government, have prevented many returnees from recovering their property and receiving stipends. In June 2009, DOD reported that “serious efforts” to facilitate the return of refugees by the Iraqi government have been “all but non-existent.” MODM, a relatively new ministry, has lacked the authority and capacity to lead ministerial efforts regarding returns and reintegration, according to international organization and U.S. government officials. In July 2008, MODM issued a National Policy on Displacement, which recognized displacement as a key challenge facing the government of Iraq and the international community. The policy set a goal to find durable solutions for displaced Iraqis, established objectives, stressed the rights of displaced persons, described the basic needs of Iraqi IDPs, and recommended activities to address the needs. However, the policy was not fully implemented because MODM lacked the authority and capacity to coordinate efforts within the Iraqi government, according to international organization and U.S. 
government officials. According to officials, the more established ministries—such as Defense, Interior, Health, Education, and others—continued to work independently of MODM. Furthermore, MODM did not have uniform support at all levels of the government for the policy or for efforts to facilitate the return of refugees of all sects, according to international organization, NGO, and U.S. government officials. Moreover, MODM received a relatively small budget in 2008 because its role was originally viewed as primarily a coordinating rather than an implementing role, according to U.S. government and international organization officials. For 2009, the total Iraqi budget decreased, including that of MODM. According to the MODM Minister and U.S. government and international organization officials, the budgeted amount for 2009 was insufficient (see table 4), particularly since MODM began implementing programs and delivering services. However, according to officials, other ministries may be independently assisting IDPs and returnees through their own budgets and efforts. For example, the Ministry of Housing is planning to build shelters, according to officials. For 2010, the MODM budget was slightly higher than the amount that it expended in 2008 and more than triple the amount of its 2009 budget. The Iraqi government issued a decree and orders to facilitate certain returns and reintegration for some displaced Iraqis, primarily in Baghdad and Diyala (see table 5); however, progress has been limited. International organizations and NGOs have identified problems regarding this decree and these orders and their implementation. According to USIP, Decree 262 and Order 101 cover only a limited segment of the displaced population, require extensive documentation that returnees may have lost due to displacement, do not clarify the roles of the various agencies involved in the process, and do not dedicate resources for administration and oversight. 
USIP reported that by the end of 2008, about 10,000 returnee families had registered to receive the grant under Decree 262, but only a small number had received it. In January 2009, the volume of new cases dropped significantly, which, according to UNHCR, IOM, and NGOs, may have been due to the low rate of payments. According to international organization and U.S. government officials, the amount of the stipends under these orders is insufficient to cover expenses and serve as an incentive for returns. In addition, IMC officials said the Iraqi government has not been proactive in providing squatters with the 6 months of rental assistance due under Decree 262. Moreover, according to OCHA and IMC, MODM issued a Ministerial Order on February 12, 2009, that precluded further registration of IDPs for benefits and refocused efforts on monitoring returnees. According to OCHA, the order sought to prevent double registrations and forgery; however, it may restrict legitimate IDPs’ access to benefits. OCHA further noted that the Ministerial Order may restrict unregistered IDPs’ ability to register as returnees and receive benefits under Decree 262 and Order 101, since they have to be registered as IDPs first to re-register as returnees. According to U.S. government officials, a key indicator of Iraqi government progress will be how the Iraqi government, at the central, governorate, and local levels, moves forward with its funding for and implementation of Order 54 regarding returns and reintegration in Diyala. The Iraqi government has made Diyala the focus of an initiative, led by the Follow-up Committee for National Reconciliation, to create conditions for large-scale IDP and refugee returns. According to State, the Iraqi government has pledged 37 billion Iraqi dinars (about $30 million) for use by the Diyala Governor to reconstruct destroyed homes and pledged to provide 6-month contract jobs for up to 10,000 returnee families and 10,000 nonreturnee families.
DOD reported in April 2010 that the Iraqi Security Forces continue to make progress in improving security in Diyala by eliminating insurgent support and thereby setting the conditions for economic recovery and the return of displaced Iraqis. However, the perception of disproportionate targeting of Sunnis has strained sectarian relations, allowing Shi’a extremists and criminal elements much greater freedom of movement. Iraq, the United States, and other members of the international community lack an integrated strategy for the reintegration of displaced Iraqis. An effective strategy would be integrated and would provide Iraq and its implementing partners with a tool to shape policies and programs so that stakeholders can achieve the desired results in an accountable, efficient, and effective manner. International community stakeholders agree that to be effective, the strategy should be Iraqi-led with the assistance of the international community. The lack of an integrated strategy for reintegration has resulted in a lack of agreed-upon strategic goals and outcomes, has hindered efforts to efficiently and effectively assess the needs of Iraqi IDPs and returnees, and has hindered stakeholder coordination and the efficiency of service delivery. Iraq, the United States, and other members of the international community lack an integrated plan for reintegrating displaced Iraqis because Iraqi MODM planning efforts stalled due to limitations of authority, capacity, and broader Iraqi government support; the UN’s strategy and plans have primarily focused on assistance to the most vulnerable Iraqis and have not specifically focused on reintegration; and the current U.S. government strategy has not been made publicly available. According to international organization, U.S.
government, and NGO officials, MODM does not have the authority, capacity, or Iraqi government support to implement its displacement policy and develop an effective strategy. In July 2008, MODM issued the National Policy on Displacement. This policy offers a general description of the problem, identifies basic goals, defines terms, stresses the rights of displaced persons, describes the basic needs of Iraqi IDPs, and recommends activities to mitigate some of the problems identified. The policy also calls for setting up a comprehensive, effective, and realistic workplan; providing adequate protection and assistance to displaced persons; specifying coordination structures among all state institutions; and allocating funds and developing financial procedures for the implementation of the policy. However, international organization, U.S. government, and NGO officials noted that MODM efforts have stalled because the ministry has had little authority or ability to coordinate efforts within the Iraqi government to implement the policy and develop an effective strategy. International and nongovernmental organization officials have expressed concern about the lack of unified Iraqi support for the policy and development of a strategy. UNHCR, IOM, and other actors will continue to build on the National Policy on Displacement as well as relevant legal authorities that we have described previously, according to the UN. According to State officials, the extent to which the Iraqi government implements Order 54, which focuses a range of efforts in Diyala and essentially makes Diyala a test case, may determine the future development of a viable strategy. Overall UN strategic efforts in Iraq have targeted humanitarian assistance to the most vulnerable Iraqis, which may or may not include IDPs and returnees, but are not specifically focused on reintegration. 
The United Nations 2008-2010 Iraq Assistance Strategy focuses on needs and planned assistance by sector, and although it occasionally mentions the impact of IDPs on sectors and includes a few broadly stated outputs regarding IDPs, it does not address reintegration. As part of its strategic approach, the UN issued its consolidated 2009 funding appeal for assistance efforts in Iraq and in neighboring and other countries hosting refugees. The UN noted in its midyear review that, although IDPs were returning, large numbers of returns had not yet materialized and should not be encouraged. Thus, the UN continued to address the needs of vulnerable groups within the entire population and not to limit efforts to IDPs and returnees. In its 2010 appeal, the UN interspersed some new efforts intended to facilitate returns and reintegration while also assisting other vulnerable Iraqis. However, after making progress in consolidating its 2009 appeal, the UN divided its 2010 appeal into three planning documents, further fragmenting its initial planning efforts to address returns and reintegration. Additionally, although the initial planning efforts may include outputs, such as having at least 35 mobile teams and 14 Protection and Assistance Centers provide legal aid and monitor the needs of people of concern, they do not define reintegration (i.e., what constitutes a sustainable return) or include specific indicators or outcomes for reintegration, as would be expected in an effective strategic plan. According to NSC, State, and USAID officials, the U.S. strategy regarding the reintegration of Iraqis is delineated in three classified or sensitive documents that have not been made available in a public document. Also, an unclassified version of the current U.S. strategy has not been developed and made public.
Administration officials stated that the classified and sensitive documents were not drafted with the aim of creating a publicly announced strategy to persuade Iraqis to return home; rather, they are planning documents describing how to use U.S. assistance to ensure that Iraqis who choose to return to Iraq have support systems in place. In the absence of a publicly available strategy, administration officials stated that the United States will focus on the three efforts announced by the U.S. President in February 2009. The President stated that the administration would provide more assistance and take steps to generate international support for countries hosting refugees, cooperate with others to resettle refugees facing great personal risk, and work with the Iraqi government over time to resettle refugees and displaced Iraqis within Iraq. Subsequent to concluding our field work and our exit meetings with U.S. agencies and the NSC, in July 2010 the NSC provided GAO with an unclassified summary of a classified May 2009 U.S. government strategy regarding support for returning Iraqi refugees and IDPs. While the summary was made available to GAO, it had not been made public. The NSC summary included a fiscal year 2010 objective to assist the Iraqi government, in coordination with international organizations and other donors, in developing a comprehensive strategy to support the reintegration of displaced Iraqis. The strategy was to include the active participation of the Iraqi government line ministries. However, such a strategy was not developed. The NSC noted that the summary prepared for GAO was based on a “historical document,” should be viewed in that context, and had not been updated to reflect the current situation. Clearly defined and agreed-upon strategic goals and intended outcomes for reintegration have not been developed. Strategic goals explain what results are expected and when they are expected.
A direct alignment between strategic goals and strategies for achieving those goals is important for assessing an ability to achieve those goals. In the case of reintegrating displaced Iraqis, key parameters have not yet been agreed upon, which makes it difficult to establish measurable goals. For example, the international community has no agreed-upon determination of when the displacement it is addressing in Iraq began or when the displaced are considered reintegrated. MODM's National Policy on Displacement includes a focus on an estimated 1.2 million Iraqis who were displaced over the 40 years before the fall of Saddam Hussein's regime in 2003 and on an estimated more than 1.6 million who were internally displaced afterward, while the Iraqi government's 2008 and 2009 orders and decree focus assistance on a limited segment of the displaced population. The U.S. government and some international organization programs are more focused on displacement since 2003, particularly the large displacement occurring after the February 2006 Samarra bombing. Similarly complex is agreeing upon when the displaced are considered reintegrated and international assistance is no longer required for reintegration. Reintegration is defined as "sustainable returns," but a clear and uniform definition of "sustainable" in the context of Iraq has not been agreed to by the international community. MODM's National Policy on Displacement defines durable solutions as based on three elements—long-term security, restitution of or compensation for lost property, and an environment that sustains life under normal economic and social conditions. Under the policy, the displaced may return to their home or place of habitual residence; integrate locally into the social, economic, cultural, and political fabric of the community where they initially found temporary refuge; or resettle in a new community. 
However, restitution of or compensation for lost property may occur long after the displaced return, integrate, or resettle, and there is little agreement on what constitutes normal economic and social conditions for Iraq. According to U.S. government officials, they plan to address the lack of clarity and agreement over definitions and parameters as they develop their plans to assist the Iraqi government in reintegration efforts. The lack of an integrated strategy has hindered efforts to efficiently and effectively assess the needs of Iraqi IDPs and returnees. A strategy for reintegration must include information on the needs of displaced Iraqis and be updated on the basis of the assistance provided and remaining needs. While various UN agencies, affiliated organizations, and their implementing partners have collected and assessed data for their specialized work in Iraq, gaps remain. In addition, the UN has not integrated data from UNHCR into its new Inter-Agency Information Analysis Unit, which was established to provide a central point for collecting and assessing needs-based data, according to a senior IAU official. Over time, the UN and its partners have individually attempted to identify and estimate the numbers of vulnerable Iraqis, internally displaced, returnees, and Iraqi and non-Iraqi refugees; survey returnees and IDPs on their reasons for leaving, immediate needs, and priority needs for return; document protection, property, livelihood, and governance issues; and determine the status of essential services across the country. However, international organization, NGO, and U.S. government officials stated that it was often difficult to identify the best data available because data from different sources did not always agree, some of the data were incomplete or outdated, or the methodology used to obtain and assess the reliability of some data was not clear. 
According to UNHCR, OCHA should have been coordinating the data collection and assessments from the beginning, but it did not initially have a presence in Iraq. As a result, each organization collected and assessed its own data, according to UNHCR. According to some officials, the Iraqi population has been oversurveyed as a result of these separate assessments. According to international and nongovernmental organizations, gaps in information and data remain. For example: UNHCR predicted that over 400,000 refugees would return to Iraq in 2009. These returns did not materialize, and no further fact-based assessments and predictions on the rate of return have been made to facilitate planning efforts, according to UNHCR and U.S. government officials. According to international organizations, no inventory or analysis has been conducted of the various financial assistance programs available to IDPs and returnees to determine gaps, overlap, or impact. As a result, there is no assurance that resources are allocated in a rational and fair manner. Some international organizations provide returnees and vulnerable populations with cash, cash for work, and in-kind grants for business development. USIP reported that, in addition to the grants provided by the central government, ministries, provincial governments, and municipalities provide other forms of financial assistance and other specific funds for houses damaged in particular military operations, and that a variety of victims and martyrs commissions provide other sorts of compensation. Despite efforts to improve outreach and surveys of vulnerable populations, some areas have not been accessible to international organizations and NGOs due to security concerns and a lack of trained national staff. 
To begin to address this problem, in 2009, OCHA planned to inventory and train national NGOs through three workshops and subsequently carried out training inside Iraq for 74 Iraqi NGOs on humanitarian principles, rapid needs assessments, and results-based management. By 2011, UNHCR plans to increase its presence in Iraq by relocating staff from Jordan and Kuwait; increasing its network of national NGOs across Iraq; and working through international NGO partners to provide support, oversight, and a review of the capacity of national NGOs to access areas, identify vulnerable populations, and provide assistance, particularly should security deteriorate. To address data gaps and overlap, in February 2008, the UN established the IAU in Amman, Jordan, under the direction of OCHA and the United Nations Assistance Mission for Iraq. According to a senior IAU official, the primary purpose of the IAU is to be a "one-stop shop" for collecting and providing data on Iraq and to ensure that the best data are available. The IAU is intended to bring together analysts from UN agencies and NGOs to facilitate and enhance data collection, sharing, analysis, and joint assessments; provide timely and accurate information on the situation and needs in the different areas of Iraq; and increase coordination to reduce project duplication and maximize the targeting of vulnerable communities. According to the senior IAU official, in the spring of 2010, the UN Country Team established a new steering committee composed of agency heads that met for the first time to set priorities and develop a work plan for the IAU. As of July 2010, the IAU has staff in Jordan and Iraq, including new governorate-based Information Management Offices. According to the senior IAU official, the IAU now receives data and assessments from most organizations conducting work in Iraq, has analysts from most of the major contributors as part of its team, and helps plan for and coordinate future surveys. 
Through an agreement between the UN and the U.S. government, the IAU will also begin to analyze declassified U.S. databases and share information. However, according to the senior IAU official, although UNHCR is a participating agency of the IAU, is a member of the UN Country Team, and shares its reports with the IAU, UNHCR is not fully participating in the IAU. UNHCR is not sharing the raw or primary data it collects on IDPs, returnees, and vulnerable populations or its methodology and data limitations; it has not provided an analyst to work with the IAU team; and it is not taking advantage of IAU resources and coordination. The IAU official stated that, as a result, UNHCR issues are not on the agenda, and other agencies are unaware of the composition and quality of UNHCR data. The official added that UNHCR is not taking advantage of IAU staff expertise and lessons learned on how to implement surveys using NGOs and how to scrub and assess raw data. For example, UNHCR conducted a survey of returnees but has not shared its questionnaire and raw data with the IAU. Moreover, UNHCR is not involved in planning future surveys, such as a major IAU activity this year to work with the Iraqi government and civil society to develop a socioeconomic monitoring system for Iraq within the Central Organization for Statistics and Information Technology and the Kurdistan Regional Statistic Office, according to the IAU official. According to UNHCR and IAU officials, UNHCR had initially assigned an analyst to the IAU but has not refilled the position since the staff member left in 2009. According to the IAU official, UNHCR informed the IAU that it had abolished the position because it did not have a qualified staff member to detail to the IAU. UNHCR officials stated that they found little added value from having a staff person detailed to the IAU. 
Without an integrated strategy, it is difficult for stakeholders to effectively delineate roles and responsibilities and establish coordination and oversight mechanisms for effective and efficient implementation. The MODM Minister stated that his ministry's initial role was limited to that of a coordinating body, leaving no single entity charged with implementing the necessary tasks. The Minister added that although the Ministries of Health, Education, Interior, and Defense are essential to addressing impediments to returns, they do not have programs specifically focusing on IDPs. Roles and efforts among international organizations may overlap, particularly since organizations plan their work independently of each other and work bilaterally with local leaders, the Iraqi government, and donor country agencies. According to international and NGO officials, decreasing international donor community contributions to these organizations have caused them to compete for funding and trained national staff. At UN Country Team meetings and UN Assistance Mission for Iraq activities, officials at one agency stated that while some information is shared, organizations "protect their turf," and opportunities to build on the efforts of others are lost. According to IOM and UNHCR, although organizations try to avoid conflicts by focusing their efforts in different sectors—such as UNHCR focusing its projects on shelter and property issues, IOM focusing on livelihood projects, and WFP focusing on delivery of food—efforts may overlap. For example, WFP is expanding its focus in Iraq to include livelihood projects. According to IMC, coordinating committees are prolific in Iraq, but they are not always effective. For example, IMC and USAID/OFDA were working on shelter rehabilitation in one area, only to find out from field staff that UNHCR was doing similar work. 
According to IMC officials, they have been involved in the UN sector outcome teams, but the meetings were generally held in Amman without an Iraqi government presence, thereby limiting effective coordination. One area with significant potential for overlap is the establishment of numerous assistance centers and mobile units across Iraq to register or assist returnees, IDPs, and vulnerable Iraqis. International and U.S. government officials expressed concerns about the need for multiple centers, possible inefficiencies, and the extent to which the MODM will be capable of assuming responsibilities for the centers in the future. Although each center initially had its own purpose, some of the activities at these centers now overlap, and all require oversight and administrative support, according to international organization officials. A number of these centers are funded by State and USAID and managed or supported by MODM, UNHCR, IMC, and IOM. A sample of these centers includes the following: MODM Return and Assistance Centers: According to UNHCR, as of July 2010, MODM had established three main Return and Assistance Centers—two in Baghdad (Karkh and Resafa) and one in Diyala—to register and assist displaced Iraqis who want to return to their original homes. In addition, each of the 14 MODM branches outside of Baghdad and Diyala has a Registration Department where the same functions are performed. The centers register new arrivals, streamline returnee access to assistance, offer returning Iraqis legal aid and advice, assist in resolving property disputes, help replace lost documents, and help with access to MODM and government benefits. IMC supports the Karkh and Diyala centers with funding from USAID and strategic guidance from UNHCR. According to an IMC official, IMC is essentially comanaging the centers at MODM's request because MODM lacks trained staff. IMC also supports some of the MODM Registration Departments. 
UNHCR, with State's PRM funding, supports operations of the Resafa center, including its mobile teams, and supports two of the Karkh center mobile teams. According to UNHCR, although it is not ideal to have a medical NGO comanaging the centers, IMC was one of the few UNHCR partners and international NGOs positioned in Iraq when the centers were established. UNHCR Protection and Assistance Centers: As of March 2010, UNHCR had established and continued to operate 15 Protection and Assistance Centers and 40 associated mobile teams that provide services to displaced, returning, and vulnerable Iraqis and others in all 18 governorates in Iraq. As of May 2010, the centers had a total of 125 staff, including lawyers, social workers, monitors, and public information and database officers. The centers conduct protection monitoring assessments to identify needs, gather information, and identify opportune interventions regarding basic human rights and physical security; provide legal assistance addressing a broad spectrum of needs, including legal counseling and interventions and access to services, documentation, and compensation; provide assistance and referrals to services and other stakeholders, such as authorities, NGOs, UNHCR, or other Protection and Assistance Centers; and provide briefings and information sessions to raise awareness of protection needs. UNHCR Return Integration Community Centers: In mid-2009, UNHCR established and began operating 12 Return Integration Community Centers to expand its capacity to reach out to return communities. The centers coordinate with and relay information to local communities; conduct needs assessments; and address the social, assistance, and information needs of displaced and returning IDPs and refugees. Six of the centers are based in Baghdad and the others are based in Anbar, Basrah, Diyala, Kirkuk, Missan, and Ninewa. UNHCR plans to increase the number of these centers to at least 16 in 2010. 
As of May 2010, these centers had a total of 159 staff. IOM Community Outreach and Women Centers: IOM and its partners are establishing four Women Centers with funding from State. The centers will provide legal aid, psychosocial support, health counseling, and livelihood support to the most vulnerable IDP and returnee female-headed households in Baghdad, Diyala, and Missan. After our fieldwork discussions with UNHCR and U.S. government officials, UNHCR informed us in June 2010 that it was taking action to address the multiple assistance centers and the potential for duplication and lost efficiencies. First, UNHCR informed us that it had agreed to merge all Protection and Assistance Centers and Return Integration Community Centers in 2011 to reduce administrative costs. Second, UNHCR, in discussions with the U.S. Embassy, suggested that all MODM Return and Assistance Center activities be placed under one management umbrella. According to UNHCR, doing so would enable it to have a more harmonized approach that would avoid potential confusion and duplication. UNHCR also stated that this approach would provide it with the opportunity to harmonize staff payments and incentives. It is in the U.S. government's interest to work with Iraq and international community stakeholders to develop an integrated international strategy for reintegrating displaced Iraqis that transitions efforts and costs over time to the Iraqi government. First, Iraq is a sovereign nation that should lead efforts to address impediments to the return and reintegration of all displaced Iraqis. Second, in MODM's National Policy on Displacement, the Iraqi government states that it cannot address this issue without the help of the international community. Third, in fiscal year 2009, the United States funded more than one-half of the humanitarian assistance provided to Iraq, and the lack of an international strategy may result in lost efficiencies and wasted funds. 
One possible example of this may be the administration of many assistance centers and mobile units across Iraq. Furthermore, President Obama stated in his February 2009 speech on responsibly ending the war in Iraq that the United States will pursue a transition to Iraq and that the United States has a moral responsibility to help displaced Iraqis. We recognize that strategies themselves are not end points, but starting points, and that implementation is the key. However, an integrated strategy—along with transparent goals and shared, accurate data on the conditions and effectiveness of projects—is useful in suggesting ways to enhance the value of plans, filling in gaps, speeding implementation, guiding resource allocations, and providing oversight opportunities. To enhance the ability of the Iraqi and U.S. governments, international organizations, and NGOs to effectively plan and integrate their efforts to assist and reintegrate displaced Iraqis, we recommend that the Secretary of State and the USAID Administrator work with the appropriate international organizations to assist the Iraqi government in developing an international strategy that addresses impediments to return and prepares for and facilitates the return and reintegration of displaced Iraqis. To ensure that the U.S. goals and plans are fully integrated with those of Iraq and other international community stakeholders and that progress toward meeting those goals is transparent, we recommend that the Secretary of State and USAID Administrator make public an unclassified version of the current U.S. strategy and their implementing plans for assisting and reintegrating displaced Iraqis, including their goals, performance measures, and progress assessments. To ensure that the U.S. 
and Iraqi governments, other donors, international organizations, and implementing partners have the best data available regarding the numbers and needs of IDPs, returnees, and other vulnerable Iraqis, in the most efficient manner, we recommend that the Secretary of State encourage UNHCR to share its raw data and methodology with the IAU and take advantage of IAU expertise and coordinated efforts. To ensure the effective and efficient use of resources by its implementing partners, we recommend that the Secretary of State and USAID Administrator work with UNHCR and its other implementing partners to take inventory of and assess the purposes, organization, operations, and results of the various assistance, return, and registration centers and mobile units in Iraq to determine and achieve an optimal framework for assisting IDPs, returnees, and other vulnerable Iraqis. We provided a draft of this report to the Departments of State and Defense and USAID. State and USAID provided written comments, which are reprinted in appendixes V and VI. DOD provided oral comments, which are summarized below. State and DOD also provided technical comments, which we incorporated where appropriate. In commenting on a draft of this report, State and USAID agreed with our recommendations regarding the need to assist the Iraqi government in developing an international strategy for reintegrating displaced Iraqis and to make public an unclassified version of the current U.S. strategy and their implementing plans. State and USAID also agreed with our recommendation regarding the need to work with UNHCR and other implementing partners to take inventory of and assess the various assistance, return, and registration centers and mobile units to determine and achieve an optimal framework. USAID and State noted that efforts to address this recommendation have begun. According to State, UNHCR has begun to consolidate services and plan the merger of centers. 
State also agreed with our recommendation regarding the need to encourage UNHCR to share its raw data and methodology with the IAU and take advantage of IAU expertise and coordinated efforts. In addition, DOD commented that it agreed with the report and supports State and USAID in the execution of their mission to assist and reintegrate displaced Iraqis. We will send copies of this report to interested congressional committees, the Secretary of State, the Administrator of USAID, and the Secretary of Defense. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. To examine efforts to reintegrate displaced Iraqis, we reviewed (1) the conditions in Iraq that pose a challenge to their reintegration; (2) the actions that the United States, Iraq, and other members of the international community have taken to address these conditions and reintegration; and (3) the extent to which the United States, Iraq, and other members of the international community have an effective strategy for reintegrating displaced Iraqis. When reintegration challenges and efforts were intertwined with efforts to assist internally displaced and vulnerable Iraqis, we included both in our scope. We conducted fieldwork in Washington, D.C.; New York City (United Nations (UN) agencies); Geneva, Switzerland (United Nations High Commissioner for Refugees (UNHCR), International Organization for Migration (IOM), and other international organization headquarters); and Iraq. We also conducted telephone interviews with UN officials in Amman, Jordan, who were responsible for work in Iraq. Within the U.S. 
government, we reviewed documents and interviewed officials of the National Security Council's (NSC) Office of Multilateral Affairs and Human Rights; Department of State's (State) Bureau of Population, Refugees, and Migration (PRM) and Bureau of Near Eastern Affairs; Department of Defense's Office of the Secretary of Defense and Joint Staff; the U.S. Agency for International Development's (USAID) Office of Foreign Disaster Assistance (OFDA) and Middle East/Iraq Reconstruction Office; the Central Intelligence Agency; the U.S. missions in New York and Geneva; the U.S. embassy and USAID mission in Baghdad, Iraq; and the Multi-National Force-Iraq representative to the U.S. Embassy's IDP Working Group. Within the Iraqi government, we interviewed the Iraqi Minister of Displacement and Migration and reviewed Iraqi government and ministerial documents, including publicly available reported numbers of IDPs and returnees. We toured the facilities and interviewed Iraqi and nongovernmental organization (NGO) officials at the Karkh Return and Assistance Center in Baghdad, Iraq. We interviewed officials and reviewed documents from international organizations, including UNHCR, IOM, the UN Office for the Coordination of Humanitarian Affairs, the International Committee of the Red Cross, the UN Department of Political Affairs, the UN Development Programme, the World Health Organization, the World Food Program, and the UN Children's Fund (UNICEF). We also reviewed documents from the UN Human Settlements Programme (UN-HABITAT). With the assistance of InterAction in the United States and the International Council of Voluntary Agencies in Geneva, Switzerland, we held discussion groups with international NGOs that had, have, or plan to have a presence in Iraq to discuss challenges to reintegration, actions taken and planned, and gaps remaining to be addressed. 
We interviewed and reviewed studies and papers from research institutes and advocacy groups, such as the Brookings Institution's Brookings-Bern Project on Internal Displacement, the Norwegian Refugee Council's Internal Displacement Monitoring Centre, Refugees International, Human Rights First, and the U.S. Institute of Peace. To identify conditions that pose a challenge to reintegrating displaced Iraqis, we reviewed research papers and assessments; strategies and policy papers; program implementation, monitoring, and progress reports; and related documents and interviewed officials from the U.S. and Iraqi governments, international organizations, NGOs, and research institutes. We filtered challenges by considering factors such as their significance and the degree to which they could be generalized, and then grouped them by category. We documented evidence from multiple sources and validated it with knowledgeable U.S., UN, IOM, and NGO officials to ensure accuracy. In addition, we considered data compiled by IOM through the assessments and surveys that it has conducted of Iraqi IDPs and returnees since 2006. To determine the reliability of IOM data on conditions in Iraq, we interviewed officials from IOM, USAID, PRM, the U.S. Embassy in Baghdad, and the Brookings Institution and reviewed IOM's data collection methodology and reports. The 2009 assessments of internally displaced persons (IDP) covered more than 80 percent of the estimated total of about 270,000 IDP families; however, the results cannot be generalized to the population of all IDPs. The 2009 survey of identified returnee families was based on a sample of 4,061 of the 58,110 returnee families. The survey cannot be generalized to all returnee families because it relied on a mixture of random and judgmental sampling methods and had a low response rate. These two data sources cannot be directly compared because of their different populations, data collection methods, and sample sizes. 
We determined that, in conjunction with testimonial and documentary evidence, the IOM data are sufficiently reliable to describe the conditions that impede reintegration for those surveyed, but that the data cannot be used to make inferences to the larger IDP and returnee populations in Iraq. To identify the actions that the United States, Iraq, and the international community have taken to address these conditions, we reviewed policy, strategy, planning, and funding documents; UN funding appeals; monitoring and progress reports; and related documents and interviewed officials from the U.S. and Iraqi governments, international organizations, NGOs, and research institutes. We reviewed U.S. agency-reported amounts obligated and expended for fiscal years 2003 through 2009, as of September 30, 2009, for humanitarian assistance and development assistance. State provided us with funding data from its Abacus database and Global Financial Management System. USAID provided data from its Phoenix database. We checked the data provided against the source database printouts and discussed data reliability with agency officials. To verify our summarization of the funding and associated data, we sent draft tables to agency contributors, resolved discrepancies, and made supported changes. We found the funding data from State and USAID to be sufficiently reliable for the purposes of this report. The Army Budget Office provided the amounts obligated for fiscal years 2003 through 2009, as of September 30, 2009, for the Commander's Emergency Response Program from the Iraq Reconstruction Management System. Based on prior work and data reliability assessments, we found the Army's funding data to be sufficiently reliable for the purposes of this report. 
To determine the extent to which the United States, Iraq, and other members of the international community have an effective strategy to address the reintegration of displaced Iraqis, we reviewed policy, strategy, and planning documents from the U.S. and Iraqi governments, the UN, and IOM. We interviewed U.S. agency, Iraqi government, international organization, NGO, and research institution officials and reviewed their documents to determine issues and problems resulting from the lack of a strategy. We documented evidence from multiple sources and validated it with knowledgeable U.S., UN, IOM, and NGO officials to ensure accuracy. We conducted this performance audit from March 2009 to December 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Roles and Responsibilities of Key U.S. and Iraqi Government and International Community Entities Addressing Iraqi Displacement

The NSC's Senior Director for Multilateral Affairs and Human Rights serves as the coordinator for U.S. government efforts addressing assistance, repatriation and reintegration, and resettlement for displaced Iraqis. This position was first filled in August 2009. The Senior Coordinator, a senior Foreign Service officer stationed at the U.S. embassy in Baghdad, is responsible for coordinating U.S. government efforts in Iraq that address Iraqi displacement. The Senior Coordinator is also responsible for representing the United States in its dealings with the Iraqi government, the international community, and nongovernmental organizations (NGO) on displacement issues. 
This position was established by Public Law 110-181 § 1245 (2008) and first filled in July 2008. PRM is responsible for coordinating protection, humanitarian assistance, and resettlement for refugees and conflict victims; is the lead U.S. agency interface with international organizations and NGOs on refugee issues; funds implementing international organization and NGO partners, such as the United Nations High Commissioner for Refugees (UNHCR), for assistance to refugees and internally displaced persons (IDP); and formulates U.S. foreign policy on population issues and international migration. To protect and assist Iraqi refugees and returnees, PRM works with the NSC, United States Agency for International Development (USAID), regional bureaus, and U.S. missions to provide guidance to its international organization and NGO implementing partners and to engage with donor countries and countries hosting Iraqi refugees. The Bureau of Near Eastern Affairs advises on and develops policy for the assistance and reintegration of displaced Iraqis. The bureau participates in the NSC's interagency planning committees. OFDA funds and oversees a wide range of humanitarian assistance activities that are implemented by a number of NGO and United Nations (UN) partners who provide humanitarian assistance programs for IDPs and other vulnerable Iraqis. OFDA coordinates these relief efforts with other USAID offices, State, and governmental and nongovernmental organizations and agencies in Iraq. ME/IR funds and oversees implementing partners—primarily, private contractors, NGOs, and international organizations—that implement programs focusing on economic development and capacity building at all levels of the Iraqi government. The USAID Iraq Mission, located in Baghdad, works closely with coalition forces and other U.S. 
government agencies; international institutions, such as the UN and World Bank; Iraq’s national, provincial, and local governments; and a network of partners that include NGOs, local community groups, and Iraqi citizens to implement USAID’s development programs. MNF-I worked to improve security conditions and maintain stability for all Iraqis and provided security for U.S. and UN officials that enabled them to safely assist Iraqis. U.S. Forces-Iraq (replacing MNF-I on Jan. 1, 2010) negotiates with tribal leaders, trains Iraqi Security Forces (ISF), and assists Provincial Reconstruction Teams as they build essential services for the Iraqi people, including IDPs. DOD personnel have provided support, such as information sharing, to USAID, State, and international organizations to address displacement. DOD participates in the National Security Council’s interagency planning committees. A political advisor to the Prime Minister of Iraq was appointed by the Prime Minister to also serve as the Iraqi government’s coordinator for Iraqi refugee and IDP issues in September 2009. MODM was established as a coordinating body within the Iraqi government ministries on displacement issues. As of 2008, MODM was empowered to provide additional grants and establish centers to receive and register displaced and returning Iraqis. The Implementation and Follow-up Committee plays a lead role in promoting reconciliation between Sunnis (in particular, those that were associated with the Saddam regime) and Iraq’s Shiite majority and chairs efforts for reconciliation and reintegration in Diyala. The Iraqi Security Forces, in addition to providing general security, are also responsible for enforcing laws and government orders designed to assist displaced Iraqis, such as evicting squatters from homes owned by displaced Iraqis. UNHCR has a global mandate to lead and coordinate international action for the protection of refugees and stateless people and to find lasting solutions to their plight. 
UNHCR coordinates efforts with the Iraqi government and works to reintegrate displaced Iraqis. UNHCR chairs the UN Country Team addressing protection and co-chairs the team addressing shelter in Iraq. UNHCR provides protection, shelter, and emergency assistance to IDPs, refugees, and returnees. OCHA, on behalf of the UN Humanitarian Coordinator for Iraq, mobilizes and coordinates humanitarian action in Iraq. OCHA is responsible for information management and analysis, advocacy and public information, resource mobilization and management, disaster preparedness and response, and protection. OCHA works in partnership with UN agencies, international organizations, and NGOs. The consolidated appeal process for Iraq was led by OCHA. Through the Inter-Agency Information and Analysis Unit (IAU), OCHA collates and analyzes data on the humanitarian situation to create and disseminate information products, such as maps, charts and graphs, reports and assessments, and contact information, and maintains the OCHA and IAU Web sites to share these products. IOM is an intergovernmental organization that works on migration issues worldwide. IOM, in partnership with other international organizations and the Iraqi government, conducts a wide range of activities in Iraq, such as building capacity in certain Iraqi ministries, monitoring and providing emergency assistance to IDPs and other vulnerable groups, and assisting efforts to redress property rights. IOM is also a member of the UN Country Team. ICRC is an international organization that works to ensure humanitarian protection and assistance to victims of war and other situations of violence worldwide. ICRC has a permanent mandate founded under international law to take impartial action for persons affected by conflict. ICRC provides relief assistance to IDPs and other vulnerable groups inside Iraq. 
It also has assessed the detention and treatment conditions of detainees, provided medical supplies to hospitals, and rehabilitated existing water and sanitation infrastructure, among other things. International and national NGOs conduct significant efforts in Iraq for the benefit of IDPs, returnees, and all vulnerable Iraqis. A number of Iraqi government ministries are relevant to MODM’s efforts, such as the Ministries of Finance, Planning and Development Cooperation, Trade, Health, Education, Interior, and Defense, among others. The UN Country Team, which includes UNHCR, OCHA, and IOM, works to coordinate UN efforts and to provide assistance in myriad areas in Iraq that may directly or indirectly address Iraqi displacement. Other UN Country Team members include the Economic and Social Commission for Western Asia; Food and Agriculture Organization; International Labour Organization; Office of the UN High Commissioner for Human Rights; UN Development Programme; UN Environment Programme; UN Educational, Scientific and Cultural Organization; UN Population Fund; UN Centre for Human Settlements; UN Children’s Fund; UN Industrial Development Organization; UN Development Fund for Women; UN Office for Project Services; the World Food Program; and the World Health Organization. IOM and the World Bank are affiliated bodies of the UN Country Team, not UN organizations.

Appendix III: U.S. Funds Obligated and Expended for Iraq-Related Humanitarian Assistance Projects, and Intended Beneficiaries, Fiscal Years 2003-2009, as of September 30, 2009

State/Population, Refugees, and Migration Bureau (PRM)
USAID/Office of U.S. Foreign Disaster Assistance (OFDA)
In region: Government of Jordan to meet the needs of Iraqi refugees and host country population
In Iraq: IDPs and vulnerable populations may include other Iraqis at risk; Iraqis who have returned from other countries; refugees in Iraq from other countries, such as Palestinians; and other conflict victims. 
Iraqi refugees in the region do not allow donors to provide assistance solely to Iraqi refugees. In some cases, a portion of the funds was contributed to international organizations that may have spent the funds in one or a number of the countries hosting Iraqis in the region. Host countries receiving assistance include Syria, Jordan, Lebanon, Egypt, Turkey, and Iran. According to USAID, in 2003, USAID’s Food for Peace Program received $191.1 million, which was reallocated from funds originally appropriated in P.L. 108-7 to Development Assistance, Economic Support Fund, Child Survival and Health, and International Disaster and Famine Assistance accounts. The U.S. Emergency Refugee and Migration Assistance fund is drawn upon by the President to meet unexpected urgent refugee and migration needs whenever the President determines that it is in the U.S. national interest to do so. Funds are appropriated annually to this fund and remain available until expended. In fiscal years 2003 through 2009, the U.S. government, through USAID’s Middle East Bureau’s Office of Iraq Reconstruction (ME/IR), obligated about $6.4 billion and expended about $5.6 billion for development assistance projects in Iraq (see table 6). The USAID Iraq Mission, located in Baghdad, worked with USAID’s partners to implement these projects (see table 7). The intended beneficiaries of these activities included local Iraqi NGOs, local and regional government entities, provincial directorates, local courts, universities, local media outlets, the Independent Higher Electoral Commission, community action groups, victims of coalition operations, and ministries at the national and provincial levels. In addition, Audrey Solis, Assistant Director; Martin De Alteriis; Farhanaz Kermalli; Gilbert Kim; Heather Latta; Kathleen Monahan; and Mary Moutsos made key contributions to this report. 
Additional assistance was provided by Todd Anderson, Gergana Danailova-Trainor, Karen Deans, Timothy DiNapoli, Walker Fullerton, Cheron Green, Emily Gupta, Bruce Kutnick, Charlotte Moore, Christopher Mulkins, Diahanna Post, and Gwyneth Woolwine.

The estimated number of Iraqis who have been internally displaced since February 2006 is about 1.6 million, and numerous Iraqis are in neighboring countries. Tens of thousands of Iraqi families have returned home and the number is slowly increasing. GAO examined (1) conditions in Iraq that pose a challenge to the reintegration of displaced Iraqis, (2) actions the international community is taking to address these conditions and reintegration, and (3) the extent to which the international community has an effective reintegration strategy. GAO analyzed reports and data, met with officials from the U.S. and Iraqi governments and international and nongovernmental organizations, and did fieldwork in Geneva and Baghdad. Several issues impede the return and reintegration of displaced Iraqis. Although the overall security situation in Iraq has improved since 2006, the actual and perceived threat across governorates and neighborhoods continues to impede Iraqi returns and reintegration. Problems in securing property restitution or compensation and shelter have made it difficult to return and reintegrate. The International Organization for Migration (IOM) reported that 43 percent of the internally displaced that it surveyed did not have access to their homes, primarily because their property was occupied or destroyed. IOM also reported that one-third of the heads of returnee families it assessed were unemployed. Iraq continues to lack adequate access to essential services--that is, food, water, sanitation, electricity, health services, and education. Moreover, insufficient government capacity and commitment cross over each of the problem areas and serve as a deterrent to returns and reintegration. 
The international community has taken action to address the impediments that displaced Iraqis face, but the extent to which these efforts will result in reintegration of displaced Iraqis is uncertain. International and nongovernmental organizations, supported by U.S. and other donor funding, have initiated projects. However, the extent to which these projects specifically target and affect reintegration is not consistently measured. The Iraqi government has initiated efforts to encourage returns and reintegration. However, progress in this area has been limited due to insufficient commitment and capacity, according to international and U.S. officials. Iraq, the United States, and other members of the international community do not have an integrated international strategy for the reintegration of displaced Iraqis. The international community lacks integrated plans because Iraqi Ministry of Displacement and Migration planning efforts stalled due to limitations of authority, capacity, and broader Iraqi government support, according to U.S. and international officials; the United Nations’ (UN) strategy and plans have not specifically focused on reintegration; and an unclassified version of the current U.S. government strategy has not been made publicly available. This situation has hindered efforts to efficiently assess the needs of internally displaced Iraqis and returnees. Moreover, the international community has not yet reached an agreement on goals and expected outcomes for reintegration. Also, the UN has not integrated data on returnee needs from the UN High Commissioner for Refugees (UNHCR) into its new Inter-Agency Information and Analysis Unit (IAU), which was established to provide a central point for collecting and assessing data, and UNHCR is not taking advantage of IAU resources and coordination efforts. Furthermore, it is difficult for stakeholders to effectively delineate roles and responsibilities and establish coordination and oversight mechanisms. 
One area with significant potential for inefficiencies is in the establishment and operation of numerous assistance centers and mobile units across Iraq by various entities to assist returnees, the internally displaced, and other vulnerable Iraqis. GAO recommends that (1) the Secretary of State (State) and U.S. Agency for International Development (USAID) Administrator assist Iraq in developing an effective integrated international strategy for reintegrating displaced Iraqis; (2) State and USAID make publicly available an unclassified version of the current U.S. strategy; (3) State encourage UNHCR to share primary data collected and take advantage of the IAU efforts; and (4) State and USAID work with UNHCR and others to inventory and assess the operations of the various assistance centers to determine and achieve an optimal framework. The Department of State and USAID concurred with our recommendations.
While the IGs are designed to focus primarily on exposing fraud, waste, and abuse in individual federal agency programs, GAO’s broad audit authority allows us to support Congress through strategic analyses of issues that cut across multiple federal agencies and sources of funding. Although the IGs report to the heads of their respective departments and make periodic reports to Congress, GAO reports directly to Congress on a continuous basis. GAO consults regularly with its oversight committees and relevant committees of jurisdiction regarding key issues of national importance, such as U.S. fiscal solvency, emergency preparedness, DOD transformation, global competitiveness, and emerging health care and other challenges for the 21st century. The Congress established the GAO in 1921 to investigate all matters relating to the receipt, disbursement, and application of public funds. Since then, Congress has expanded GAO’s statutory authorities and frequently calls upon it to examine federal programs and their performance, conduct financial and management audits, perform policy analysis, provide legal opinions, adjudicate bid protests, and conduct investigations. In 2006, the GAO issued more than 1,000 audit products and produced a $105 return for each dollar invested in the agency. GAO has developed substantial expertise on security and reconstruction issues, as well as having long-term relationships with State, Defense, and USAID. Our work spans several decades and includes evaluations of U.S. military and diplomatic programs and activities, including those during and following contingency operations in Vietnam, the Persian Gulf (Operations Desert Shield and Storm), Bosnia, and Afghanistan. We also have many years of expertise in evaluating U.S. efforts to help stabilize regions or countries; we have, for example, monitored U.S. assistance programs in Asia, Central America, and Africa. 
The depth and breadth of our work and the expertise we have built have helped facilitate our ability to quickly gather facts and provide insights to the Congress as events unfold, such as the conflict in Iraq. Our current work draws on our past work and regular site visits to Iraq and the surrounding region, such as Jordan and Kuwait. Furthermore, we plan to establish a presence in Iraq beginning in March 2007 to provide additional oversight of issues deemed important to Congress. Our plans, however, are subject to adequate fiscal 2007 funding of GAO by the Congress. Our work in Iraq spans the three prongs of the U.S. national strategy in Iraq—security, political, and economic. The broad, cross-cutting nature of our work helps minimize the possibility of overlap and duplication by individual IGs. We and other accountability organizations take steps to coordinate our oversight with others to avoid duplication and leverage our resources. In that regard, the ability of the Special Inspector General for Iraq Reconstruction (SIGIR) to provide in-country oversight of specific projects and reconstruction challenges has enabled us to focus our work on more strategic and cross-cutting national, sector, and interagency issues. The expansion of SIGIR’s authority underscores the need for close coordination. We coordinate our work in Iraq through various forums, including the Iraq Inspectors General Council (IIGC) and regular discussions with the IG community. Established by what is now SIGIR, IIGC provides a forum for discussion and collaboration among the IGs and staff at the many agencies involved in Iraq reconstruction activities. Our work is coordinated through regular one-on-one meetings with SIGIR, DOD, State, and USAID. We also coordinate our work with other accountability organizations, such as the Federal Bureau of Investigation’s (FBI) public corruption unit. 
Let me highlight some of the key findings and recommendations we have made as a result of our continuing work in Iraq. In November 2005, the National Security Council issued the National Strategy for Victory in Iraq (NSVI) to clarify the President’s strategy for achieving U.S. political, security, and economic goals in Iraq. The U.S. goals included establishing a peaceful, stable, and secure Iraq. Our July 2006 report assessed the extent to which the NSVI and its supporting documents addressed the six characteristics of an effective national strategy. While we reported that the NSVI was an improvement over previous U.S. planning efforts for stabilizing and rebuilding Iraq, we concluded that the strategy fell short in at least three key areas. First, it only partially identified the agencies responsible for implementing key aspects of the strategy. Second, it did not fully address how the United States will integrate its goals with those of the Iraqis and the international community, and it did not detail Iraq’s anticipated contribution to its future needs. Third, it only partially identified the current and future costs of U.S. involvement in Iraq, including maintaining U.S. military operations, building Iraqi government capacity, and rebuilding critical infrastructure. We recommended that the NSC improve the current strategy by articulating clear roles and responsibilities, specifying future contributions, and identifying current costs and future resources. In addition, our report urged the United States, Iraq, and the international community to (1) enhance support capabilities of the Iraqi security forces, (2) improve the capabilities of the national and provincial governments, and (3) develop a comprehensive anti-corruption strategy. In our view, congressional review of the President’s 2007 plan for Iraq should consider whether it addresses the key elements of a sound national strategy identified in our July 2006 report. 
In October 2005, we issued a classified report on the military’s campaign plan for Iraq. In that report, we discussed the military’s counterinsurgency plan for Iraq and the conditions and phases in the plan. The report contained a recommendation to link economic, governance, and security indicators to conditions for stabilizing Iraq. Congress acted on our recommendation in the 2006 National Defense Authorization Act and required DOD to report on progress toward meeting the conditions referred to in GAO’s report. We have supplemented this work with a series of classified briefings to the Congress on changes to the campaign plan and U.S. efforts to train and equip Iraqi security forces and protect weapons caches throughout Iraq. We will continue to provide Congress these classified briefings. Since 2001, Congress has appropriated about $495 billion to U.S. agencies for military and diplomatic efforts in support of the global war on terrorism (GWOT); the majority of this amount has gone to stabilize and rebuild Iraq. Efforts in Iraq involve various activities such as combating insurgents, conducting civil affairs, building capacity, reconstructing infrastructure, and training Iraqi military forces. To date, the United States has reported substantial costs for Iraq and can expect to incur significant costs in the foreseeable future, requiring decision-makers to consider difficult trade-offs as the nation faces an increasing number of long-range fiscal challenges. Funding for these efforts has been provided through annual appropriations, as well as supplemental appropriations that are outside the annual budget process. In our view, moving more funding into baseline budgets, particularly for DOD, would enable decision-makers to better weigh priorities and assess trade-offs. As of September 30, 2006, DOD had reported costs of about $257.5 billion for military operations in Iraq. 
In addition, as of October 2006, about $29 billion had been obligated for Iraqi reconstruction and stabilization efforts. However, problems with the processes for recording and reporting GWOT costs raise concerns that these data may not accurately reflect the true dollar value of war-related costs. U.S. military and diplomatic commitments in Iraq will continue for the foreseeable future and are likely to involve hundreds of billions of additional dollars. The magnitude of future costs will depend on several direct and indirect variables and, in some cases, decisions that have not been made. DOD’s future costs will likely be affected by the pace and duration of operations, the types of facilities needed to support troops overseas, redeployment plans, and the amount of military equipment to be repaired or replaced. Although reducing the number of troops would appear to lower costs, we have seen from previous operations in the Balkans and Kosovo that costs could rise—if, for example, increased numbers of contractors replace military personnel. With activities likely to continue into the foreseeable future, decision-makers will have to carefully weigh priorities and make difficult decisions when budgeting for future costs. Over the years, we have made a series of recommendations to the Secretary of Defense intended to improve the transparency and reliability of DOD’s GWOT obligation data, including recommendations that DOD (1) revise the cost-reporting guidance so that large amounts of reported obligations are not shown in “miscellaneous” categories, and (2) take steps to ensure that reported GWOT obligations are reliable. We also have recommended that DOD build more funding into the baseline budget once an operation reaches a known level of effort and costs are more predictable. In response, the department has implemented many of our previous recommendations. 
Overall security conditions in Iraq continued to deteriorate in 2006 and have grown more complex despite recent progress in transferring security responsibilities to Iraqi security forces and the Iraqi government. The number of trained and equipped Iraqi security forces has increased from about 174,000 in July 2005 to about 323,000 in December 2006, at the same time as more Iraqi army units have taken the lead for counterinsurgency operations in specific geographic areas. Despite this progress, attacks on coalition forces, Iraqi security forces, and civilians have all increased, reaching record highs in October 2006. Because of the poor security in Iraq, the United States could not draw down U.S. force levels in Iraq as planned in 2004 and 2006, and U.S. forces have continued to conduct combat operations in urban areas, especially Baghdad. Transferring security responsibilities to the Iraqi security forces and provincial governments is a critical part of the U.S. government’s strategy in Iraq and key to allowing a drawdown of U.S. forces. Since 2003, the United States has provided about $15.4 billion to train, equip, and sustain Iraqi security forces and law enforcement. However, it is unclear whether U.S. expenditures and efforts are having their intended effect in developing capable forces and whether additional resources are needed. A key measure of the capabilities of Iraqi forces is the Transition Readiness Assessment (TRA) reports prepared by coalition advisors embedded in Iraqi units. These reports serve as the basis for the Multinational Force-Iraq (MNF-I) determination of when a unit is capable of leading counterinsurgency operations and can assume security responsibilities for a specific area. The TRA reports provide the coalition commander’s professional judgment on an Iraqi unit’s capabilities and are based on ratings in personnel, command and control, equipment, sustainment and logistics, training, and leadership. 
To conduct future work on this issue, GAO has made multiple requests for full access to the unit-level TRA reports over the last year. However, DOD has not yet complied with our requests. This serves to seriously and inappropriately limit congressional oversight over the progress achieved toward a critical U.S. objective. Since 2003, the United States has provided about $15.4 billion for Iraqi security forces and law enforcement. According to Multinational Security Transition Command-Iraq (MNSTC-I) records, MNF-I has issued about 480,000 weapons, 30,000 vehicles, and 1.65 million pieces of gear (uniforms, body armor, helmets, and footwear), among other items, to the Iraqi security forces as of October 2006. Congress funded the train-and-equip program for Iraq outside traditional security assistance programs, which, according to DOD officials, provided DOD with a large degree of flexibility in managing the program. Since the funding did not go through traditional security assistance programs, the accountability requirements normally applicable to these programs did not necessarily apply, according to DOD officials. It is currently unclear what accountability measures, if any, DOD has chosen to apply to the train-and-equip program for Iraq, as DOD officials have expressed differing opinions on this matter. As part of our ongoing work, we have asked DOD to clarify what accountability measures it has chosen to apply to the program. While it is unclear which regulations DOD has chosen to apply, beginning in early 2004, MNF-I established requirements to control and account for equipment provided to the Iraqi security forces by issuing orders that outlined procedures for its subordinate commands. These included obtaining signed records for equipment received by Iraqi units or individuals and recording weapons serial numbers. 
Although MNF-I took initial steps to establish property accountability procedures, limitations such as the initial lack of a fully operational equipment distribution network, staffing weaknesses, and the operational demands of equipping the Iraqi forces during war hindered its ability to fully execute critical tasks outlined in the property accountability orders. Since late 2005, MNSTC-I has taken additional steps to improve its property accountability procedures, including establishing property books for equipment issued to Iraqi Ministry of Defense and Ministry of Interior forces. According to MNSTC-I officials, MNSTC-I also recovered existing documentation for equipment previously issued to Iraqi forces. However, according to our preliminary analysis, DOD and MNF-I may not be able to account for Iraqi security forces’ receipt of about 90,000 rifles and about 80,000 pistols that were reported as issued before early October 2005. Thus, DOD and MNF-I may be unable to ensure that Iraqi military forces and police received all of the equipment that the coalition procured or obtained for them. In our ongoing review, we will continue to assess MNF-I records for Iraqi equipment distributed to Iraqi forces. We plan on issuing a final report on these and related intelligence matters by March 2007. Our work focuses on the accountability requirements for the transportation and distribution of U.S.-funded equipment and did not review any requirements relevant to the procurement of this equipment. The U.S. government faces significant challenges in improving the capabilities of Iraq’s central and provincial governments so that they can provide security and deliver services to the Iraqi people. According to State, the Iraqi capacity for self-governance was decimated after nearly 30 years of autocratic rule. In addition, Iraq lacked competent existing Iraqi governmental organizations. 
Since 2003, the United States has provided the Iraqis with a variety of training and technical assistance to improve their capacity to govern. As of December 2006, we identified more than 50 capacity development efforts led by at least six U.S. agencies. However, it is unclear how these efforts are addressing core needs and Iraqi priorities in the absence of an integrated U.S. plan. Iraq also faces difficulties in spending budgeted funds for capital goods and projects in the security, oil, and electricity sectors. When the Iraqi government assumed control over its finances in 2004, it became responsible for determining how more than $25 billion annually in government revenues would be collected and spent to rebuild the country and operate the government. However, unclear budgeting and procurement rules have affected Iraq’s efforts to spend capital budgets effectively and efficiently. Since most of the U.S. reconstruction funds provided between fiscal years 2003 and 2006 have been obligated, unexpended Iraqi funds represent an important source of additional financing. Iraq had more than $6 billion in unspent capital project funds as of August 2006. For example, Iraq’s Oil Ministry spent only $4 million of $3.6 billion in budgeted funds to repair Iraq’s dilapidated oil infrastructure. The inability to spend this money raises serious questions for the government, which has to demonstrate to citizens who are skeptical that it can improve basic services and make a difference in their daily lives. The U.S. government has launched a series of initiatives in conjunction with other donors to address this issue and improve ministry budget execution. Since September 11, 2001, U.S. military forces have experienced a high pace of operations to support homeland security missions, Operation Enduring Freedom in Afghanistan, and various combat and counterinsurgency operations in Iraq. 
These operations have required many units and personnel to deploy for multiple tours of duty and, in some cases, to remain for extended tours. DOD faces significant challenges in maintaining readiness for overseas and homeland missions and sustaining rotational deployments, especially if the duration and intensity of current operations continue at the present pace. Ongoing military operations in Iraq are inflicting heavy wear and tear on military equipment. Some equipment items used by U.S. forces are more than 20 years old, and harsh combat and environmental conditions over time have further exacerbated equipment condition problems. The Army and the Marine Corps have initiated programs to reset (repair or replace) equipment and are likely to incur large expenditures in the future. We are currently assessing these programs, including the extent to which the military services are tracking reset costs and the extent to which their reset plans maintain unit equipment readiness while meeting ongoing operational requirements. U.S. ground forces in Iraq have come under frequent and deadly attacks from insurgents using weapons such as improvised explosive devices (IED), mortars, and rocket launchers. IEDs, in particular, have emerged as the number one threat against U.S. forces. Because of the overwhelming size and number of conventional munitions storage sites in Iraq, combined with prewar planning assumptions that proved to be invalid, there were an insufficient number of U.S. and coalition troops on the ground to prevent the widespread looting of those sites. The human, strategic, and financial costs of the failure to provide sufficient troops on the ground have been high, since IEDs made from looted explosives have caused about half of all U.S. combat fatalities and casualties in Iraq and have killed hundreds of Iraqis. 
In addition, unsecured conventional munitions sites have helped sustain insurgent groups and threatened the achievement of Operation Iraqi Freedom’s (OIF) strategic goal of creating a stable Iraqi nation. DOD’s actions to date have primarily focused on countering IEDs and not on the security of conventional munitions storage sites as a strategic planning and priority-setting consideration for future operations. Although good first steps, these actions do not address what we believe is a critical OIF lesson learned: If not secured during initial combat operations, an adversary’s conventional munitions storage sites can represent an asymmetric threat to U.S. forces that remain in country. In December 2006, we recommended that the Chairman of the Joint Staff conduct a theaterwide survey and risk assessment regarding unsecured conventional munitions in Iraq and incorporate conventional munitions storage site security as a strategic planning factor into all levels of planning policy and guidance. DOD partially concurred with our recommendations. Efforts to protect U.S. ground forces with increased body and truck armor have been characterized by shortages and delays, which have reduced operational capabilities and forced combat commanders to accept additional risk in completing their missions. We are currently reviewing force protection measures, including body armor, for current operations, as well as the organization and management of the Joint IED Defeat Organization to counter the IED threat. In prior reports, we recommended that the process for identifying and funding urgent wartime requirements be improved and that funding decisions be based on risk and an assessment of the highest priority requirements. More recently, we have recommended actions to ensure that the services make informed and coordinated decisions about materiel solutions developed and procured to address common urgent wartime requirements. DOD generally agreed with these recommendations. 
DOD has relied extensively on contractors to undertake major reconstruction projects and provide logistical support to its troops in Iraq. Despite significant investments through reconstruction and logistics support contracts, DOD has not always achieved the desired outcomes. Many reconstruction projects have fallen short of expectations, and DOD has yet to resolve long-standing challenges in its management and oversight of contractors in deployed locations. These challenges often reflect shortcomings in DOD's capacity to manage contractor efforts, including having sufficiently focused leadership, guidance, a match between requirements and resources, sound acquisition approaches, and an adequate number of trained contracting and oversight personnel. The challenges encountered in Iraq are emblematic of the systemic issues that DOD faces. In fact, GAO designated DOD's contract management activities as a high-risk area more than a decade ago and has reported on DOD's long-standing problems with its management and oversight of support contractors since 1997. For example, because information on the number of contractor employees and the services they provide is not aggregated within DOD or its components, DOD cannot develop a complete picture of the extent to which it relies on contractors to support its operations. DOD recently established an office to address contractor support issues, but the office's specific roles and responsibilities are still being defined. In assessing acquisition outcomes government-wide over many years, we have applied a framework of sound acquisition practices that recognizes that a prerequisite to having good outcomes is to match well-defined requirements and available resources. Shifts in priorities and funding invariably have a cascading effect on individual contracts.
Further, to produce desired outcomes with available funding and within required time frames, DOD and its contractors need to clearly understand DOD's objectives and needs and how they translate into the contract's terms and conditions; they need to know the goods or services required, the level of performance or quality desired, the schedule, and the cost. When such requirements were not clear, DOD often entered into contract arrangements that posed additional risks. Managing risks when requirements are in flux requires effective oversight, but DOD lacked the capacity to provide sufficient numbers of contracting, logistics, and other personnel, thereby hindering oversight efforts. With a considerable amount of DOD's planned construction work remaining and the need for continued logistical support for deployed forces, it is essential to improve DOD's capacity to manage its contractors if the department is to increase the return on its investment.

GAO's value to the Congress and the American people rests on its ability to demonstrate professional, independent, objective, relevant, and reliable work. To achieve this outcome, we set high standards for ourselves in the conduct of our work. Our core values of accountability, integrity, and reliability describe the nature of our work and, most importantly, the character of our people. In all matters, GAO takes a professional, objective, and nonpartisan approach to its work. GAO's quality assurance framework is designed to ensure adherence to these principles. The framework is designed around people, processes, and technology and applies to all GAO work conducted under generally accepted government auditing standards. GAO has a multidisciplinary staff of approximately 3,200 accountants, health experts, engineers, lawyers, national security specialists, environmental specialists, economists, historians, social scientists, actuaries, and statisticians.
GAO leverages this knowledge by staffing engagements with teams proficient in a number of areas. For example, engagement teams comprise a mix of staff supported by experts in technical disciplines, such as data collection and survey methods, statistics, econometric modeling, information technology, and the law. To add additional value and mitigate risk, GAO has a forensic audits and special investigations team to expose government fraud, waste, and abuse. A key process in our quality assurance framework is providing responsible officials of audited agencies with the opportunity to review and comment on our draft reports. This policy is one of the most effective ways to ensure that a report is fair, complete, and constructive. In April 2005, an international peer review team gave our quality assurance system a clean opinion, only the second time a national audit institution has received such a rating from a multinational team. Thus, the Congress and the American people can have confidence that GAO's work is independent, objective, and reliable. The team, under the auspices of the Global Working Group of national audit institutions, examined all aspects of GAO's quality assurance framework. The team found several global "better practices" at GAO that go beyond what is required by government auditing standards. These practices included GAO's strategic planning process, which ensures that GAO focuses on the most significant issues facing the country, serious management challenges, and the programs most at risk. The team identified other noteworthy practices, including GAO's audit risk assessment process, which determines the level of product review and executive involvement throughout the audit engagement, and GAO's agency protocols, which provide clearly defined and transparent policies and practices on how GAO will interact with audited agencies.
GAO’s use of experts and specialists to provide multidisciplinary audit teams with advice and assistance on methodological and technical issues—vastly expanding GAO’s capacity to apply innovative approaches to the analysis of complex situations. As an organization in constant pursuit of improvement, we benefited from the peer reviewers’ recognition of our quality control procedures as global “better practices” as well as their suggestions on how to strengthen guidance and streamline procedures. Our work highlights the critical challenges that the United States and its allies face in the ongoing struggle to help the Iraqis stabilize, secure, and rebuild their country. Forthright answers to the oversight questions we posed in our report of January 9, 2007, are needed from the U.S. agencies responsible for executing the President’s strategy. Congress and the American people need complete and transparent information on the progress made toward achieving U.S. security, economic, and diplomatic goals in Iraq to reasonably judge our past efforts and determine future directions. For future work, GAO will continue to provide this committee and Congress with independent analysis and evaluations and coordinate our efforts with the accountability community to ensure appropriate oversight of federal programs and spending. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members may have at this time. For questions regarding this testimony, please call Joseph A. Christoff at (202) 512-8979. Other key contributors to this statement were Nanette Barton, Donna Byers, David Bruno, Dan Cain, Lynn Cothern, Tim DiNapoli, Mike Ferren, Rich Geiger, Tom Gosling, Whitney Havens, Lisa Helmer, Patrick Hickey, Henry L. Hinton Jr., John Hutton, Steve Lord, Judy McCloskey, Tet Miyabara, Mary Moutsos, Ken Patton, Sharon Pickup, Jason Pogacnik, Jim Reynolds, Donna Rogers, and William Solis. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The FCS concept is designed to be part of the Army's Future Force, which is intended to transform the Army into a more rapidly deployable and responsive force, one that differs substantially from the large division-centric structure of the past. The Army is reorganizing its current forces into modular brigade combat teams, each of which is expected to be highly survivable and the most lethal brigade-sized unit the Army has ever fielded. The Army expects FCS-equipped brigade combat teams to provide significant warfighting capabilities to DOD's overall joint military operations. Fundamentally, the FCS concept is to replace mass with superior information, that is, to see and hit the enemy first rather than to rely on heavy armor to withstand a hit. This solution attempts to address a mismatch that has posed a dilemma to the Army for decades: its heavy forces had the firepower needed to win but required extensive support and too much time to deploy, while its light forces could deploy rapidly but lacked firepower. If the Future Force becomes a reality, the Army would be better organized, staffed, equipped, and trained for prompt and sustained land combat, qualities intended to ensure that the Army would dominate over evolving, sophisticated threats. The Future Force is to be offensively oriented and will employ revolutionary concepts of operations, enabled by new technology. The Army envisions a new way of fighting that depends on networking the force, which involves linking people, platforms, weapons, and sensors seamlessly together in a system-of-systems. In 2006, Congress mandated that the Secretary of Defense conduct a milestone review for the FCS program, following the preliminary design review scheduled for early 2009.
Congress stated that the review should include an assessment of (1) whether the requirements are valid and can best be met with the FCS program, (2) whether the FCS program can be developed and produced within existing resources, and (3) whether the program should continue as currently structured, be restructured, or be terminated. Congress also required the Secretary of Defense to review specific aspects of the program, including the maturity of critical technologies, program risks, demonstrations of the FCS concept and software, and a new cost estimate and affordability assessment, and to submit a report of the findings and conclusions of the review to Congress. Congressional defense committees have asked GAO on numerous occasions to report and testify on FCS activities. This statement is based on work that was conducted between March 2006 and March 2007 in accordance with generally accepted government auditing standards. In our March 2007 report, we found that despite the investment of $8 billion already made in the FCS program, it still has significantly less knowledge, and less assurance of success, than required by best practices or DOD policy. By early 2009, enough knowledge should be available about the key elements of the FCS business case to make a well-informed decision on whether and how to proceed with the program. If significant doubts remain regarding the program's executability, DOD will have to consider alternatives to proceeding with the program as planned. Central to the go/no-go decision will be the demonstrable soundness of the FCS business case in the areas of requirements, technology, acquisition strategy, and finances. Our specific findings in each of these areas are summarized below. The Army has made considerable progress in defining system-of-systems level requirements and allocating those requirements to the individual FCS systems.
This progress has necessitated significant trade-offs to reconcile requirements and technical feasibility. A key example has been the decision to allow a significant increase in manned ground vehicle weight to meet survivability requirements, which in turn has forced trade-offs in transportability requirements. The feasibility of FCS requirements still depends on key assumptions about immature technologies, costs, and other performance characteristics, such as the reliability of the network and other systems. As current assumptions in these areas are replaced with demonstrated performance, more trade-offs are likely. At this point, the Army has identified about 70 high-level risks to be resolved to assure the technical feasibility of requirements. A challenge for the Army in making these trades, which are practical necessities, is determining the cumulative effect of an individual decision on overall requirements. For example, a decision to discontinue a munition technology could result in less lethality; possibly less survivability, if vehicles have to shoot more than once to defeat an enemy; and less responsiveness, due to the weight added by carrying more ammunition and fuel. As it proceeds to the preliminary design review and the subsequent go/no-go milestone, the Army faces considerable challenges in completing the definition of technically achievable and affordable system-level requirements, an essential element of a sound business case. The Army will have to complete definition of all system-level requirements and the network as well as the preliminary designs for all systems and subsystems. By the time of the review, it should be able to demonstrate that the FCS will satisfy key performance parameters and the Army's user community with a program that is as good as or better than what is available with current forces.
To do this, the Army will have to mitigate FCS technical risks to significantly lower levels and make demonstrable progress toward meeting key FCS goals, including weight reduction, reliability improvement, and average unit production cost reduction. The Army has made progress in the areas of critical technologies, complementary programs, and software development, but it will take several more years to reach the level of maturity that, under best practices, should have been attained when the program began in 2003. Program officials report that the number of critical technologies they consider mature has doubled in the past year. While this is good progress by any measure, FCS technologies are far less mature at this point in the program than they should be, and they still have a long way to go to reach full maturity. The Army sees the need to reach, by 2011, only a technology readiness level that requires demonstration of capabilities in a relevant environment. This does not assure that these capabilities will actually perform as needed in a realistic environment, as required by best practices for a sound business case. We also note that last year's technology maturity levels were the result of an independent assessment, while the current levels have been determined by the FCS program office. The Army has made some difficult decisions to improve the acquisition strategies for some key complementary programs, such as the Joint Tactical Radio System and the Warfighter Information Network-Tactical, but these programs still face significant technological and funding hurdles. Finally, the Army and the lead systems integrator (LSI) are attempting to use many software-development best practices and have delivered the initial increments of software on schedule. On the other hand, most of the software development effort lies ahead, and the amount of software code to be written, already an unprecedented undertaking, continues to grow as the demands of the FCS design become better understood.
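The maturity gap described above can be illustrated with a minimal sketch of a technology readiness level (TRL) gate check. Under DOD's standard TRL scale, demonstration in a relevant environment corresponds to TRL 6 and demonstration in a realistic (operational) environment to TRL 7; the technology names and levels in the example are hypothetical, not taken from the FCS assessment.

```python
# Sketch of a technology readiness level (TRL) gate check.
# TRL 6 = demonstration in a relevant environment (the Army's 2011 goal);
# TRL 7 = demonstration in a realistic/operational environment (the level
# best practices call for). Names and values below are hypothetical.

def immature_technologies(tech_trls, required_trl=7):
    """Return the technologies that fall short of the required TRL."""
    return sorted(name for name, trl in tech_trls.items() if trl < required_trl)

critical_techs = {                    # hypothetical critical technologies
    "active protection system": 5,
    "lightweight armor": 6,
    "network radio waveform": 7,
}

# Against the best-practice bar (TRL 7), two technologies are immature;
# against the relevant-environment bar (TRL 6), only one is.
print(immature_technologies(critical_techs))
print(immature_technologies(critical_techs, required_trl=6))
```

The point of the sketch is that the pass/fail verdict depends entirely on which bar is applied, which is why the choice between "relevant" and "realistic" environments matters for the business case.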
The Army and the LSI have recognized several high-risk aspects of that effort, and mitigation efforts are under way. As it approaches the preliminary design review and the subsequent go/no-go milestone review, the Army should have made additional progress in developing technologies and software as well as in aligning the development of complementary programs with the FCS. The Army faces many challenges, such as demonstrating that critical technologies are mature and having this maturity independently validated. The Army will need to mitigate the recognized technical risks and integrate the technologies with other systems. It will also need to address cost, schedule, and performance risks related to software and mitigate those risks to acceptable levels. Finally, the Army must settle on the set of complementary programs that are essential for FCS success, ensure adequate funding for these systems, and align their schedules with the FCS schedule.

The FCS acquisition strategy and testing schedule have become more complex as plans have been made to spin out capabilities to current Army forces. The strategy acquires knowledge later than called for by best practices and DOD policy, although the elongated schedule of about 10 years provides a more realistic assessment of when capabilities can be delivered. Knowledge deficits for requirements and technologies have created enormous challenges for devising an acquisition strategy that can demonstrate the maturity of design and production processes. Even if setting requirements and maturing technologies proceed without incident, FCS design and production maturity are not likely to be demonstrated until after the production decision is made. The critical design review will be held much later on FCS than on other programs, and the Army will not be building production-representative prototypes to test before production. The first major test of the network and FCS together with a majority of prototypes will not take place until 2012.
Much of the testing up to the 2013 production decision will involve simulations, technology demonstrations, experiments, and single-system testing. Only after that point will substantial testing of the complete brigade combat team and the FCS concept of operations occur; yet production is the most expensive phase in which to resolve design or other problems found during testing. Spin-outs, which are intended to accelerate delivery of FCS capabilities to the current force, also complicate the acquisition strategy by absorbing considerable testing resources. As the Army proceeds to the preliminary design review in 2009, it faces a number of key challenges in the remaining portions of the acquisition strategy. It must complete requirements definition and technology maturation. The spin-out capabilities must be demonstrated before committing to production. System integration must be completed, and the Army should be prepared to have released at least 90 percent of the engineering drawings by the time of the critical design review, a best practice. Finally, the program schedule must allocate sufficient time, as needed, to test, fix, and retest throughout the FCS test program. Each FCS system, the information network, and the FCS concept should be thoroughly tested and demonstrated before committing to low-rate initial production in 2013.

In 2006, we reported that FCS program acquisition costs had increased to $160.7 billion, 76 percent above the Army's original estimate of $91.4 billion (figures adjusted for inflation). While the Army's current estimate of $163.7 billion is essentially the same, an independent estimate from the Office of the Secretary of Defense (OSD) puts the acquisition cost of FCS between $203 billion and $234 billion. The comparatively low level of technology and design knowledge at this point in the program portends future cost increases.
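The cost-growth percentage cited above follows directly from the two estimates in the text; the arithmetic can be checked with a few lines:

```python
# Check the reported FCS acquisition cost growth against the figures
# cited in the statement (inflation-adjusted, in billions of dollars).
original_estimate = 91.4          # Army's original estimate
estimate_2006 = 160.7             # estimate reported in 2006

growth_pct = (estimate_2006 - original_estimate) / original_estimate * 100
print(round(growth_pct))          # -> 76, matching the reported 76 percent

# The independent OSD estimate of $203-$234 billion implies growth of
# roughly 122 to 156 percent over the original estimate.
osd_low, osd_high = 203.0, 234.0
print(round((osd_low - original_estimate) / original_estimate * 100))
print(round((osd_high - original_estimate) / original_estimate * 100))
```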
Our work on a broad base of DOD weapon system programs shows that most developmental cost increases occur after the critical design review, which will be in 2011 for the FCS. Yet, by that point in time, the Army will have spent about 80 percent of the FCS’s development funds. Further, the Army has not yet fully estimated the cost of essential complementary programs and the procurement of spin-out items to the current force. The Army is cognizant of these resource tensions and has adopted measures in an attempt to control FCS costs. However, some of these measures do involve reducing program scope in the form of lower requirements and capabilities, which will have to be reassessed against the user’s demands. Symptomatic of the continuing resource tension, the Army recently announced that it was restructuring several aspects of the FCS program, including reducing the scope of the program and its planned annual production rates to lower annual funding demands. I do want to point out the significance of the financial commitments the Army will make in the next few years. The fiscal year 2008 request includes $99.6 million in FCS procurement funds. Those funds are to procure long lead items for production of (1) non-line-of-sight cannon and other manned ground vehicles, and (2) the initial set of FCS spin-out kits. The fiscal year 2008 request will also fund plant facilitization to support FCS production beginning in fiscal year 2009. Procurement funds rise quickly thereafter, growing from $328.6 million to $1.27 billion to $6.8 billion in fiscal years 2009, 2011, and 2013, respectively. By the time of the preliminary design review and the congressionally mandated go/no-go milestone in 2009, the Army should have more of the knowledge needed to build a better cost estimate for the FCS program. 
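The procurement ramp described above can be laid out numerically. Only the fiscal years and amounts cited in the statement are included; intervening years are omitted because the statement does not give them.

```python
# FCS procurement requests cited in the statement, in millions of dollars.
# Only the fiscal years given in the testimony are included; intervening
# years are omitted because the statement does not provide them.
procurement_musd = {2008: 99.6, 2009: 328.6, 2011: 1270.0, 2013: 6800.0}

# Growth factor between each pair of cited years.
years = sorted(procurement_musd)
for prev, nxt in zip(years, years[1:]):
    factor = procurement_musd[nxt] / procurement_musd[prev]
    print(f"FY{prev} -> FY{nxt}: {factor:.1f}x")
```

Between each pair of cited years the request grows three- to five-fold, which is the rapid near-term ramp the statement highlights ahead of the 2009 go/no-go decision.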
The Army should also have more clarity about the level of funding that may be available to it within the long-term budget projections to fully develop and procure the FCS program of record. Also, by that time, the Army will need to have developed an official Army cost position that reconciles the gap between the Army's estimates and the independent cost estimate. In the cost estimate, the Army should clearly establish whether it includes the complete set and quantities of FCS equipment needed to meet established requirements. Based on this estimate, the Army must ensure that adequate funding exists in its current budget and future years to fully fund the FCS program of record, including the development of the complementary systems deemed necessary for the FCS, as well as to procure the FCS capabilities planned to be spun out to the current forces. In our March 2007 report, we noted that it was important that specific criteria, as quantifiable as possible and consistent with best practices, be established now to evaluate the sufficiency of program knowledge. We recommended specific criteria that should be included in the Secretary of Defense's evaluation of the FCS program as part of the go/no-go decision following the preliminary design review in 2009. DOD agreed with this recommendation and noted that the decision will be informed by a number of critical assessments and analyses, but it did not specify criteria. We agree that it is necessary that good information, such as that included in DOD's response, be presented at the decision, but it is also necessary that quantitative criteria reflecting best practices be used to evaluate that information. We also noted that in view of the great technical challenges facing the program, the possibility that FCS may not deliver the right capability must be acknowledged and anticipated.
We therefore recommended that the Secretary of Defense analyze alternative courses of action DOD can take to provide the Army with sufficient capabilities, should the FCS be judged unlikely to deliver needed capabilities in reasonable time frames and within expected funding levels. DOD agreed with this recommendation as well, stating that it would rely on ongoing analyses of alternatives. We believe it is important to keep in mind that it is not necessary to find a rival solution to FCS, but rather the next best solution should the program be judged unable to deliver needed capabilities. The Army recently made a number of key changes to FCS to keep program costs within available funding levels. Core program development and production costs were reduced by deleting or deferring four of the original systems, but these savings were offset by adding funding for spin-outs and ammunition, which had previously not been funded. The program's cost estimate reflecting the adjustment is now $161.2 billion, a slight decrease from the $163.7 billion that we previously reported. Highlights include the following:

Four systems deleted or deferred: the Class II and Class III unmanned aerial vehicles, the intelligent munitions system, and the armed robotic vehicle. The munitions system will continue outside of FCS, while the robotic vehicle will continue in the science and technology environment.

Quantity changes: Class I unmanned aerial vehicle quantities will be cut in half. Quantities of non-line-of-sight launch systems and precision attack missiles were also reduced. The Army will buy eight additional Class IV unmanned aerial vehicles for each brigade combat team.

Production rate reduction: Annual FCS production will be reduced from 1.5 to 1 brigade combat team. This change will extend FCS production by about 5 years, to 2030.

Consolidation of spin-outs: Spin-outs will be reduced from four to three, and the content of the spin-outs has changed.
The Army has now funded procurement of the spin-outs, which had previously been unfunded.

Schedule extension: Initial FCS production has been delayed 5 months, to February 2013, and the initial and full operational capability dates have been delayed 6 months, to June 2015 and June 2017, respectively.

According to Army officials, the Army's initial assessment found little difference between 14 and 18 systems in the capabilities of the FCS brigade combat team. When the program was approved in 2003, it also had 14 systems. In 2004, when it was restructured, 4 systems were added back in, bringing the total to 18, plus the network. It is not clear how the overall performance of the system can be insensitive to these changes in the composition of the FCS systems. Similarly, we do not yet understand why FCS production costs have not increased as a result of the lower production rates and the consequent additional years of production. Generally, slowing down the production rate increases costs, as the fixed costs of production facilities must be incurred for more years.

To achieve the Army's goals for the FCS program, in 2003 the Army decided to employ a lead systems integrator (LSI) to assist in defining, developing, and integrating FCS. In the past few years, DOD and other agencies have applied the LSI concept in a variety of ways. In the case of the FCS program, the LSI shares program management responsibilities with the Army, including defining the FCS solution (refining requirements), selecting and managing subcontractors, and managing testing. Evaluating the use of the LSI on FCS involves consideration of several intertwined factors, which collectively make the LSI arrangement in the FCS context unique. Some factors, like the best-efforts nature of a cost-reimbursable research and development contract, are not unique to the LSI or to FCS. Other factors differ not so much in nature but in degree from other programs.
For example, FCS is not the first system-of-systems program DOD has proposed, but it is arguably the most complex. FCS is not the first program to proceed with immature technologies, but it has more immature technologies than other programs. FCS is not the first program to employ an LSI, but the extent of the partner-like relationship between the Army and the LSI breaks new ground. The Army's decision to employ a lead systems integrator for the FCS program was framed by two factors: (1) the ambitious goals of the FCS program and (2) the Army's capacity to manage it. As envisioned in 2003 when the program started, FCS presented a daunting technical and management challenge: the concurrent development of multiple weapon systems whose capabilities would depend on an information network also to be developed. All of this was to take place in about 5½ years, much faster than a single weapon system typically takes. Army leaders believed the Army did not have the workforce or flexibility to manage development of FCS on its own within desired timelines. The Army saw its limitations in meeting this challenge as (1) cultural: difficulty in crossing traditional organizational lines; (2) capability: a shortage of skills in key areas, such as managing the development of a large information network; and (3) capacity: insufficient resources to staff, manage, and synchronize several separate programs. In addition to the complexity and workforce implications of FCS, the Army saw in an LSI an opportunity to create more incentives for a contractor to give its best effort in development and to create more competition at lower supplier levels. Thus, the Army employed a contractor, a lead systems integrator, with significant program management responsibilities to help it define and develop FCS and reach across traditional Army mission areas. In May 2003, the Army hired the Boeing Corporation to serve as the LSI for the FCS system development and demonstration phase.
Boeing subcontracted with Science Applications International Corporation, another defense contractor, to assist in performing the LSI functions. The relationship between the Army and the LSI is complicated. On the one hand, the LSI plays the traditional role of developing a product for its customer, the Army; on the other hand, the LSI acts like a partner to the Army in ensuring the design, development, and prototype implementation of the FCS network and family of systems. In forging a partner-like relationship with the LSI, the Army sought to gain managerial advantages such as maintaining flexibility to deal with shifting priorities. A partner-like relationship also poses long-term risks for the government. Depending on the closeness of the working relationship, the government can become increasingly vested in the results of shared decisions and thus less able to provide oversight than in an arms-length relationship, especially when the government is disadvantaged in terms of workforce and skills. In the case of FCS, these risks are present. The Army is more involved in the selection of subcontractors than we have seen on other programs, involvement that can, over time, make the Army somewhat responsible for the LSI’s subcontracting network. For its part, the LSI is more involved in influencing the requirements, defining the solution, and testing that solution than we have seen on other programs. This is not to say that the level of involvement or collaboration between the Army and the LSI is inherently improper, but that it may have unintended consequences over the long term. OSD is in a position to provide this oversight, but thus far has largely accepted the program and its changes as defined by the Army, even when they are at wide variance from the best practices embodied in OSD’s own acquisition policies.
In 2003, OSD approved the FCS for system development and demonstration prematurely despite the program’s combination of immature technologies and short schedule and then declined to follow through on plans to make a better informed decision 18 months later. OSD has allowed the Army to use its cost estimates rather than OSD’s own independent—and significantly higher—cost estimates and has agreed with the Army’s determination that the bulk of cost increases since 2003 are the result of scope changes and thus do not trigger congressional reporting requirements. In the fiscal year 2007 National Defense Authorization Act, Congress mandated that DOD hold a formal go/no-go decision meeting on the FCS in 2009. DOD has since proposed a serious approach to making that decision, a step that is encouraging from an oversight perspective. The Army has structured the FCS contract consistent with its desire to incentivize development efforts and make it financially rewarding for the LSI to make such efforts. In that regard, the FCS contract pays well. According to an independent estimate from the Office of the Secretary of Defense, the fee payable to the LSI is relatively high based on the value of work it actually performs, and its average employee assigned to the program costs more than a federal executive. The business arrangement between the Army and LSI has been converted from an other transaction agreement to a Federal Acquisition Regulation-based contract. Yet, there remain substantive risks on whether the contract can result in a successful program outcome. As with many cost-reimbursable research and development contracts, the contractor is responsible for putting forth its best effort to ensure a successful FCS. However, if that system fails to meet expectations or requirements despite that effort, the LSI is not responsible. 
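The milestone-fee dynamic at issue here can be sketched with a few lines of arithmetic. Everything below is hypothetical: the event years and fee shares are invented for illustration, and the actual FCS contract terms are not reproduced in this statement. The point is simply how a front-loaded milestone schedule can leave most of the fee earned before the riskiest demonstrations occur.

```python
# Illustrative sketch of milestone-based fee accrual.
# All event years and fee shares below are hypothetical, not actual contract data.

def cumulative_fee(milestones, cutoff_year):
    """Return the fraction of total fee earned at or before cutoff_year."""
    total = sum(fee for _, fee in milestones)
    earned = sum(fee for year, fee in milestones if year <= cutoff_year)
    return earned / total

# Nine hypothetical program events with notional fee shares (arbitrary units).
events = [
    (2004, 10), (2005, 10), (2006, 10), (2007, 10), (2008, 10),
    (2009, 10), (2010, 10), (2011, 12),   # through critical design review
    (2013, 18),                           # post-CDR demonstrations
]

share_by_cdr = cumulative_fee(events, 2011)
print(f"Fee earned by critical design review: {share_by_cdr:.0%}")
```

Under these invented numbers, 82 percent of the fee accrues by the design review, echoing the pattern GAO describes: the contractor's earnings are largely settled before the capability demonstrations that historically drive cost growth.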
The Army provides incentive payments through nine program events called out in the current contract, for which the LSI must demonstrate progress in setting up and implementing various program processes. By the time the FCS critical design review is completed in 2011, the Army will have paid out over 80 percent of the costs of the LSI contract and the LSI will have had the opportunity to earn more than 80 percent of its total fee. While the Army rationally notes that it is important to use fees to encourage good performance early, the experiences of previous weapon systems show that most cost growth occurs after the critical design review. Key demonstrations of the actual capabilities of FCS systems will take place after this point. The Army shares responsibility with the LSI for making key decisions, and to some extent the Army’s performance affects the performance of the LSI. For example, some of the technologies critical to the FCS are being developed by the Army, not the LSI. If the technologies do not perform as planned, the LSI may not be responsible for the consequent trade-offs in performance. Furthermore, the Army is responsible for all program changes and therefore can adjust its expectations of the LSI according to those changes, and the LSI may still earn its full fee. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or members of the subcommittee may have. For future questions about this statement, please contact me at (202) 512-4841 or [email protected]. Individuals making key contributions to this statement include William R. Graveline, William C. Allbritton, Noah B. Bleicher, Lily J. Chin, Brendan S. Culley, Marcus C. Ferguson, Michael D. O’Neill, Kenneth E. Patton, Thomas P. Twambly, Adam Vodraska, and Carrie R. Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Army's Future Combat System (FCS) is a program characterized by bold goals and innovative concepts--transformational capabilities, system-of-systems approach, new technologies, a first-of-a-kind information network, and a total investment cost of more than $200 billion. As such, the FCS program is considered high risk and in need of special oversight and review. Today's testimony is based on work conducted over the past year in response to (1) the National Defense Authorization Act for Fiscal Year 2006, which requires GAO to report annually on the FCS acquisition; and (2) the John Warner National Defense Authorization Act for Fiscal Year 2007, which requires GAO to report on the role of the lead systems integrator in the Army's FCS program. Accordingly, this statement discusses (1) the business case for FCS to be successful and (2) the business arrangements for the FCS program. The Army has far less knowledge about FCS and its potential for success than is needed to fulfill the basic elements of a business case. Those elements are not new to the Army, nor to the Department of Defense (DOD), which addresses such criteria in its weapon system acquisition policy. The Army has made improvements to the program, such as lengthening time frames for demonstrating capabilities and for providing capabilities to current forces. Despite this progress, what the Army still lacks in knowledge raises doubts about the soundness of the FCS business case.
The Army has yet to fully define FCS requirements; FCS technologies that should have been matured in 2003, when the program started, are still immature; key testing to demonstrate FCS performance will not be completed and maturity of design and product will not be demonstrated until after production starts in 2013; and an independent cost estimate from the Office of the Secretary of Defense is between $203 billion and $234 billion, a far higher figure than the Army's cost estimate. To achieve its goals for the FCS program, the Army decided to employ a lead systems integrator (LSI) to assist in defining, developing, and integrating the FCS. This decision reflected the fact that not only were FCS goals ambitious, but also that the Army had limited capacity to manage the undertaking. Boeing Corporation is the LSI. Its relationship with the Army on FCS breaks new ground for collaboration between the government and a contractor. The close working relationship has advantages and disadvantages. An advantage is that such a relationship allows flexibility in responding to shifting priorities. A disadvantage is an increase in risks to the Army's ability to provide oversight over the long term. The contract itself is structured in such a way as to enable the LSI to be paid over 80 percent of its costs and fees by completion of the critical design review in 2011--a point after which programs typically experience most of their cost growth. This is consistent with the Army's desire to provide incentives for the development effort. On the other hand, this contract, as with many cost-reimbursable research and development contracts, makes the contractor responsible for providing its best efforts, but does not assure a successful FCS. The foregoing underscores the important role of the Office of the Secretary of Defense in providing oversight on the FCS program. 
To date, the Office of the Secretary of Defense has largely accepted the Army's approach to FCS, even though it runs counter to DOD's policy for weapon system acquisition. GAO believes the Office of the Secretary of Defense needs to hold the FCS program accountable to high standards at the congressionally directed decision in 2009 on whether to proceed with FCS. Financial commitments to production will grow rapidly after that point. The Office of the Secretary of Defense should also be mindful of the department-wide implications of the future use of LSIs as well as the system-of-systems approach to developing weapon acquisitions. |
The electricity industry, as shown in figure 1, is composed of four distinct functions: generation, transmission, distribution, and system operations. Once electricity is generated—whether by burning fossil fuels; through nuclear fission; or by harnessing wind, solar, geothermal, or hydro energy—it is generally sent through high-voltage, high-capacity transmission lines to local electricity distributors. Once there, electricity is transformed into a lower voltage and sent through local distribution lines for consumption by industrial plants, businesses, and residential consumers. Because electric energy is generated and consumed almost instantaneously, the operation of an electric power system requires that a system operator constantly balance the generation and consumption of power. Utilities and others own and operate electricity assets, which may include generation plants, transmission lines, distribution lines, and substations—structures often seen in residential and commercial areas that contain technical equipment such as switches and transformers to ensure smooth, safe flow of current and regulate voltage. Utilities may be owned by investors, municipalities, and individuals (as in cooperative utilities). System operators—sometimes affiliated with a particular utility or sometimes independent and responsible for multiple utility areas—manage the electricity flows. These system operators manage and control the generation, transmission, and distribution of electric power using control systems—IT- and network-based systems that monitor and control sensitive processes and physical functions, including opening and closing circuit breakers. As we have previously reported, the effective functioning of the electricity industry is highly dependent on these control systems.
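The balancing requirement described above can be illustrated with a toy control loop. This is a deliberately simplified sketch, not a power-systems model: the frequency-sensitivity and correction-gain constants are invented, and real grids use layered governor and automatic-generation-control schemes. It shows only the core idea that any mismatch between generation and load shows up immediately as a frequency deviation that controls must correct.

```python
# Toy sketch of grid balancing: an imbalance between generation and load
# shifts frequency away from 60 Hz; the operator/governor response trims
# generation until frequency returns to nominal. Constants are illustrative.

NOMINAL_HZ = 60.0
SENSITIVITY = 0.01   # Hz deviation per MW of imbalance (hypothetical)
GAIN = 0.5           # fraction of the imbalance corrected each step (hypothetical)

def simulate(load_mw, generation_mw, steps=20):
    freq = NOMINAL_HZ
    for _ in range(steps):
        imbalance = generation_mw - load_mw          # MW surplus (+) or deficit (-)
        freq = NOMINAL_HZ + SENSITIVITY * imbalance  # frequency drifts with imbalance
        generation_mw -= GAIN * imbalance            # corrective response
    return freq, generation_mw

freq, gen = simulate(load_mw=1000.0, generation_mw=1050.0)
print(f"frequency {freq:.3f} Hz, generation {gen:.1f} MW")
```

Starting 50 MW over-generated, each iteration halves the surplus, so frequency converges back to 60 Hz as generation settles at the load level.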
Nevertheless, for many years, aspects of the electricity network lacked (1) technologies—such as sensors—to allow system operators to monitor how much electricity was flowing on distribution lines, (2) communications networks to further integrate parts of the electricity grid with control centers, and (3) computerized control devices to automate system management and recovery. As the electricity industry has matured and technology has advanced, utilities have begun taking steps to update the electricity grid—the transmission and distribution systems—by integrating new technologies and additional IT systems and networks. Though utilities have regularly taken such steps in the past, industry and government stakeholders have begun to articulate a broader, more integrated vision for transforming the electricity grid into one that is more reliable and efficient; facilitates alternative forms of generation, including renewable energy; and gives consumers real-time information about fluctuating energy costs. This vision—the smart grid—would increase the use of IT systems and networks and two-way communication to automate actions that system operators formerly had to take manually. Electricity grid modernization is an ongoing process, and initiatives have commonly involved installing advanced metering infrastructure (smart meters) on homes and commercial buildings that enable two-way communication between the utility and customer. Other initiatives include adding “smart” components to provide the system operator with more detailed data on the conditions of the transmission and distribution systems and better tools to observe the overall condition of the grid (referred to as “wide-area situational awareness”). These include advanced, smart switches on the distribution system to reroute electricity around a troubled line and high-resolution, time-synchronized monitors—called phasor measurement units—on the transmission system.
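The wide-area situational awareness role of phasor measurement units can be sketched as follows. This is an illustrative toy, assuming time-synchronized voltage-angle samples from two substations and an invented alarm threshold; real synchrophasor analytics are far more involved. The key idea is that because PMU samples share a common time reference, angles measured hundreds of miles apart can be compared directly, and a widening angle difference indicates a heavily stressed transmission corridor.

```python
# Toy sketch of one wide-area situational awareness check: compare
# time-synchronized voltage phase angles from two substations and flag a
# large spread. The threshold and readings are invented for illustration.

def angle_spread(angle_a_deg, angle_b_deg):
    """Smallest absolute difference between two phase angles, in degrees."""
    diff = (angle_a_deg - angle_b_deg) % 360.0
    return min(diff, 360.0 - diff)

STRESS_THRESHOLD_DEG = 30.0  # hypothetical alarm level

readings = [("Bus A", 12.0), ("Bus B", -24.5)]  # synchronized PMU samples (degrees)
spread = angle_spread(readings[0][1], readings[1][1])
if spread > STRESS_THRESHOLD_DEG:
    print(f"ALERT: angle spread {spread:.1f} deg exceeds threshold")
```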
The use of smart grid systems may have a number of benefits, including improved reliability with fewer and shorter outages, downward pressure on electricity rates resulting from the ability to shift peak demand, an improved ability to more efficiently use alternative sources of energy, and an improved ability to detect and respond to potential attacks on the grid. Both the federal government and state governments have authority for overseeing the electricity industry. For example, the Federal Energy Regulatory Commission (FERC) regulates rates for wholesale electricity sales and transmission of electricity in interstate commerce. This includes approving whether to allow utilities to recover the costs of investments they make to the transmission system, such as some smart grid investments. Meanwhile, local distribution and retail sales of electricity are generally subject to regulation by state public utility commissions. State and federal authorities also play key roles in overseeing the reliability of the electric grid. State regulators generally have authority to oversee the reliability of the local distribution system. The North American Electric Reliability Corporation (NERC) is the federally designated U.S. Electric Reliability Organization, and is overseen by FERC. NERC has responsibility for conducting reliability assessments and developing and enforcing mandatory standards to ensure the reliability of the bulk power system—i.e., facilities and control systems necessary for operating the transmission network and certain generation facilities needed for reliability. NERC develops reliability standards collaboratively through a deliberative process involving utilities and others in the industry, which are then sent to FERC for approval. These standards include critical infrastructure protection standards for protecting electric utility-critical and cyber-critical assets. 
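Among the benefits noted above, shifting peak demand can be quantified with a toy example. The load profile and the amount of flexible demand below are invented for illustration; the mechanics are simply that moving deferrable consumption (for example, water heating or vehicle charging) out of the peak period lowers the maximum demand the system must be built to serve, which is what exerts downward pressure on rates.

```python
# Toy peak-shaving example: move a block of flexible load from the peak
# period to an off-peak period and compare the resulting system peak.
# All load figures are hypothetical.

def shift_flexible_load(hourly_load, flexible, peak_hour, offpeak_hour):
    """Return a new load profile with `flexible` MW moved between periods."""
    shifted = list(hourly_load)
    shifted[peak_hour] -= flexible
    shifted[offpeak_hour] += flexible
    return shifted

load = [700, 650, 640, 700, 900, 1100, 1000, 800]  # MW over eight periods
new_load = shift_flexible_load(load, flexible=150, peak_hour=5, offpeak_hour=2)
print(f"peak before: {max(load)} MW, after: {max(new_load)} MW")
```

Total energy consumed is unchanged; only its timing moves, yet the system peak drops, which is the benefit demand-shifting programs target.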
FERC has responsibility for reviewing and approving the reliability standards or directing NERC to modify them. In addition, the Energy Independence and Security Act of 2007 established federal policy to support the modernization of the electricity grid and required actions by a number of federal agencies, including the National Institute of Standards and Technology (NIST), FERC, and the Department of Energy. With regard to cybersecurity, the act required NIST and FERC to take the following actions: NIST was to coordinate development of a framework that includes protocols and model standards for information management to achieve interoperability of smart grid devices and systems. As part of its efforts to accomplish this, NIST identified cybersecurity standards for these systems and the need to develop guidelines for organizations such as electric companies on how to securely implement smart grid systems. In January 2011, we reported that NIST had identified 11 standards involving cybersecurity that support smart grid interoperability and had issued the first version of a cybersecurity guideline. In February 2012, NIST issued the 2.0 version of the framework that, according to NIST documents, added 22 standards, specifications, and guidelines to the 75 standards NIST recommended as being applicable to the smart grid in the 1.0 version from January 2010. In September 2014, NIST issued the first revision of the cybersecurity guidelines. FERC was to adopt standards resulting from NIST’s efforts that it deemed necessary to ensure smart grid functionality and interoperability. However, according to FERC officials, the statute did not provide specific additional authority to allow FERC to require utilities or manufacturers of smart grid technologies to follow these standards. 
As a result, any standards identified and developed through the NIST-led process are voluntary unless regulators use other authorities to indirectly compel utilities and manufacturers to follow them. Like threats affecting other critical infrastructures, threats to the electricity industry and its transmission and distribution systems are evolving and growing and can come from a wide array of sources. Risks to cyber-based assets can originate from unintentional or intentional threats. Unintentional threats can be caused by, among other things, natural disasters, defective computer or network equipment, software coding errors, and careless or poorly trained employees. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled insiders, foreign nations engaged in espionage and information warfare, and terrorists. These adversaries vary in terms of their capabilities, willingness to act, and motives, which can include seeking monetary gain or pursuing a political, economic, or military advantage. For example, adversaries possessing sophisticated levels of expertise and significant resources to pursue their objectives—sometimes referred to as “advanced persistent threats”—pose increasing risks. They make use of various techniques—or exploits—that may adversely affect federal information, computers, software, networks, and operations, such as a denial of service, which prevents or impairs the authorized use of networks, systems, or applications. The potential impact of these threats is amplified by the connections between industrial control systems, supervisory control and data acquisition (or SCADA) systems, information systems, the Internet, and other infrastructures, which create opportunities for attackers to disrupt critical services, including electrical power.
The increased reliance on IT systems and networks also exposes the electric grid to potential and known cybersecurity vulnerabilities. These include an increased number of entry points and paths that can be exploited; the introduction of new, unknown vulnerabilities resulting from an increased use of new system and network technologies; wider access to systems and networks due to increased connectivity; and an increased amount of customer information being collected and transmitted, which creates a tempting target for potential attackers. We and others have also reported that smart grid and related systems have known cyber vulnerabilities. For example, cybersecurity experts have demonstrated that certain smart meters can be successfully attacked, possibly resulting in disruption to the electricity grid. In addition, we have reported that control systems used in industrial settings such as electricity generation have vulnerabilities that could result in serious damage and disruption if exploited. Further, in 2007, the Department of Homeland Security, in cooperation with the Department of Energy, ran a test that demonstrated that a vulnerability commonly referred to as “Aurora” had the potential to allow unauthorized users to remotely control, misuse, and cause damage to a small commercial electric generator. Moreover, in 2008, the Central Intelligence Agency reported that malicious activities against IT systems and networks have caused disruption of electric power capabilities in multiple regions overseas, including a case that resulted in a multicity power outage.
In January 2014, the Director of National Intelligence testified that industrial control systems and SCADA systems used in electrical power distribution and other industries provided an enticing target to malicious actors and that, although newer architectures provide flexibility, functionality, and resilience, large segments remain vulnerable to attack, which might cause significant economic or human impact. Further, in 2015 the Director testified that studies asserted that foreign cyber actors were developing means to access industrial control systems remotely, including those that manage critical infrastructures such as electric power grids. As government, private sector, and personal activities continue to move to networked operations, the threat will continue to grow. Cyber incidents continue to affect the electric industry. For example, the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team noted that the number of reported cyber incidents affecting control systems of companies in the electricity subsector increased from 3 in 2009 to 25 in 2011. The response team reported that the energy sector, which includes the electricity subsector, led all others in fiscal year 2014 with 79 reported incidents. Reported incidents affecting the electricity subsector have had a variety of impacts, including hacks into smart meters to steal power, failures in control system devices requiring power plants to be shut down, and malicious software disabling safety monitoring systems. As we have previously reported, multiple entities have taken steps to help secure the electricity grid, including NERC, NIST, FERC, and the Departments of Homeland Security and Energy. For example, NERC developed critical infrastructure standards for protecting electric utility-critical and cyber-critical assets.
These standards established requirements for key cybersecurity-related controls: the identification of critical cyber assets, security management, personnel and training, electronic “security perimeters,” physical security of critical cyber assets, systems security management, incident reporting and response planning, and recovery plans for critical cyber assets. In December 2011, we reported that NERC’s cybersecurity standards, along with supplementary guidance, were substantially similar to NIST guidance applicable at the time to federal agencies. NERC had also published security guidelines for companies to consider for protecting electric infrastructure systems, although these guidelines were voluntary and typically not checked for compliance. For example, some of this guidance was intended to assist entities in identifying and developing a list of critical cyber assets. As of October 2015, NERC listed about 30 critical infrastructure protection standards for cybersecurity, some of which were subject to enforcement, some of which were subject to future enforcement, and some of which were pending regulatory filing or approval. NERC also enforced compliance with mandatory cybersecurity standards through its Compliance Monitoring and Enforcement Program, including assessing monetary penalties for violations. NIST, in accordance with its responsibilities under the Energy Independence and Security Act of 2007, has identified cybersecurity standards for smart grid systems. Specifically, in August 2010 NIST had identified 11 such standards and issued the first version of a cybersecurity guideline. As we reported in January 2011, NIST’s guidelines largely addressed key cybersecurity elements, with the exception of the risk of attacks using both cyber and physical means—an element essential to securing smart grid systems. We recommended that NIST finalize its plan and schedule for incorporating the missing elements into its guidelines.
In 2014, NIST issued updated guidelines, which address the relationship of smart grid cybersecurity to cyber-physical attacks and cybersecurity testing and certification. In addition, the updated guidelines describe the relationship of smart grid cybersecurity to NIST’s cybersecurity framework that was issued in February 2014. This framework, which was developed in accordance with Executive Order 13636, is to enable organizations—regardless of size, degree of cybersecurity risk, or cybersecurity sophistication—to apply the principles and best practices of risk management to improving the cybersecurity and resilience of critical infrastructure. FERC had also taken several actions, including reviewing and approving NERC’s critical infrastructure protection standards in 2008. It had also directed NERC to make changes to the standards to improve cybersecurity protections. However, in 2012 the FERC Chairman stated that many of the outstanding directives had not been incorporated into the standards. We also noted in our January 2011 report that FERC had begun reviewing smart grid standards identified by NIST, but declined to adopt them due to insufficient consensus. The Department of Homeland Security, in its capacity as the lead federal agency for cyber-critical infrastructure protection, had issued recommended practices to reduce risks to industrial control systems in critical infrastructure sectors, including the electricity subsector. The department has also provided on-site support to respond to and analyze security incidents and shared actionable intelligence, vulnerability information, and threat analysis with companies in the electricity subsector. In addition, the department, in accordance with Executive Order 13636, established a program to promote the adoption of the NIST cybersecurity framework. 
As the lead agency responsible for critical infrastructure protection efforts in the energy sector, the Department of Energy, as we reported in December 2011, was involved in efforts to assist the electricity subsector in the development, assessment, and sharing of cybersecurity standards, according to department officials. In addition, the department has created sector-specific guidance to assist the sector in implementing the NIST cybersecurity framework. The guidance includes sections that explain framework concepts for its application, identify example resources that may support framework use, provide a general approach to framework implementation, and identify an example of a tool-specific approach to implementing the framework. In our January 2011 report we identified a number of key challenges that industry and government stakeholders faced in securing the systems and networks supporting the electricity grid. Monitoring implementation of cybersecurity standards. Best practices for information security call for monitoring the extent to which security controls have been implemented. In our report, we noted that FERC had not developed an approach coordinated with other regulators to monitor, at a high level, the extent to which industry follows the voluntary smart grid standards it adopts. We recommended that FERC, in coordination with state regulators and groups that represent utilities subject to less FERC and state regulation, periodically evaluate the extent to which utilities and manufacturers are following voluntary interoperability and cybersecurity standards and develop strategies for addressing any gaps in compliance with standards that are identified as a result of this evaluation. However, FERC has not implemented this recommendation. 
While FERC reported that it has taken steps to collaborate with stakeholders, it has not taken steps to determine the extent to which the voluntary standards have been integrated into products or whether they are effective. Monitoring such efforts would help FERC and other regulators know if their approach to standards setting is effective or if changes are needed. Clarifying regulatory responsibilities. Experts we spoke with during the course of our review in 2011 expressed concern that there was a lack of clarity about the division of responsibility between federal and state regulators, particularly regarding cybersecurity. While jurisdictional responsibility has historically been determined by whether a technology is located on the transmission or distribution system, experts raised concerns that smart grid technology may blur these lines because, for example, devices deployed on parts of the grid traditionally subject to state jurisdiction could, in the aggregate, affect the reliability of the transmission system, which falls under federal jurisdiction. Experts also noted concern about the ability of regulatory bodies to respond quickly to evolving cybersecurity threats. Clarifying these responsibilities could help improve the effectiveness of efforts to protect smart grid technology from cyber threats. Taking a comprehensive approach to cybersecurity. To secure their systems and information, entities should adopt an integrated, organization-wide program for managing information security risk. Such an approach helps ensure that risk management decisions are aligned strategically with the organization’s mission and security controls are effectively implemented. However, as we reported in 2011, experts told us that the existing federal and state regulatory environment had created a culture within the utility industry of focusing on compliance with regulatory requirements instead of one focused on achieving comprehensive and effective cybersecurity. 
By taking such a comprehensive approach, utilities could better mitigate cybersecurity risk. Ensuring that smart grid systems have built-in security features. Information systems should be securely configured, including having the ability to record events that take place on networks to allow for detecting and analyzing potential attacks. Nonetheless, experts told us that certain currently available smart meters had not been designed with a strong security architecture and lacked important security features, such as event logging. By ensuring that smart grid systems are securely designed, utilities could enhance their ability to detect and analyze attacks, reducing the risk that attacks will succeed and helping to prevent them from recurring. Effectively sharing cybersecurity information. Information sharing is a key element in the model established by federal policy for protecting critical infrastructure. However, the electric industry lacked an effective mechanism to disclose information about cybersecurity vulnerabilities, incidents, threats, lessons learned, and best practices. For example, experts we spoke with stated that while the industry had an information-sharing center, it did not fully address these information needs. Establishing quality processes for information sharing will help provide utilities with the information needed to adequately protect cyber assets against attackers. Establishing metrics for evaluating cybersecurity. Metrics are important for comparing the effectiveness of competing cybersecurity solutions and determining what mix of solutions will make the most secure system. The electric industry, however, was challenged by a lack of cybersecurity metrics, making it difficult to determine the extent to which investments in cybersecurity improve the security of smart grid systems. Developing such metrics could provide utilities with key information for making informed and cost-effective decisions on cybersecurity investments. 
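The event-logging gap described above can be made concrete. The following is a generic sketch, not any vendor's actual meter firmware: it shows one common way to make a device's event log tamper-evident by hash-chaining entries, so that if an attacker alters or deletes a past record, verification fails. The class and event names are invented for illustration.

```python
# Generic sketch of tamper-evident event logging for an embedded device such
# as a smart meter: each entry embeds a hash of the previous entry, so any
# after-the-fact modification breaks the chain. Illustrative only.

import hashlib
import json
import time

class EventLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, event, timestamp=None):
        entry = {
            "ts": timestamp if timestamp is not None else time.time(),
            "event": event,
            "prev": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; return False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = EventLog()
log.record("cover_opened", timestamp=1)
log.record("firmware_update_attempted", timestamp=2)
print("log intact:", log.verify())
```

A log like this supports the detect-and-analyze goal the experts cite: operators can trust that the sequence of recorded events has not been quietly rewritten after a compromise.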
In our January 2011 report, we recommended that FERC, working with NERC as appropriate, assess whether any cybersecurity challenges identified in our report should be addressed in commission cybersecurity efforts. Since that time, FERC has taken the following actions. First, in 2011, it began evaluating whether cybersecurity challenges, including those identified in our report, should be addressed under the agency’s existing cybersecurity authority and efforts. As part of this effort, the commission directed NERC to revise the electricity industry’s critical infrastructure protection (CIP) standards with the aim of addressing, among other things, cybersecurity challenges identified in our report. In November 2013, NERC issued updated CIP standards to address these and other cybersecurity challenges. Second, the commission held a technical conference in 2011 at which it solicited feedback from industry stakeholders to help inform the agency’s cybersecurity efforts. Third, in September 2012, the commission established an Office of Energy Infrastructure Security, which is to, among other things, help mitigate cybersecurity threats to electricity industry facilities and improve cybersecurity information sharing.

In summary, as the electricity industry becomes increasingly reliant on computerized technologies, its systems and networks are susceptible to an evolving array of cyber-based threats. Key entities, including NERC and FERC, are critical to approving and disseminating cybersecurity guidance and standards, while NIST, DHS, and the Department of Energy have additional roles to play in providing guidance and other forms of support for protecting the sector against cyber threats. Moreover, without monitoring the implementation of voluntary cybersecurity standards in the industry, FERC does not know the extent to which such standards have been adopted or whether they are effective.
Given the increasing use of information and communications technology in the electricity subsector and the evolving nature of cyber threats, continued attention can help mitigate the risk these threats pose to the electricity grid. Chairman Weber, Chairwoman Comstock, Ranking Members Grayson and Lipinski, and Members of the Subcommittees, this concludes my prepared statement. I would be happy to answer any questions you may have at this time.

If you or your staffs have any questions about this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Other staff who contributed to this statement include Franklin J. Rusco, Director; Michael W. Gilmore; Bradley W. Becker; Kenneth A. Johnson; Jon R. Ludwigson; Lee McCracken; Jonathan Wall; and Jeffrey W. Woodward.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The electric power industry—including transmission and distribution systems—increasingly uses information and communications technology systems to automate actions with the aim of improving the electric grid's reliability and efficiency. However, these “smart grid” technologies may be vulnerable to cyber-based attacks and other threats that could disrupt the nation's electricity infrastructure. Several federal entities have responsibilities for overseeing and helping to secure the electricity grid. Because of the proliferation of cyber threats, since 2003 GAO has designated protecting the systems supporting U.S. critical infrastructure (which includes the electricity grid) as a high-risk area.
GAO was asked to provide a statement on opportunities to improve cybersecurity for the electricity grid. In preparing this statement, GAO relied on previous work on efforts to address cybersecurity of the electric sector.

GAO reported in 2011 that several entities—the North American Electric Reliability Corporation (NERC), the National Institute of Standards and Technology (NIST), the Federal Energy Regulatory Commission (FERC), the Department of Homeland Security (DHS), and the Department of Energy (DOE)—had taken steps to help secure the electric grid. These included developing cybersecurity standards and other guidance to reduce risks. While these were important efforts, GAO at that time also identified a number of challenges to securing the electricity grid against cyber threats:

Monitoring implementation of cybersecurity standards: GAO found that FERC had not developed an approach, coordinated with other regulatory entities, to monitor the extent to which the electricity industry was following voluntary smart grid standards, including cybersecurity standards.

Clarifying regulatory responsibilities: The nature of smart grid technology can blur traditional lines between the portions of the grid that are subject to federal or state regulation. In addition, regulators may be challenged in responding quickly to evolving cybersecurity threats.

Taking a comprehensive approach to cybersecurity: Entities in the electricity industry (e.g., utilities) often focused on complying with regulations rather than taking a holistic and effective approach to cybersecurity.

Ensuring that smart grid systems have built-in security features: Smart grid devices (e.g., meters) did not always have key security features such as the ability to record activity on systems or networks, which is important for detecting and analyzing attacks.
Effectively sharing cybersecurity information: The electricity industry did not have a forum for effectively sharing information on cybersecurity vulnerabilities, incidents, threats, and best practices.

Establishing cybersecurity metrics: The electricity industry lacked sufficient metrics for determining the extent to which investments in cybersecurity improved the security of smart grid systems.

Since 2011, additional efforts have been taken to improve cybersecurity in the sector. For example, in 2013, NERC issued updated standards to address these and other cybersecurity challenges. NIST also updated its smart grid cybersecurity standards in 2014. It has also developed a cybersecurity framework for critical infrastructure, and DHS and DOE have efforts under way to promote its adoption. In addition, FERC assessed whether these and other challenges should be addressed in its ongoing cybersecurity efforts. However, FERC did not coordinate with other regulators to identify strategies for monitoring compliance with voluntary cybersecurity standards in the industry, as GAO had recommended. As a result, FERC does not know the extent to which such standards have been adopted or whether they are effective. Given the increasing use of information and communications technology in the electricity subsector and the evolving nature of cyber threats, continued attention can help mitigate the risk these threats pose to the electricity grid.

In its 2011 report, GAO recommended that (1) NIST improve its cybersecurity standards, (2) FERC assess whether challenges identified by GAO should be addressed in ongoing cybersecurity efforts, and (3) FERC coordinate with other regulators to identify strategies for monitoring compliance with voluntary standards. The agencies agreed with the recommendations, but FERC has not taken steps to monitor compliance with voluntary standards.
Carbon dioxide is by far the most prevalent greenhouse gas emitted in the United States, as shown in table 1. The other principal greenhouse gases, in order of percentage of emissions in 2003, are methane, nitrous oxide, and three types of synthetic gases—hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6). In response to the May 1992 United Nations Framework Convention on Climate Change, the United States developed the Climate Change Action Plan aimed at reducing domestic greenhouse gas emissions. As a part of this plan, programs were developed during the 1990s to provide information and tools to encourage participants to voluntarily undertake changes to reduce their emissions of carbon dioxide, methane, and other greenhouse gases. The intent of programs such as Energy STAR is to help organizations improve energy efficiency, thereby helping to reduce emissions. Other programs, such as the Coalbed Methane Outreach Program, encourage emissions reductions in other greenhouse gases, such as methane.

The amount of energy used to generate each dollar of national output has declined over time. The ratio of energy used to economic output is called energy intensity. According to the Energy Information Administration (EIA), the independent statistical and analytical agency within DOE, energy intensity declined between 1990 and 2003 at an average rate of 1.8 percent per year. The rate of decline was the result of, among other things, energy efficiency improvements in industrial and transportation equipment and in commercial and residential lighting, heating, and refrigeration technologies. In early 2006, EIA projected that energy intensity would decline at an average annual rate of 1.8 percent between 2005 and 2025. The U.S. economy has also become more efficient in terms of emissions intensity.
(According to EIA, energy and emissions intensity are closely related because energy-related carbon dioxide emissions make up more than 80 percent of total U.S. greenhouse gas emissions.) U.S. emissions intensity declined between 1990 and 2003 at a rate of 1.9 percent a year. The reasons for the decline include general improvements in energy efficiency and a long-term shift toward a service economy. Other reasons include greater use of nuclear power, development of renewable resources, substitution of less emissions-intensive natural gas for coal and oil, and the use of transportation fuels with biogenic components, such as ethanol. EIA projected in early 2006 that between 2005 and 2025, emissions intensity would decline at a rate of 1.7 percent per year (see fig. 1).

The goal of the President’s 2002 initiative was to reduce the emissions intensity of the U.S. economy by 18 percent between 2002 and 2012, a reduction 4 percentage points greater than would be expected absent any new policy. In particular, according to EIA projections cited by the administration, without the initiative, emissions would increase from 1,917 million metric tons of carbon equivalent (MMTCE) in 2002 to 2,279 MMTCE in 2012. Under the initiative, emissions would increase to 2,173 MMTCE in 2012, which is 106 MMTCE less than otherwise expected. In 2002, EIA projected that U.S. emissions intensity would decline (improve) by 14 percent between 2002 and 2012 without any new policy. In 2006, EIA updated its estimate, projecting a decline in emissions intensity of 17 percent between 2002 and 2012. According to EIA, further reductions in emissions intensity are projected to result from, among other things, increasing energy prices that will tend to reduce energy consumption growth below prior estimates. Nevertheless, according to this estimate, total greenhouse gas emissions will continue to rise.
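The arithmetic behind these projections can be checked directly. The MMTCE figures below are the EIA values quoted above; the per-year rate is our own back-of-the-envelope derivation from the initiative's 18 percent goal, not a number from the report:

```python
# Check of the emissions figures cited in the text (MMTCE).
baseline_2012 = 2279    # projected 2012 emissions without the initiative
initiative_2012 = 2173  # projected 2012 emissions under the initiative
print(baseline_2012 - initiative_2012)  # 106 MMTCE less than otherwise expected

# The initiative's 18 percent intensity reduction over the 10 years
# 2002-2012 implies an average compound decline of about 2 percent
# per year, comparable to the historical rates EIA reports.
annual_decline = 1 - (1 - 0.18) ** (1 / 10)
print(round(annual_decline * 100, 1))  # about 2.0 percent per year
```

The compound-rate calculation also shows why the 18 percent goal is only modestly more ambitious than the 14 percent decline EIA projected without any new policy.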
Specifically, EIA projected in 2006 that total emissions would increase by 14.2 percent between 2002 and 2012. The President’s 2002 initiative comprised about 30 elements. In addition to challenging businesses and industry to voluntarily reduce emissions, it included tax incentives for renewable energy and conservation, transportation programs, and other efforts. Climate Leaders and Climate VISION are two of the federal government’s newest voluntary climate programs. According to a DOE official, they are the only federal programs that ask potential members for an emissions or emissions intensity reduction goal in order to participate. According to EPA, for firms that are already participating in other EPA voluntary programs, Climate Leaders can serve as a coordinating umbrella to comprehensively manage their voluntary climate change activities. According to EPA officials, all program participants agree to complete four program steps, and EPA guidelines suggest that these steps generally be completed within about 1 year, although the goal negotiation process can take as long as 2 years. The first step is to prepare a greenhouse gas emissions inventory; the second step is to prepare an inventory management plan (IMP); the third step is to enter into negotiations with EPA regarding a goal; and the fourth step is to report annually. (However, EPA does not insist that firms perform all four steps in that order.) Overall, we found that some firms were taking longer to complete these steps and that EPA has no written policy for dealing with such firms.

According to DOE officials, all program participants agree to complete two program steps: the first within about 1 year of joining the program, and the second after they have finished training their members in the use of reporting protocols (for most groups, in 2006). Overall, we found that some groups had not completed the first step within the specified time frame.
EPA has started to develop a system for tracking participants’ progress; DOE does not yet have such a system. Neither agency has written criteria detailing expected time frames for meeting expectations or the consequences of not meeting them.

First, firms complete their base-year inventories, which EPA encourages and expects them to do, on average, within 1 year of joining the program. The base-year inventory contains the data that will be used to measure firms’ progress toward their goals. As of November 2005, 61 of the 74 firms had submitted base-year inventory data to EPA. After the inventory has been submitted, the participant works with EPA to refine its inventory. Eleven of the 61 inventories had been finalized and approved by EPA. The other 50 were still in development or review. An EPA official noted that some firms did not submit inventories earlier because EPA’s reporting guidelines were not completed until April 2004. In addition, EPA officials told us that it often takes firms more than a year to prepare their base-year inventory because firms start at different levels of sophistication with respect to developing an inventory. Some firms start with no knowledge of how to develop an inventory and no infrastructure in place for doing so. Furthermore, some corporate inventories may take longer due to their complexity, including complicated corporate structures, a wide variety of emissions sources, and the lack of available emissions data. Corporate reorganizations and staff turnover also contribute to delays. An EPA official told us that the average amount of time it takes firms to complete their base-year inventory once they join the program has been 2 years, but the average amount of time firms have taken since EPA completed its reporting guidelines is 1 year. Firms have two options for having their inventories reviewed.
They can either submit their data to EPA for review, or they can choose third-party verification, in which an outside organization, such as an environmental engineering firm with greenhouse gas verification experience, reviews their data. After they have submitted base-year inventory data to EPA, firms work with EPA to refine the inventory, usually resulting in some revisions. In reporting data, firms are to follow guidance developed by EPA that is based on a standardized reporting protocol established by the World Resources Institute and the World Business Council for Sustainable Development. The protocol consists of corporate emissions accounting standards developed by representatives from industry, government, and nongovernmental organizations. Second, EPA officials told us that EPA expects all firms to prepare an IMP, which is the firm’s plan for collecting data, preparing the inventory, and managing inventory quality. EPA officials informed us that, as of November 2005, 60 of the 74 firms had submitted draft IMPs. Firms that choose to have EPA review their emissions inventories must also submit their IMP to EPA, while firms that choose to undergo third-party verification must submit a letter from the third party stating that all the specified components of the IMP checklist are in place and that at least one site visit was conducted. The IMP checklist consists of 30 components in seven major categories, including, among other things, boundary conditions (i.e., which parts of the facility will be covered under the program), emissions quantification methods, and data management processes. Nineteen of the 30 IMP components are to be in place within 1 year of joining the program and must be in place for base-year reporting to be finalized. Fifty-four of the 60 firms completing IMPs submitted their IMPs to EPA for review, while the other 6 chose to have their inventories and IMPs reviewed by third parties. 
According to EPA officials, the remaining 14 firms had not submitted a draft IMP or informed EPA of their intention to choose third-party verification, although eight of these firms joined the program within the past year and so, according to EPA officials, would not be expected to have completed these steps. EPA officials told us that these remaining firms are still working on the necessary documentation. EPA conducts at least one site visit per firm to review facility-level implementation of the IMP to determine whether there are ways to improve the plan’s accuracy, among other things. The site to be visited is mutually agreed upon; EPA aims to review the company facility with the highest overall risk to the accuracy of reported emissions. (Such a site should be a large emitter, have many of the largest emission types, and represent the firm’s most common business activity, among other criteria.) As of November 2005, EPA had conducted 25 site visits (about one-third of all firms), with 10 more visits scheduled before the end of 2005. The base-year inventory is not considered final until EPA has reviewed both it and the IMP and conducted a site visit. An EPA official told us that initial inventories generally contain about 95 percent of each member’s total emissions, so only minor and incremental revisions are needed at the on-site review stage. EPA provides up to 80 hours of technical assistance to help each firm complete its base-year inventory and develop and document its IMP. Technical assistance can include implementing greenhouse gas accounting methods as well as measuring, tracking, and reporting emissions. After the firm’s base-year inventory is complete, EPA experts continue to offer up to 10 hours annually of technical assistance during subsequent years. 
Since Climate Leaders provides technical assistance to each firm as it develops and documents its inventory and IMP, an EPA official stated that most major issues that might arise in inventory design and development are addressed informally at the technical assistance stage. However, according to EPA, some issues are identified during the site visits. In general, the site visits have identified only a few areas where EPA asked for revisions. These usually involved missing small sources of on-site emissions (such as those from propane for forklifts or on-site diesel purchases for a yard truck). EPA officials told us that most of the items they identified during the site visits were minor calculation errors or ways to improve the firm’s data quality assurance and quality control processes. They said that the majority of these areas are corrected on location during the site visit, and any others are verified by the submission of an updated IMP and greenhouse gas reporting form that describe, respectively, the changes made to the inventory process and to the reported greenhouse gas emissions in response to the findings. As noted earlier, firms choosing third-party verification instead of EPA review are to submit an independent verifier’s report stating that at least one site visit was conducted and that all the necessary components of the IMP checklist were successfully implemented. As of November 2005, six firms had chosen to have their data verified by a third party, and all of these firms had undergone their third-party verification. Three firms had submitted inventory data and initial auditor reports to EPA. EPA is awaiting letters from the other three firms indicating that all of the components of the IMP checklist are in place and that any corrective actions identified in the verification process have been addressed.
Third, EPA officials told us that the agency expects firms to enter into negotiations with EPA to set their reduction goals once their base-year inventory is finalized, generally within about 1 year after joining the program, and to complete negotiations within 1 year after that. However, we found that some firms have taken longer to do so. Thirty-eight of the 74 participating firms had set goals as of November 2005. Of the 36 firms without goals, 20 were working with EPA to develop goals. Seven of these 20 firms were still working on their base-year inventories, and 9 had joined the program within the past year and hence would not be expected to have set goals. The 36 firms without goals included 18 firms that joined the program in 2002 or 2003. Specifically, of the 35 firms that joined in 2002, the program’s first year, 22 had set goals, 9 firms were in the process of negotiating their goals with EPA, and 4 more had not begun such negotiations. Of the 16 firms that joined in 2003, 11 had set goals, 3 were in negotiation with EPA regarding goals, and 2 had not yet begun such negotiations. According to EPA officials, the 6 firms had not begun negotiations because their base-year inventories were not finalized. In describing why it may take a long time to set goals, EPA officials told us that many firms require considerable time to develop their inventories, which can be complex. Firms must also obtain internal approval of their emissions reduction goals from their senior management, and some firms lack enough resources to devote to inventory development to meet the time frame of EPA’s reporting guidelines. Other reasons also exist. For example, one firm disagreed with EPA regarding whether to report a certain type of emission in its inventory and needed to come to agreement with EPA on addressing those emissions. 
Another firm is involved in litigation that will likely affect its future emissions levels and does not want to set an emissions reduction goal until the case is resolved, while yet a third firm is facing regulation that could affect its ability to meet an aggressive reduction goal. Finally, according to EPA’s reporting guidelines, all firms agree to report to EPA annually on their emissions using EPA’s Annual Greenhouse Gas Inventory Summary and Goal Tracking Form. This form describes the firm’s emissions at a corporate level broken out by emissions type for both domestic and international sources and details progress toward the firm’s emissions reduction goal. As of November 2005, 10 of the 11 firms with finalized inventories had submitted annual data through 2004 to EPA. An EPA official told us that the other firm was currently resolving some outstanding issues and would likely submit a report in early 2006. Although all firms are expected to complete all four steps listed above, EPA officials told us that firms do not need to complete the steps in any particular order. For example, some firms may choose to finalize their base-year inventory before submitting annual reports with multiple years of data, while other firms may choose to submit annual data before the inventory is fully finalized. EPA officials told us that they had started to develop a database to track firms’ progress and are currently in the process of entering and validating the data. Although some firms are not completing the various program steps as quickly as EPA expected, the agency has not yet established a written policy for dealing with such firms. An EPA official noted that firms that voluntarily agree to participate in the program are aware of program expectations and are generally proactive in meeting them. 
EPA officials further stated that the agency has three options for dealing with firms that do not appear to be proceeding in a timely manner: (1) telephone calls from EPA or its contractor to reinvigorate the process, (2) a letter to firms urging them to act more expeditiously, or (3) removal from the program if the firm is not putting forth a good-faith effort to meet the program’s expectations. However, EPA believes that it is better for the environment to work with firms that are making a good-faith effort to implement appropriate management systems than to remove them from the program. To date, EPA has not removed any firm from the program for lack of progress, although one firm voluntarily left after realizing it did not have sufficient resources to continue participation. According to EPA officials, as of November 2005, two firms did not appear to be working toward completing their reporting duties in a timely manner, and EPA anticipated sending letters to those firms. EPA officials noted that, since Climate Leaders is a voluntary program, it is difficult for EPA to sanction firms that do not meet all of the program’s expectations in a timely manner. These officials said that, although they do not currently have a written policy on how to deal with firms that are not progressing as expected, including specific standards for time frames and consequences, they expect to begin developing such a policy in the near future. DOE has defined two program steps that it expects participating trade groups to complete: developing a work plan and reporting emissions data. According to agency officials, after establishing its goal to reduce emissions, each industry group is asked to develop a work plan following a standard template developed by DOE, generally within 1 year of joining the program. 
The template includes four items: (1) emissions measurement and reporting protocols; (2) plans to identify and implement near-term, cost-effective opportunities; (3) development of cross-sector projects for reducing greenhouse gas emissions intensity; and (4) plans to accelerate research and development and commercialization of advanced technology. However, DOE officials explained that specific elements of each industry group’s work plans are different because each industry is different. The work plans are intended to help ensure that the trade groups’ goals and activities are significant, clearly understood by the public, and aimed at producing results in a time frame specified by the group. Preparing the work plan is a collaborative process between the trade groups and program officials. Each work plan is reviewed three times by (1) a representative of the federal agency having the lead for that industry (e.g., DOE for the American Chemistry Council, and DOE and the Department of Agriculture for the American Forest & Paper Association); (2) Climate VISION program staff; and (3) a DOE contractor to ensure that the plan provides a suite of activities that will enable the group to meet its reduction goal. DOE officials told us that all work plans completed to date were subjected to at least one round of revisions before being finalized and posted to the program’s Web site. According to DOE officials, as of November 2005, 11 of the 15 trade groups had completed their work plans. Of the four groups that had not completed their work plans, two were new members, joining Climate VISION in 2005; the other two—the Association of American Railroads and the National Mining Association—were original members, joining in 2003. DOE officials said they were still working with the groups to finalize their work plans. 
They also noted that getting the trade groups to adhere to DOE’s timelines can be challenging because the groups often have to clear all their activities through their individual member companies or through their boards of directors, which can be time consuming. In addition to developing a work plan, trade groups are expected to report data on their greenhouse gas emissions. As of November 2005, 5 of the 15 groups had reported data: 2 groups reported data to DOE, and 3 groups that have been working with EPA as participants in EPA-sponsored programs reported to that agency. According to a DOE official, as the trade groups finish developing and training their members in the use of reporting protocols, they are expected to begin reporting on their emissions (for most groups, in 2006). DOE will then ask the groups to report annually. Program officials explained that, at least in one case, a group did not report earlier because, among other things, DOE was revising its interim final voluntary emissions reporting guidelines, which were released in late 2005. DOE does not specify a particular format that trade groups should use in reporting emissions data, since all industries are different and the nature of the goals differs. However, the program encourages the groups to have their individual members report using EIA’s Voluntary Reporting of Greenhouse Gases program or another appropriate reporting system, such as EPA’s. Trade groups have developed or are developing reporting protocols as part of their work plans. DOE officials told us that once they receive data from the trade groups, they would arrange for a contractor to review these data for accuracy and check them against EIA or EPA data for the reporting industry’s sector. The officials also told us they would post trade groups’ emissions reports on DOE’s Web site to provide transparency, thereby providing an incentive for groups to report accurate information.
An industry may also choose on its own to hire an independent expert to review reports for accuracy. For example, the American Chemistry Council has required third-party certification of each of its member companies’ environmental, health, and safety and security management systems, including the program under which members measure and report greenhouse gas emissions. Program officials told us that they do not have a system for tracking participants’ actions, including completing work plans, reporting, and the other steps identified in their work plans, but they said a contractor is working to establish a reporting system for 2006. The officials also said that DOE would remove trade groups from the program if they did not appear to be taking actions to complete program steps, but DOE has not yet established any deadline by which groups’ emission reports must be submitted. However, the officials stated that they are currently working on setting such a deadline. The officials said that they do not believe it will be necessary to remove groups, since the groups are very enthusiastic about the program and understand the political stakes involved. Therefore, these officials expressed confidence that the groups will meet DOE’s expectations to the best of their abilities.

EPA worked with firms to set emissions-related goals, and more than half of the firms participating in Climate Leaders have set goals for reducing their emissions or improving their emissions intensity. The firms’ goals vary in terms of the metric used, their geographic scope, and the time period covered. DOE or another federal agency conducted discussions with the industry groups on establishing their goals, and all participating groups had established a goal before joining Climate VISION. The participants’ goals varied in terms of the type of goal (emissions, emissions intensity, or energy efficiency) and the period covered by the goal (start and end dates).
Finally, many groups qualified their goals based upon their stated need for reciprocal federal actions, such as tax incentives or regulatory relief. EPA works with all firms to set goals and offers flexibility in goal-setting, since each firm has a unique set of emissions sources and reduction opportunities. First, as discussed earlier, EPA works with firms to develop inventories and IMPs to document their base-year emissions. Second, EPA creates an industry standard, or benchmark, against which to evaluate each firm’s goal. EPA uses a suite of modeling tools and statistical tables to develop the benchmark for each industry sector. The firm’s goal is evaluated against a projected emissions improvement rate for its sector; EPA expects every firm’s goal to be markedly better than the projected benchmark for the firm’s sector. EPA also checks each firm’s reported emissions data over the goal period to ensure that the firms are not reducing emissions simply by shrinking their size or by outsourcing. EPA encourages each firm to set a goal that is aggressive but that also considers company and sectoral variations. Nonetheless, each goal must be (1) entitywide (including at least all U.S. operations), (2) based on the most recent base year for which data are available, (3) achieved over 5 to 10 years, (4) expressed as an absolute emissions reduction or as a decrease in emissions intensity, and (5) aggressive compared with the projected greenhouse gas emissions performance for the firm’s industry. As of November 2005, 38 of the program’s 74 firms had set emissions or emissions intensity reductions goals. The remaining 36 firms were working with EPA to set goals. The firms’ goals vary in terms of three characteristics: (1) the metric used (absolute emissions or emissions intensity), (2) the geographic scope of the goal (reductions at U.S. or worldwide facilities), and (3) the time frame in which the reductions will occur. 
First, 19 firms pledged to reduce total emissions, while 18 pledged to reduce emissions intensity, and 1 pledged to reduce both total emissions and emissions intensity. Of the 19 companies with intensity goals, 15 measured emissions intensity in terms of their physical units of output (such as tons of cement or barrels of beer produced), while the other 4 firms measured emissions intensity in financial terms (such as dollars of revenue). In addition, EPA expects that many firms that meet their intensity goals will also achieve absolute emissions reductions. In fact, EPA projected that four of the five firms that were expected to reach their goals in 2005 would also achieve absolute emissions reductions, even though only one of them had an absolute target. Second, 29 of the 38 companies established goals relating to their U.S. or North American facilities only, while the other 9 established goals relating to their global facilities. Third, the time periods covered ranged from 5 to 10 years, and all goal periods began in 2000 or later because EPA asked firms to use the most recent data available when establishing the base year for their goal. EPA did this to prevent firms from counting reductions made prior to joining the program and to prevent them from selecting as their baseline a year in which their emissions were particularly high, hence making reductions appear steeper than they actually were, relative to average conditions. Reflecting various combinations of the three characteristics, the firms' goals are expressed in different terms.
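EPA's expectation that firms meeting intensity goals may also achieve absolute reductions follows from simple arithmetic: an intensity cut lowers absolute emissions only if output grows more slowly than the intensity falls. The sketch below illustrates this with hypothetical figures, not program data.

```python
# Hypothetical illustration (not program data): whether meeting an intensity
# goal also yields an absolute emissions reduction depends on output growth.

def absolute_change(base_emissions, intensity_cut, output_growth):
    """Percent change in absolute emissions after meeting an intensity goal.

    intensity_cut -- fractional reduction in emissions per unit of output
    output_growth -- fractional growth in output over the goal period
    """
    new_emissions = base_emissions * (1 - intensity_cut) * (1 + output_growth)
    return (new_emissions - base_emissions) / base_emissions * 100

# An 18 percent intensity cut with 10 percent output growth still lowers
# absolute emissions by about 9.8 percent...
print(round(absolute_change(100.0, 0.18, 0.10), 1))
# ...but the same cut with 25 percent output growth raises them by 2.5 percent.
print(round(absolute_change(100.0, 0.18, 0.25), 1))
```

Under this arithmetic, a firm whose output grows faster than its intensity improves will see its absolute emissions rise even while meeting its intensity goal.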
For example, Cinergy Corporation pledged to reduce its total domestic greenhouse gas emissions by 5 percent from 2000 to 2010, while Miller Brewing Company pledged to reduce its domestic greenhouse gas emissions by 18 percent per barrel of production (a unit of production intensity goal) from 2001 to 2006, and Pfizer, Inc., pledged to reduce its worldwide emissions by 35 percent per dollar of revenue (a monetary intensity goal) from 2000 to 2007. Table 2 presents information on the 38 firms’ goals. According to program officials, DOE or another federal agency, such as EPA or the U.S. Department of Agriculture (USDA), conducted discussions with the industry groups on establishing a goal upon entering the program. These officials stated that, since a key element of the program is allowing industry groups to take ownership of their goals, DOE and its partner agencies generally did not actively negotiate the goals’ specific terms. DOE officials told us that the agency remained flexible on goal setting because some groups had initiated their own internal emissions reduction programs before joining the program or had an existing arrangement with another agency, such as EPA. In addition, DOE officials believe it is important for the groups to establish goals that meet their unique circumstances. The officials told us that they compared the trade groups’ goals with projected emissions for their respective industries to gauge their robustness. DOE calculates expected conditions for many industrial sectors using EIA data, where they are available. (We did not independently review EIA’s data or DOE’s analysis of the data.) Further, DOE officials also told us that the trade groups have an interest in ensuring that their goals are credible. 
According to a DOE official, participants need not establish a new goal as a condition of joining the program, and certain trade groups had already initiated internal emissions reduction programs before joining Climate VISION or had an existing arrangement with a voluntary program at another agency, such as EPA. For example, the nine firms in the aluminum industry established a goal of reducing perfluorocarbon emissions by 30 to 60 percent from a 1990 baseline as part of EPA's Voluntary Aluminum Industrial Partnership. In 2003, as part of Climate VISION, the Aluminum Association updated this goal. Similarly, the Semiconductor Industry Association's goal was established in 1999, also in conjunction with an EPA program. The International Magnesium Association likewise participates in an EPA program but did not establish a quantitative goal for reducing emissions until it joined Climate VISION in 2003. Fourteen groups established quantitative emissions-related goals. More specifically, nine pledged to take actions to improve their emissions intensity. For example, the American Forest & Paper Association stated that it expected to reduce emissions intensity by 12 percent between 2002 and 2012. Another two groups aimed to reduce emissions of specific greenhouse gases. For example, the Semiconductor Industry Association pledged to support efforts to reduce PFC emissions by 10 percent below 1995 levels by 2010. Two more groups established a goal for improving energy efficiency. For example, the American Iron and Steel Institute agreed to a 10 percent, sectorwide increase in energy efficiency by 2012, relative to 2002. Finally, one industry—the National Mining Association—established a goal of both reducing its overall emissions and improving its energy efficiency. The Business Roundtable did not set a quantified emissions reduction goal, owing to the diversity of its membership. Table 3 outlines the type and time frame of industry group goals.
[Table 3 metric units: methane emissions in million metric tons of carbon dioxide equivalent per year; million metric tons of carbon equivalent; transportation-related greenhouse gas emissions intensity, adjusted for traffic levels, in ton-miles; and PFC emissions in million metric tons of carbon equivalent. One group set its goal for 2010 and did not define a baseline year because of the nature of its goal.] As shown in table 3, the majority of the groups' goals were based on time frames that began shortly before the program's initiation in 2003. Specifically, nine groups used 2000 or 2002 as a base year. For example, the National Lime Association stated its intention to reduce emissions intensity by 8 percent between 2002 and 2012. However, four goals had a base year of 1995 or earlier. For example, the Portland Cement Association pledged to reduce its emissions intensity by 10 percent between 1990 and 2020. DOE officials told us that, even though some participants are using 1990 or another pre-2003 year as a base year, DOE will count only reductions occurring between 2002 and 2012 as part of the program's contribution toward the President's 18 percent emissions intensity reduction goal. In addition to setting emissions-related goals, some groups also set other kinds of goals. For example, the American Petroleum Institute committed to 100 percent member participation in EPA's voluntary Natural Gas STAR program (which helps U.S. natural gas companies adopt technologies and practices to reduce emissions of methane) and DOE's Combined Heat and Power Program (which works to eliminate barriers to the adoption of combined heat and power technology systems). Similarly, the Business Roundtable established a goal of 100 percent member participation in voluntary actions to reduce, avoid, offset, and sequester greenhouse gas emissions.
Although all Climate VISION participants established goals, a majority of the groups qualified their participation by stating that their ability to meet their goals would depend on some reciprocal government action. This includes 9 of the 14 groups with a quantitative goal as well as 5 of the 7 electric power groups. For example, the American Chemistry Council stated that "it will be difficult, if not impossible, for the chemical industry to do its share to reach the President's goal of reducing emissions intensity" without an aggressive government role in removing barriers to progress and providing incentives, such as tax code incentives. Similarly, the American Petroleum Institute stated that "future progress will be particularly difficult because of the increased energy and capital requirements at refineries due to significant tightening of gasoline and diesel fuel specifications in the coming decade." The group said it would look to the administration "to aggressively work to eliminate any potential regulatory barriers to progress in these areas." Likewise, the Association of American Railroads stated that the industry's efforts will depend upon DOE's continued funding of a government/rail industry cooperative venture to improve railroad fuel efficiency. Appendix III lists the reciprocal federal actions outlined in participants' statements. EPA and DOE both estimated the share of U.S. greenhouse gas emissions attributable to their participants. Both agencies are also working to estimate the effect of their programs on reducing emissions, and they expect the estimates to be completed in 2006. Preparing such estimates will be challenging because there is considerable overlap between these two programs and other voluntary programs. EPA estimated in 2005 that participating firms accounted for at least 8 percent of U.S. emissions on average for the years 2000 through 2003.
EPA based this estimate on emissions data from the first 50 program participants and believes the estimate is conservative, in part, because (1) it does not reflect data from the other 24 participating firms and (2) it does not include all types of emissions from each firm. For example, the estimate does not include indirect emissions (such as emissions from the use of purchased electricity or steam) or what EPA refers to as "optional" emissions, such as employee commuting and employee business travel. Because the electric utility sector accounts for about one-third of U.S. greenhouse gas emissions, we used an EPA database to determine the share of greenhouse gas emissions produced by Climate Leaders firms in that sector. As shown in table 4, we found that participating firms accounted for nearly 18 percent of carbon dioxide emissions from U.S. electricity generation (i.e., power plants only) in 2000 (latest available data), or about 6 percent of total U.S. emissions. EPA program managers said they have set a participation goal of 200 firms by 2012, and EPA is almost on track to meet this goal. However, a program manager told us that EPA has not tried to estimate the share of U.S. emissions that the 200 firms might account for because it is difficult to predict with any accuracy the size and types of firms that may join the program in the future and the firms' emissions reduction goals. Climate Leaders program staff, with assistance from contractors, recruit new participants through various means. For example, they attend industry sector meetings and corporate environmental meetings as well as meetings of participants in other EPA programs, such as Energy STAR. In addition, EPA publishes public service announcements in trade and industry journals. According to DOE, the thousands of individual companies that are members of the participating trade groups (not including Business Roundtable members) contribute over 40 percent of total U.S. greenhouse gas emissions.
DOE officials told us they believe this estimate, based largely on EIA and EPA data, is conservative, because the utility sector alone accounts for one-third of U.S. greenhouse gas emissions. (We did not independently review EIA's or EPA's data or the estimate based on these data.) DOE officials told us that they regularly seek to recruit new members and expect at least one more trade group to join the program, but they do not have a specific goal for the number of new participants expected to join. DOE also does not have a goal for the share of U.S. emissions contributed by future participants. EPA and DOE are working, as part of an interagency program, to estimate their programs' effect on reducing U.S. greenhouse gas emissions. Agency officials said that the estimates would be completed in 2006, in fulfillment of a U.S. commitment under the 1992 Framework Convention on Climate Change. (Under the Convention, the United States committed to report periodically on policies and measures undertaken to reduce greenhouse gas emissions.) In 2005, EPA estimated that participating firms' actions were reducing U.S. emissions by 8 MMTCE a year. This amount is equivalent to the annual emissions of 5 million automobiles and represents less than one-half of 1 percent of U.S. emissions in 2003 (the latest year for which data are available). EPA derived this estimate by adding up the average annual expected emissions reductions for the first 35 firms that had set goals. (Three other firms set goals later.) However, EPA officials cautioned that this figure does not represent an official estimate of emissions reductions attributable to the program because many Climate Leaders firms participate in other voluntary programs to which their emissions reductions may be credited.
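As a rough cross-check, the sector-share figures reported with table 4 multiply out as expected. The sketch below uses the report's approximate shares, not precise agency data.

```python
# Approximate shares from the report: the electric utility sector produces
# about one-third of total U.S. greenhouse gas emissions, and Climate Leaders
# firms produced nearly 18 percent of that sector's CO2 emissions in 2000.
utility_share_of_us_total = 1 / 3
participants_share_of_sector = 0.18

# Multiplying the two shares gives the firms' share of total U.S. emissions,
# about 6 percent.
participants_share_of_us_total = utility_share_of_us_total * participants_share_of_sector
print(round(participants_share_of_us_total * 100))
```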
A DOE official said that, to determine the emissions reductions attributable to the Climate VISION program, DOE will compare participating trade groups' reported emissions with comparable EIA projections for the time period. If the trade group comprises an entire industry, DOE will use the EIA projection for the entire industry; if the trade group comprises less than the entire industry, DOE will prorate the industry total based on the trade group's share of the industry. Estimating the effect of the two programs, as opposed to other voluntary programs and other factors, will be challenging for two reasons. First, because the firms and trade groups participating in these two programs may also participate in other voluntary programs, it will be difficult to determine the two programs' effect on reducing emissions, as opposed to other programs' effects on reducing emissions. Unless EPA and DOE find an effective way to disaggregate the emissions reductions attributable to each program, there is the possibility that total emission reductions from voluntary federal programs will be overstated because the same emissions reductions reported by organizations participating in Climate Leaders, Climate VISION, and other programs will be counted by more than one program. EPA officials told us that they recognize the challenge of attributing the effects of the various voluntary programs and stated that they are trying to avoid double counting of the programs' results. Second, the reductions in a participant's emissions that are due to a program are the difference between its actual emissions generated during a period of time and the amount of emissions that it would have generated for that period if it were not participating in the program. Although a participant can estimate its future emissions based on its estimate of future conditions (e.g., energy prices and other factors), all of these conditions may change during the time period.
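The prorating approach DOE described can be sketched as follows; the function and figures below are hypothetical and are not DOE's actual tools or data.

```python
# A minimal sketch, assuming the prorating method DOE described: credit a
# trade group with the difference between its prorated share of an EIA-style
# industry projection and its reported emissions. All names and figures here
# are hypothetical.

def attributable_reduction(industry_projection, group_share, reported_emissions):
    """Reductions credited to a trade group for one reporting period.

    industry_projection -- projected emissions for the whole industry
    group_share         -- the group's share of the industry (1.0 if the group
                           comprises the entire industry)
    reported_emissions  -- emissions the group actually reported
    """
    return industry_projection * group_share - reported_emissions

# A group covering 60 percent of an industry projected at 50 units that
# reports 27 units would be credited with about 3 units of reductions.
print(attributable_reduction(50.0, 0.6, 27.0))
```

Note that the credited amount depends directly on the projection: if the projection falls, for example because higher energy prices curb industry-wide emissions, the credited reductions shrink even if reported emissions are unchanged.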
Any such change would need to be assessed to determine how it might have affected the participant's emissions. There are three types of overlap involving the firms and trade groups participating in Climate Leaders and Climate VISION. First, as of November 2005, most Climate Leaders firms also participate in other voluntary EPA programs. Specifically, 60 of the 74 firms took part in one or more other programs, while the other 14 firms did not take part in any other programs, as shown in figure 2. Of the 60 firms, 36 took part in one to three other voluntary climate programs. For example, Calpine participated in three programs, including the Combined Heat and Power Partnership and Natural Gas STAR. Another 18 firms participate in four to six other programs. For example, Cinergy Corporation participated in EPA's Coalbed Methane Outreach Program, Combined Heat and Power Partnership, and Natural Gas STAR, among others. Additionally, six firms participate in seven or more programs. IBM, for example, participates in 11 other programs, including Energy STAR and the PFC Emissions Reduction Partnership for the Semiconductor Industry. Second, some firms participating in Climate Leaders are members of trade groups participating in Climate VISION. We identified such firms in the automobile manufacturing, cement, electric power, and paper industries. For example, General Motors, a Climate Leaders participant, is a member of the Alliance of Automobile Manufacturers, a Climate VISION participant. Finally, three of the Climate VISION trade groups also participate in EPA voluntary programs. Specifically, the Aluminum, Magnesium, and Semiconductor Associations also participate in industry-focused EPA programs. Further, the Aluminum and Semiconductor Associations previously developed their goals in conjunction with other EPA voluntary programs.
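The double counting that such overlap can produce is easy to illustrate. In the sketch below, which uses hypothetical firms and figures, summing each program's claimed reductions counts an overlapping firm twice, while aggregating once per firm does not.

```python
# Hypothetical firms and figures, for illustration only.
program_reports = {
    "Climate Leaders": {"FirmA": 5.0, "FirmB": 2.0},
    "Climate VISION": {"FirmA": 5.0, "FirmC": 1.0},  # FirmA appears in both
}

# Naive total: sum every program's claims, so FirmA's 5.0 is counted twice.
naive_total = sum(sum(firms.values()) for firms in program_reports.values())

# Deduplicated total: count each firm's reduction once across all programs.
per_firm = {}
for firms in program_reports.values():
    for firm, reduction in firms.items():
        per_firm[firm] = max(per_firm.get(firm, 0.0), reduction)
deduplicated_total = sum(per_firm.values())

print(naive_total)         # 13.0 -- overstated
print(deduplicated_total)  # 8.0
```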
The fact that there is overlap among the organizations participating in both Climate Leaders and Climate VISION, and among participants in these programs and other federal voluntary programs, creates the possibility that their emissions reductions will be counted more than once. For example, the emissions reductions claimed by firms participating in Climate Leaders who are also members of trade groups participating in Climate VISION may be counted twice: the individual firm's achievement may be credited under the Climate Leaders program, while the same achievement may be counted toward the trade group achieving its goal under Climate VISION. Further, for those trade groups that participate in Climate VISION and other EPA voluntary programs, it is possible that the same actions and the same emissions reductions will be counted by both programs. If participants' emissions reductions are counted by multiple programs, it is possible that any estimate of the overall impact of voluntary federal climate change programs on greenhouse gas emissions will be overstated. In addition, it will be challenging to accurately estimate the programs' effects because it is difficult to determine the level of emissions for a firm or trade group in the absence of these programs and other factors. For example, increases in energy prices can be expected to reduce energy consumption, which is significant because carbon dioxide emissions from energy use account for more than 80 percent of U.S. emissions. According to EIA's 2002 estimate, which was reflected in the President's February 2002 plan, U.S. emissions intensity was projected to improve 14 percent by 2012. However, according to EIA's 2006 estimate, largely because of an increase in energy prices, emissions intensity is now projected to improve 17 percent over the same period. If participants had anticipated such an improvement, they might have projected lower emissions over time.
This means that the difference between their reported emissions and their projected emissions would be smaller, which would decrease the emissions reductions attributable to participation in a voluntary program. The administration has chosen to pursue voluntary rather than mandatory activities to reduce greenhouse gas emissions. Given the potential gravity of the climate change problem, programs such as Climate Leaders and Climate VISION will need to be especially robust and involve a substantial portion of the economy if they are going to achieve the desired results. To date, according to EPA and DOE estimates, these two voluntary programs involve companies and industries representing less than one-half of total U.S. emissions, which immediately limits their potential impact. This makes it all the more important that the voluntary programs maximize the extent to which their potential is achieved. To this end, we found that opportunities remain to improve the management of both programs. First, while many participants appear to have made considerable progress in completing program steps in a timely manner, some participants in both programs appear not to be progressing at the rate expected by the sponsoring agencies. For example, although EPA expects that firms will generally take about 2 years to establish their emissions reduction goals, of the 51 firms that joined in 2002 and 2003, the first 2 years of the program, 18 firms had not done so as of November 2005. Moreover, while 12 of these 18 firms are currently negotiating their goals with EPA, 6 others had not begun negotiations because their inventories had not been finalized. Similarly, although DOE expects that groups will generally complete their work plans within about a year of joining the program, of the 13 groups that joined during 2003, the program's first year, 2 had not completed their plans as of November 2005.
EPA is developing a system for tracking firms' progress in completing key steps under Climate Leaders, but DOE does not have a system for tracking trade groups' progress under Climate VISION. We believe that, without a system to track how long participants take to complete key program steps, DOE cannot ensure that the program's goals are being accomplished. Moreover, neither agency has a written policy on what action to take when a firm is not making sufficient progress in setting goals and completing other key program steps. We believe that, by establishing written policies regarding consequences for not completing these steps on schedule, the agencies could more easily ensure participants' active involvement in the programs, thereby increasing the opportunities for contributing to the President's emissions intensity reduction goal. Both agencies are working this year to estimate the emissions reductions attributable to their programs. No matter how many firms and trade groups have joined the programs and how well they are meeting program expectations, to demonstrate the value of voluntary programs—as opposed to mandatory reductions—the agencies will need robust estimates of the programs' effect on reducing emissions. However, as we noted, making this estimate will be challenging for two reasons. First, the overlaps between organizations participating in these two programs and other voluntary programs make it difficult to attribute specific emissions reductions to one program. EPA and DOE will need to find a way to determine the emissions reductions attributable to each program so that the same emissions reductions reported by organizations participating in Climate Leaders, Climate VISION, and other voluntary programs are not counted by more than one program. Otherwise, estimates of total emission reductions from voluntary federal programs could be overstated.
Second, it will be difficult to determine the emissions reductions stemming from participants' involvement in the program, as opposed to higher energy prices or other factors, because it is difficult to determine what participants' emissions would be in the absence of these programs. It will therefore be difficult to evaluate the merits of these voluntary programs. Nevertheless, it will be important for the agencies to overcome these challenges in determining their programs' emission reduction contributions. To ensure that the Congress and the public have information with which to evaluate the effectiveness of these voluntary programs and to increase the opportunities for contributing to the President's emissions intensity reduction goal, we are recommending that DOE develop a system for tracking participants' progress in completing key steps associated with the program. We are also recommending that both EPA and DOE develop written policies establishing the consequences for not completing program steps on schedule. We provided a draft of this report to EPA and DOE for their review and comment. EPA did not comment on our recommendation, but rather provided a summary of the program's accomplishments, noting that 85 firms now participate in Climate Leaders and that 5 firms had met their emissions reduction goals (see app. IV). DOE stated that, overall, the draft report provided a useful overview of the Climate VISION program; it agreed with our recommendation regarding a tracking system and said it would consider our recommendation regarding establishing a written policy (see app. V). However, DOE stated that the Climate VISION Web page contains a wealth of information on the program, which may be sufficient to ensure the active involvement of participating groups.
Because DOE’s Web site does not contain information regarding the expected time frames for completing key program steps or the consequences for groups not meeting the agency’s expectations, we continue to believe that DOE should establish a written policy regarding what actions it will take when a trade group is not making sufficient progress in completing key steps. Although DOE agreed with our statement that Climate VISION participants account for at least 40 percent of total U.S. greenhouse gas emissions, it noted that the program covers about four-fifths of total U.S. industrial- and power-related greenhouse gas emissions, which makes the potential impact of the program substantial. Also, although DOE agreed that higher energy prices may lead to lower emissions overall, it noted that, in the power sector, higher energy prices may lead to greater emissions. This can occur if electric power producers use less oil or natural gas (which produce fewer emissions per unit of electricity) and more coal (which produces more emissions, relative to oil or natural gas). Both EPA and DOE provided technical comments, which we have incorporated in this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy; the Administrator, EPA; and other interested officials. The report will also be available on GAO's home page at http://www.gao.gov. If you have questions concerning this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. In addition to Climate Leaders and Climate VISION, the U.S. 
government supports numerous other voluntary programs that encourage participants to reduce their greenhouse gas emissions, as shown in the following table, arranged alphabetically by sector. For the purposes of this report, we define voluntary greenhouse gas programs as those programs that do not involve regulation, government-sponsored research and development, tax incentives, financial assistance, or government/industry cost-sharing components; were created for the specific purpose of reducing greenhouse gases or were created to reduce other pollutants but had the additional benefit of reducing greenhouse gases; and involve only dissemination of information to nonfederal parties. Increase demand for, and bring new, highly efficient technologies to market for buyers, while assisting manufacturers, energy service companies, and utilities. The focus is on highly energy-efficient products for commercial and residential building applications. Promote strategies for strong energy management by engaging top company leadership, promoting standardized measurement tools to assess performance of buildings, and providing information on best practices in energy efficiency. Provide information to consumers and homeowners so that they can make sound investments when buying a new home or when undertaking a home improvement project. Provide guidance for homeowners on designing efficiency into kitchens, additions, and whole-home improvement projects and work with major retailers and other organizations to help educate the public. Promote energy efficiency and renewable energy use in federal buildings, facilities, and operations. Record the results of voluntary measures undertaken by companies and other organizations to reduce, avoid, or sequester greenhouse gas emissions. Offer industry tools to improve plant energy efficiency, enhance environmental performance, and increase productivity.
Enable industrial companies to evaluate and cost-effectively reduce their energy use through established energy performance benchmarks, strategies for improving energy performance, technical assistance, and recognition for accomplishing reductions in energy. Provide no-cost energy assessments to small- and medium-sized manufacturers to help identify opportunities to improve productivity, reduce waste, and save energy. Advocate employer-provided commuter benefits and highlight the efforts of employers to help get employees to work safely, on time, and free of commuter-related stress. Advance the Nation's economic, environmental, and energy security by supporting local decisions to adopt practices that contribute to the reduction of petroleum consumption. Reduce emissions from the freight sector by creating partnerships in which partners commit to measure and improve the efficiency of their freight operations using EPA-developed tools, reducing unnecessary engine idling, and increasing the efficiency and use of rail and intermodal operations. Reduce emissions from livestock waste management operations by promoting the use of biogas recovery systems. Reduce emissions by promoting the profitable recovery and use of coal mine methane by coal mining and other types of companies. Aim to limit emissions of HFCs, PFCs, and SF6 in several industrial applications: semiconductor production, refrigeration, electric power distribution, magnesium production, and mobile air conditioning. Reduce emissions from U.S. natural gas systems through the widespread adoption of industry best management practices. Promote the use of landfill methane gas as a renewable, green energy source. The program's focus is on smaller landfills not regulated by EPA's New Source Performance Standards and Emissions Guidelines. Encourage recycling and waste reduction for the purpose of reducing greenhouse gas emissions.
Provide technical assistance for waste prevention, recycling, and buying recycled products. Encourage states to develop and implement a comprehensive strategy for using new and existing energy policies and programs to promote energy efficiency, renewable energy, and other clean energy sources. Enable state and local decision makers to incorporate climate change planning into their priority planning to help them maintain and improve their economic and environmental assets. This initiative cuts across all sectors and greenhouse gas emissions sources. However, for the sake of simplicity, we list it here under commercial and residential energy. This initiative consists of six separate programs: the Voluntary Aluminum Industrial Partnership, the HFC-23 Emission Reduction Program, the PFC/Climate Partnership in the Semiconductor Industry, the SF6 Emissions Reduction Partnership for Electric Power Systems, the SF6 Emission Reduction Partnership for the Magnesium Industry, and the Mobile Air Conditioning Climate Protection Partnership. To determine the steps participants are expected to complete under each program and the expected time frames for completion, we reviewed agency documents, where available, and interviewed agency officials within the Environmental Protection Agency's (EPA) Office of Air and Radiation and the Department of Energy's (DOE) Office of Policy and International Affairs. We also obtained energy and emissions intensity data from Energy Information Administration (EIA) staff. To ascertain the extent to which agency officials assist participants in setting emissions reduction goals and the types of goals established, we reviewed agency documents and interviewed agency officials. We also reviewed commitment letters sent to DOE by the various trade groups, since each group prepared individualized letters, but we did not review the paperwork submitted by Climate Leaders participants to EPA, since each firm signed a standardized membership agreement with EPA.
To determine the extent to which participants' reductions are reported in each program, we reviewed agency guidance on reporting and verification and interviewed agency officials. In addition, we reviewed the recommended reporting protocols for each program, including EPA's Design Principles, its emissions reporting guidance, and DOE's Draft Technical Guidelines for Voluntary Reporting of Greenhouse Gases Program. We also reviewed EPA's annual greenhouse gas inventory summary and goal tracking form, the Inventory Management Plan (IMP) desktop review form, the on-site IMP review facility selection form, and the IMP on-site review form. To determine how EPA quantified the share of U.S. greenhouse gas emissions covered by Climate Leaders and the total reductions expected from the program, we interviewed EPA staff. To assess the size of the electricity generating sector participating in Climate Leaders, we used EPA's e-GRID database, which contains information on the environmental characteristics of almost all electric power generated in the United States. To ascertain how DOE quantified its estimate of Climate VISION coverage, we reviewed DOE documents and interviewed DOE staff. To determine the agencies' plans for future coverage and impact, we reviewed performance plans and an annual report (for EPA) and interviewed agency officials for both agencies. To assess the reliability of the EPA, DOE, and other data, we talked with agency officials about data quality control procedures and reviewed relevant documentation. We determined the data were sufficiently reliable for the purposes of this report. To ascertain how many firms participating in Climate Leaders also participate in other EPA voluntary climate programs, we cross-referenced a Climate Leaders roster against EPA lists of membership in other EPA voluntary programs.
Similarly, we reviewed membership in DOE’s Climate VISION program and cross-referenced selected individual trade group members with the list of Climate Leaders members. Finally, to create a list of other government-sponsored, voluntary greenhouse gas emissions reduction programs, we requested information from EPA on all current U.S. policies and measures designed to reduce greenhouse gas emissions. We narrowed the list to those programs that were voluntary. We defined voluntary programs to include only those programs in which private sector parties agree, of their own free will, to reduce greenhouse gas emissions. Therefore, we excluded regulatory programs. We also excluded programs consisting primarily of research and development, tax incentive, or financial assistance, and government/industry cost share arrangements. However, we determined that voluntary programs can include programs in which the government provides information to private sector parties, individuals, or state and local governments. We also included programs that were created both for the specific purpose of reducing greenhouse gas emissions and that were created to reduce other pollutants but have as a side benefit the reduction of greenhouse gases. We included programs that are supported by the Departments of Agriculture, Energy, and Transportation, as well as EPA. We conducted our review from June 2004 through March 2006 in accordance with generally accepted government auditing standards. “Clearly, achievement of this commitment and the national goal will depend on a number of external factors, including economic stability, coordinated regulatory policies that avoid mandates and other market barriers, weather variations which skew energy use, and support from the utilities’ energy mix, including emission factors reductions.” Aluminum Association No qualifying statement noted. “ . . . 
government can help by removing barriers that impede efficiency upgrades and by providing incentives for companies to implement state-of-the-art technology. Without an aggressive government role in removing barriers to progress and providing incentives, it will be difficult, if not impossible, for the business of chemistry to do its share to reach the president’s goal of reducing national greenhouse gas intensity by 18 percent during the 2002-2012 timeframe.” “As an organization, we believe that our success will depend in part on the Administration’s efforts to rationalize and manage the activities of all government agencies, especially with respect to the promulgation of regulatory requirements that may result in increases in greenhouse gas emissions. Our commitment also will naturally depend on the parameters of any implementation guidelines that may be developed. Specifically, we strongly encourage the Administration to address regulatory requirements where the negative climate impacts outweigh any environmental benefits.” “We propose to use the Roadmap goals as a basis for addressing the President’s Business Challenge. The Roadmap goals, however, are expressed in terms of technical feasibility and are qualified by the fact that the cost of acquiring and implementing any new technology must be economically justifiable for it to achieve widespread adoption in the industry.” “Future progress will be particularly difficult because of the increased energy and capital requirements at refineries due to significant tightening of gasoline and diesel fuel specifications in the coming decade. As part of this program, API will look to the Administration to aggressively work to eliminate any potential regulatory barriers to progress in these areas.” “Most recently we have embarked on a cooperative venture with DOE’s Office of Energy Efficiency and Renewable Energy to explore methods of improving railroad fuel efficiency. . . 
The industry’s efforts, of course, will also depend upon DOE’s funding the above-described government/rail industry cooperative venture to improve railroad fuel efficiency as DOE had previously indicated it was prepared to do. . . We concur with DOE that industry expertise and in-kind contributions—coupled with federal government funding and the resources of DOE’s national laboratories—are necessary for an effective program to be planned and executed.” No qualifying statement noted. “We encourage the Administration to do all that it can to support the domestic soda ash, borates, and sodium silicates industries, not only because they contribute significantly to the U.S. economy, but also because they are more protective of the environment than their competitors outside the U.S. Shifts in production to the U.S. from offshore producers of soda ashes, borates, and sodium silicates would decrease the world’s production of greenhouse gases.” No qualifying statement noted. “There is much that the government can do to address regulatory barriers that inhibit progress towards these goals, as well as to support voluntary efforts by the lime industry . . . In particular, we encourage the Administration to rationalize and manage the implementation of regulations that impede the permitting of projects to improve the efficiency and environmental performance of lime manufacturing operations.” (Attached is a list of specific activities that will enhance the ability of the Lime Association to meet its Climate VISION goals. These activities include regulatory streamlining, government assistance in obtaining permits to use alternative fuels; tax code improvements in two areas; funding assistance for small businesses; assistance in persuading some lime customers to accept changes in product characteristics resulting from GHG intensity reductions; and assurance that domestic companies do not lose market share to foreign industries). No qualifying statement noted. 
No qualifying statement noted. Some of the seven members of the Power Partners coalition included, in their individual commitment letters, expectations of the federal government. For example: The American Public Power Association and the Large Public Power Council joint letter states that, “Full realization hinges on achieving targeted reforms to the current Federal Energy Regulatory Commission (FERC) regulatory process.”. . . and “ Although estimates vary, opportunities exist to improve the generation efficiency of existing coal-fired capacity by 4 to 8 percent. . . Our ability to implement such energy efficiency projects will hinge on removal of regulatory barriers to such projects under the Clean Air Act.” The Edison Electric Institute (EEI) states that, “A combination of power sector and government efforts will be necessary, including . . . government laws, regulations, and policies favoring the full utilization or maintenance of nuclear and hydroelectric plant generating capacity; adequate supplies and delivery infrastructure for natural gas; economic incentives for renewables; and the full benefits of energy efficiency and DSM, as well as offset projects.” Attached to the letter is a list of specific government policies that would help EEI meet its goals. These policies include, among other things, hydroelectric licensing reform, nuclear power plant licensing extensions, reform of New Source Review regulations under the Clean Air Act, transmission siting authority for the federal government, and tax policies, such as accelerated depreciation and amortization of pollution control equipment and tax credits for renewable energy. 
The Electric Power Supply Association states that, “EPSA member companies are committed to utilizing this generation capacity to the fullest extent possible and will work diligently to develop and maximize electricity production for clean energy sources to levels that are necessary to achieving the greenhouse gas intensity goals outlined above. The ability of our members to realize these industry goals is tied to the advancement of policies for promoting competitive markets for electricity. Specifically, it depends on actions and policies to expand wholesale electric competition and rationalize regulations, such as Federal Energy Regulatory Commission’s standard electric market design and Regional Transmission Organization initiatives; advance market-based multi-emissions legislation; streamline current regulatory programs, and seek better disclosure and market transparency.” The Nuclear Energy Institute states that, “The nation’s ability to realize the promise of nuclear energy after 2012 will depend on actions and policies we undertake in the next one to two years, particularly new policy initiatives designed to stimulate investment in technologies that require large capital investments and long lead times.” As part of the SIA Memorandum of Understanding with EPA, EPA’s responsibilities include: (1) participating in and supporting conferences to share information on emission reduction technologies; (2) addressing regulatory barriers that may impede voluntary, worldwide emission reduction strategies; (3) recognizing SIA and the participating companies for their emission reduction commitment, technical leadership, and achievements over time. In addition to the contact named above, David Marwick, Assistant Director; John Delicath; Anne K. Johnson; Chester Joy; Micah McMillan; and Joseph D. Thompson were the major contributors to this report. Kisha Clark, Heather Holsinger, Karen Keegan, Jean McSween, Bill Roach, and Amy Webbink also made important contributions. 
To reduce greenhouse gas emissions linked to climate change, two voluntary programs encourage participants to set emissions reduction goals. The Climate Leaders Program, managed by the Environmental Protection Agency (EPA), focuses on firms. The Climate VISION (Voluntary Innovative Sector Initiatives: Opportunities Now) Program, managed by the Department of Energy (DOE) along with other agencies, focuses on trade groups. GAO examined (1) participants' progress in completing program steps, the agencies' procedures for tracking progress, and their policies for dealing with participants that are not progressing as expected; (2) the types of emissions reduction goals established by participants; and (3) the agencies' estimates of the share of U.S. greenhouse gas emissions that their programs account for and their estimates of the programs' impacts on U.S. emissions. EPA expects Climate Leaders firms to complete several program steps within general time frames, but firms' progress on completing those steps is mixed. For example, EPA asks firms to set an emissions reduction goal, generally within 2 years of joining. As of November 2005, 38 of the program's 74 participating firms had set a goal. Of the 36 firms that had not set a goal, 13 joined in 2002 and thus took longer than expected to set a goal. EPA is developing a system for tracking firms' progress in completing these steps, but it has no written policy on what to do about firms that are not progressing as expected. Trade groups generally established an emissions reduction goal before joining Climate VISION, and DOE generally expects them to develop a plan for measuring and reporting emissions within about 1 year of joining. As of November 2005, 11 of the 15 participating groups had such a plan, but 2 of the groups without a plan joined in 2003, the program's first year. 
DOE has no means of tracking trade groups' progress in completing the steps in their plans and no written policy on what to do about groups that are not progressing as expected. A tracking system would enable the agency to ascertain whether participants are meeting program expectations in a timely manner, thereby helping the program to achieve its goals. By establishing a written policy on the consequences of not progressing as expected, both agencies could better ensure that participants are actively engaged in the programs, thus helping to achieve the programs' goals. The types of emissions reduction goals established by Climate Leaders firms and Climate VISION groups vary in how reductions are measured and the time periods covered, among other things. For example, one Climate Leaders firm's goal is to reduce its domestic emissions by 5 percent over 10 years; another's is to reduce its worldwide emissions per dollar of revenue by 35 percent over 7 years. Similarly, one Climate VISION group's goal is to reduce emissions of one greenhouse gas by 10 percent, while another's is to reduce its emissions per unit of output by 12 percent. GAO noted that some Climate VISION groups said meeting their goals may be linked to reciprocal federal actions, such as tax incentives or regulatory relief. EPA officials estimated that the first 50 firms to join Climate Leaders account for at least 8 percent of U.S. greenhouse gas emissions. DOE estimated that Climate VISION participants account for at least 40 percent of U.S. greenhouse gas emissions. EPA and DOE are working through an interagency process to quantify the emissions reductions attributable to their programs; the process is expected to be completed in 2006. However, determining the reductions attributable to each program will be challenging because of the overlap between these programs and other voluntary programs, as well as other factors.
CBP is the largest uniformed law enforcement agency in the United States, with approximately 21,400 BPAs patrolling between the nation’s ports of entry and more than 20,000 CBPOs stationed at air, land, and seaports nationwide at the end of fiscal year 2011. On the U.S. southwest border, there are about 5,500 CBPOs and 18,000 BPAs as of the end of fiscal year 2011. CBPOs, based within OFO, are responsible for processing immigration documentation of passengers and pedestrians and inspecting vehicles and cargo at U.S. ports of entry. BPAs are based within the USBP and are responsible for enforcing immigration laws across the territory in between the ports of entry and at checkpoints located inside the U.S. border. Together, CBPOs and BPAs are responsible for detecting and preventing the illegal entry of persons and contraband, including terrorists and weapons of mass destruction, across the border. U.S. citizens interested in becoming CBPOs or BPAs must successfully complete all steps of the CBP hiring process, which includes an online application, a cognitive exam, fingerprint collection, financial disclosure, a structured interview, fitness tests, medical examinations, a polygraph examination, a background investigation, and a drug test. CBP IA’s PSD manages the personnel security program by initiating and adjudicating preemployment investigations for CBP applicants, which aim to ensure that the candidates are reliable, trustworthy, and loyal to the United States, and therefore suitable for employment. In addition, CBP IA’s Credibility Assessment Division (CAD) is responsible for administering the polygraph examinations, interviewing applicants, and collecting any admissions that an applicant may reveal, including past criminal behavior or misconduct. 
Human Resource Management is responsible for making the hiring decisions based on the final suitability determination from CBP IA (this includes PSD’s overall assessment of the polygraph examination and background investigation), as well as the applicant’s successful completion of the other steps in the hiring process. The number of CBP employees increased from 43,545 in fiscal year 2006 to 60,591 as of August 2012. During this time period, both OFO and USBP experienced a hiring surge and received increased appropriations to fund additional hiring of CBPOs and BPAs. The majority of the newly hired CBPOs and BPAs were assigned to the southwest border. In particular, during this time period, their total numbers along the southwest border increased from 15,792 to 24,057. As of fiscal year 2011, 57 percent of the CBPOs and BPAs were stationed along the southwest border. Figure 1 provides additional details. Allegations against CBP employees for misconduct, corruption, or other issues can be reported through various mechanisms. CBP IA, in partnership with the Office of Professional Responsibility—an office within DHS’s U.S. Immigration and Customs Enforcement—accepts allegations through the Joint Intake Center (JIC). JIC is CBP’s central clearinghouse for receiving, processing, and tracking all allegations of misconduct involving personnel and contractors employed by CBP. Staffed jointly by CBP IA and the Office of Professional Responsibility, JIC is responsible for receiving, documenting, and routing misconduct allegations to the appropriate investigative entity for review to determine whether the allegation can be substantiated. CBP employees or the general public may report allegations to JIC’s hotline by e-mail or telephone, or to local CBP IA field offices, the DHS Office of Inspector General, or other law enforcement agencies. Anonymous allegations are also received, documented, and subjected to further inquiry. 
According to CBP’s data, incidents of arrests of CBP employees from fiscal years 2005 through 2012 represent less than 1 percent of the entire CBP workforce per fiscal year. During this time period, 144 current or former CBP employees were arrested or indicted for corruption—the majority of whom were stationed along the southwest border. In addition, there were 2,170 reported incidents of arrests for misconduct. Allegations against CBPOs and BPAs as a percentage of total on-board personnel remained relatively constant from fiscal years 2006 through 2011 and ranged from serious offenses such as facilitating drug smuggling across the border to administrative delinquencies such as losing an official badge. The majority of allegations made against OFO and USBP employees during this time period were against officers and agents stationed on the southwest U.S. border. CBP data indicate that from fiscal year 2005 through fiscal year 2012, the majority of arrests were related to alleged misconduct activities. A total of 144 current or former CBP employees were arrested or indicted for corruption. In addition, there were 2,170 reported incidents of arrests for misconduct. In both cases, each represents less than 1 percent of the entire CBP workforce per fiscal year. Specifically, in fiscal year 2005, out of 42,409 CBP employees, 27 were arrested or indicted for corruption. In addition, in that year, there were 190 reported incidents of arrests for misconduct. As of August 2012, when CBP’s workforce increased to 60,591, 11 CBP employees were arrested or indicted for corruption, and there were 336 reported incidents of arrests for misconduct. CBP IA defines delinquent activity as either corruption or misconduct. Corruption involves the misuse or abuse of the employee’s position, whereas misconduct may not necessarily involve delinquent behavior that is related to the execution of official duties. 
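The "less than 1 percent of the workforce" characterization above can be checked with quick arithmetic on the figures cited in the text. The sketch below is illustrative only, not CBP's own methodology; the variable names are ours:

```python
# Figures cited in the text, per fiscal year.
# FY2005: 27 corruption arrests/indictments and 190 misconduct
# arrest incidents, out of 42,409 CBP employees.
fy2005 = {"workforce": 42409, "corruption": 27, "misconduct": 190}

# As of August 2012: 11 corruption arrests/indictments and 336
# misconduct arrest incidents, out of 60,591 CBP employees.
fy2012 = {"workforce": 60591, "corruption": 11, "misconduct": 336}

def arrest_share(year):
    """Combined arrest incidents as a percentage of the workforce."""
    return 100 * (year["corruption"] + year["misconduct"]) / year["workforce"]

# Even combining corruption and misconduct arrests, each year
# stays well under 1 percent of the workforce.
for year in (fy2005, fy2012):
    assert arrest_share(year) < 1
```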
CBP further categorizes the delinquent behavior into the following categories: (1) non-mission-compromising misconduct, (2) mission-related misconduct, (3) corruption, and (4) mission-compromising corruption. The first category is the only one that is unrelated to the execution of the CBP employee’s official duties or authority, and the majority of the incidents of arrests for misconduct (2,153 out of 2,170) since fiscal year 2005 fall in this category. Examples include domestic violence and driving under the influence while off duty. Table 1 provides CBP IA’s definitions of the two types of delinquent activity and examples of each category. About 65 percent of the 144 employees arrested or indicted for corruption were stationed along the southwest border. Our review of documentation on these cases indicates that 103 of the 144 cases were for mission-compromising corruption activities, which are the most severe offenses, such as drug or alien smuggling, bribery, and allowing illegal cargo into the United States. Forty-one of the 144 CBP employees arrested or indicted were charged with other corruption-related activities. According to CBP IA, this category is less severe than mission-compromising corruption and includes offenses such as the theft of government property and querying personal associates in a government database for purposes other than official business. Table 2 provides a breakdown of these arrests by fiscal year. Table 3 outlines the number of incidents of arrests of CBP employees for misconduct for fiscal years 2005 through 2012. Although the total number of corruption convictions (125) is less than 1 percent when compared with CBP’s workforce population by fiscal year, CBP officials stated that they are concerned about the negative impact employee corruption cases have on agencywide integrity. 
For example, the Acting Commissioner of CBP testified that no act of corruption within the agency can or will be tolerated and that acts of corruption compromise CBP’s ability to achieve its mission to secure America’s borders against all threats while facilitating and expediting legal travel and trade. In particular, there have been a number of cases in which individuals, known as infiltrators, pursued employment at CBP solely to engage in mission-compromising activity. For example, in 2007, a CBPO in El Paso, Texas, was arrested at her duty station at the Paso Del Norte Bridge for conspiracy to import marijuana into the United States from June 2003 to July 2007, and was later convicted and sentenced to 20 years in prison. OFO reported that she may have sought employment with CBP to facilitate drug smuggling. CBP officials view this case as an example of the potential impact of corruption—if the officer had succeeded in facilitating the importation of 5,000 pounds of marijuana per month, this would amount to a total of 240,000 pounds over 4 years with a retail value of $288 million. In another case, a former BPA previously stationed in Comstock, Texas, was arrested in 2008 for conspiracy to possess, with intent to distribute, more than 1,000 kilograms of marijuana. The agent was convicted in 2009 and sentenced to 15 years in prison and ordered to pay a $10,000 fine. CBP is also concerned about employees who may not be infiltrators, but began engaging in corruption-related activities after joining the agency. For example, CBP IA officials stated that some employees may have experienced personal hardships after being hired, such as financial challenges, which made them vulnerable to accepting bribes to engage in corrupt activity. In addition, some employees arrested for corruption had no prior disciplinary actions at the time of their arrests. 
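CBP's $288 million estimate follows directly from the quantities cited in the El Paso case above. A quick arithmetic check (the implied retail price per pound is our inference, not a figure CBP reported):

```python
# Quantities from the El Paso case described in the text.
pounds_per_month = 5_000
months = 4 * 12                    # 4 years
retail_value_total = 288_000_000   # dollars, per CBP officials

# 5,000 pounds/month over 48 months = 240,000 pounds total.
total_pounds = pounds_per_month * months
assert total_pounds == 240_000

# Implied retail price per pound embedded in CBP's estimate
# (our inference): $288M / 240,000 lb = $1,200/lb.
price_per_pound = retail_value_total / total_pounds
assert price_per_pound == 1_200.0
```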
According to our analysis of CBP data, from fiscal years 2006 through 2011, a total of 32,290 allegations were made against CBP employees; 90 percent (29,204) were made against CBPOs and BPAs. CBP IA categorizes allegations of misconduct or corruption by varying levels of severity. For example, allegations may range from serious offenses such as facilitating drug smuggling across the border to administrative delinquencies such as losing a badge. CBP allegations of corruption or misconduct are sorted into differing classes depending on the severity of the allegation and whether there is potential for federal prosecution. As table 4 indicates, class 1 allegations comprise the more severe allegations that could lead to federal prosecution, such as drug smuggling or bribery, with classes 2, 3, and 4 representing decreasing levels of severity. Information for management may include notifications such as reporting a lost badge or an arrest of an employee’s family member. CBP management will take this information into consideration but may determine that the action does not warrant referring the case for further disciplinary action. Table 5 depicts the number of allegations against CBPOs and BPAs from fiscal years 2006 through 2011. Allegations made against OFO and USBP employees as a percentage of the total OFO and USBP workforce remained relatively constant, at 12 to 14 percent, over fiscal years 2006 through 2011. Similar to the arrest data, of the 29,204 total allegations made against OFO and USBP employees from fiscal year 2006 through fiscal year 2011, the majority were made against officers and agents stationed on the southwest U.S. border. Specifically, there were approximately 19,905 total allegations against CBPOs and BPAs stationed on the southwest border—about 68 percent of total allegations. Approximately 57 percent of all CBPOs and BPAs are stationed along the southwest border. 
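The percentages quoted above are consistent with the underlying counts in the text; a short sketch verifying the rounding (variable names are ours):

```python
# Allegation counts from the text, fiscal years 2006-2011.
total_allegations = 32_290        # against all CBP employees
cbpo_bpa_allegations = 29_204     # against CBPOs and BPAs
southwest_allegations = 19_905    # against CBPOs/BPAs on the southwest border

# 29,204 of 32,290 allegations rounds to the 90 percent cited.
assert round(100 * cbpo_bpa_allegations / total_allegations) == 90

# 19,905 of 29,204 rounds to the 68 percent cited -- notably
# higher than the roughly 57 percent of CBPOs/BPAs stationed there.
assert round(100 * southwest_allegations / cbpo_bpa_allegations) == 68
```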
By comparison, during this time period, there were 9,299 allegations made against officers and agents across the rest of CBP’s ports of entry and sectors. According to a senior CBP IA official who is responsible for tracking and maintaining CBP allegations data, it is possible that the southwest border region received more allegations, in part, because CBP assigned more employees to the region, many of whom were new, relatively less experienced agents from the hiring increases of fiscal years 2006 through 2011, or were employees on detail to the southwest border region. During this same period, the number of CBPOs and BPAs along the southwest border increased from 15,792 to 24,057. In addition, in each fiscal year from 2006 through 2011, more allegations were made against USBP employees than OFO employees along the southwest border—allegations against BPAs were about 32 percent higher, on average, than those against CBPOs. CBP employs integrity-related controls to mitigate the risk of corruption and misconduct for both applicants and incumbent officers and agents, such as polygraph examinations and random drug testing, respectively. However, CBP does not maintain or track data on which screening tools provided the information that contributed to applicants being deemed unsuitable for hire, making it difficult for CBP to assess the relative effectiveness of these screening tools. In addition, an assessment of the feasibility of expanding the polygraph program to incumbent officers and agents, and consistent implementation of its quality assurance review program for background investigations and periodic reinvestigations, could strengthen CBP’s integrity-related controls. OFO and USBP have also implemented controls to help detect and prevent corruption and misconduct; however, additional actions could help improve the effectiveness of OFO’s integrity officers. 
CBP has two key controls to screen applicants for CBPO and BPA positions during the hiring process—background investigations and polygraph examinations. Background investigations involve, among other things, a personal interview; a 10-year background check; and an examination of an applicant’s criminal, credit, and financial history, according to Office of Personnel Management (OPM) regulations. Polygraph examinations consist of a preinterview, the examination, and a postexamination interview. The Anti-Border Corruption Act of 2010 requires that, as of January 2013, all CBPO and BPA applicants receive polygraph examinations before they are hired. CBP IA officials stated that the agency met the mandated polygraph requirement in October 2012—90 days before the deadline. PSD considers multiple factors, or a combination thereof, to determine whether an applicant is suitable for employment. PSD officials stated that suitability determinations are based on three adjudication phases: (1) after PSD verifies that each applicant’s forms are complete and conducts preliminary law enforcement database and credit checks, (2) after CAD reports the technical results of the polygraph examinations to PSD, and (3) after the completion of the background investigation. PSD is responsible for adjudicating the final polygraph examination results, as well as reviewing any other information that may be used in determining whether or not applicants are suitable for employment. If, after the final adjudication, there is no derogatory information affecting an applicant’s suitability, PSD forwards the final favorable adjudication decision to Human Resources Management, which completes the remainder of the required steps in the hiring process. Regarding polygraph examinations, CAD has maintained data on the number of polygraph examinations that it administers and the technical results of those examinations since January 2008. 
CAD officials stated that an applicant technically fails the polygraph examination by receiving a “significant response” on the test or using countermeasures to deceive the test, which is an indicator of deception and results in PSD making a determination that an applicant is unsuitable for hire. Alternatively, an applicant can technically pass the polygraph examination, but admit to past criminal behavior (e.g., admitting to frequent and recent illegal narcotics usage) that would likely render the applicant unsuitable for CBP employment when PSD adjudicates a complete record of CAD’s polygraph examination and associated interviews. Table 6 provides our analysis of CAD’s data on the 11,149 polygraph examinations administered since 2008, and the technical results of those examinations. In addition to the technical examination results, CAD maintains documentation on admissions that applicants reveal during the polygraph examination process. Applicants have admitted to a range of criminal activity, from plans to gain employment with the agency in order to further illicit activities, such as drug smuggling, to excessive illegal drug use. For example, one applicant admitted that his brother-in-law, a known Mexican drug smuggler, asked him to use employment with CBP to facilitate cocaine smuggling. Another applicant admitted to using marijuana 9,000 times, including the night before the polygraph examination; cocaine 30 to 40 times; hallucinogenic mushrooms 15 times; and ecstasy about 50 times. CBP IA officials stated that admissions such as these highlight the importance of the polygraph examination to help identify these types of behaviors in applicants before they are hired for CBP employment. CBP IA officials stated that the polygraph examination is the key investigative tool in the agency’s integrity program because it can help identify whether applicants have misled background investigators regarding previous criminal histories or misconduct issues. 
PSD is responsible for maintaining data on its final suitability determinations—whether or not it determines that applicants are suitable for hire. However, CBP IA does not have a mechanism to track and maintain data on which of its screening tools (e.g., background information or polygraph examination) provided the information that PSD used to determine that applicants were not suitable for hire, making it difficult for CBP IA to assess the relative effectiveness of its various screening tools. For example, if 100 applicants technically pass a polygraph examination, but 60 of these applicants are ultimately found unsuitable for hire, CBP IA does not have data to indicate if the applicants were found unsuitable based on admissions during the polygraph examination, derogatory information collected by background investigators, a combination of this information, or on the basis of other screening tools. PSD officials stated that they do not have the data needed to assess the effectiveness of screening tools because of limitations in PSD’s information management system, the Integrated Security Management System (ISMS), which is not designed to collect data on the source of the information (e.g., background information, polygraph examination) and the results used to determine whether an applicant is deemed suitable for hire. CBP IA’s Assistant Commissioner and other senior staff stated that maintaining these data on an ongoing basis would be useful in managing CBP IA’s programs. Standards for Internal Control in the Federal Government states that program managers need operational data to determine whether they are meeting their goals for accountability for effective and efficient use of resources. Moreover, the standards state that pertinent information should be identified, captured, and distributed in a form and time frame that permits managers to perform their duties efficiently. 
The standards also require that all transactions be clearly documented in a manner that is complete and accurate in order to be useful for managers and others involved in evaluating operations (GAO/AIMD-00-21.3.1). Maintaining data on which screening tools provide information that contributes to PSD determining that an applicant is not suitable for hire could better position CBP IA to gauge the effectiveness of each tool and the extent to which the tools are meeting their intended goals for screening applicants for hire. CBP has two key controls for incumbent employees—random drug testing and periodic reinvestigations—to ensure the continued integrity of the CBPOs and BPAs. CBP is required to conduct random drug tests on an annual basis for at least 10 percent of the employees in designated positions, including CBPOs and BPAs, to help ensure employees who hold positions in the area of law enforcement or public trust refrain from the use of illegal drugs while on or off duty. According to CBP data for fiscal years 2009 through 2011, more than 99 percent of the 15,565 random drug tests conducted on CBP employees were negative. CBP officials stated that actions against those with positive results ranged from voluntary resignation to removal. In September 2012, Human Resource Management officials told us that DHS was in the process of reviewing drug-free workplace programs across the department and that CBP was coordinating with DHS’s drug-free workforce program. Changes under consideration for DHS’s program include eliminating the 2-hour advance notice that employees currently receive before they are required to provide a urinalysis sample, which human resource officials stated could help reduce the possibility of CBP employees potentially engaging in efforts to dilute the results of the tests. In addition, CBP policy states that all CBPOs and BPAs are subject to a reinvestigation every 5 years to ensure continued suitability for employment. 
CBP IA officials stated that the periodic reinvestigation is a key control for monitoring incumbent officers and agents, particularly for those employees who were hired in the past without a polygraph examination. CBP policy allows for reinvestigations to be initiated outside of the standard 5-year cycle. As of July 2012, however, CBP had not conducted any periodic reinvestigations outside of the normal cycle, according to CBP IA officials. CBP IA officials stated that they conducted few periodic reinvestigations during fiscal years 2006 to 2010 because resources were focused on meeting mandated hiring goals. Thus, CBP IA accumulated a backlog of 15,197 periodic reinvestigations as of 2010. To help address this backlog, the Anti-Border Corruption Act of 2010 required CBP to initiate all outstanding periodic reinvestigations within 180 days of the enactment of the law, or by July 3, 2011. As of September 2012, CBP IA had initiated 100 percent, and had completed 99 percent (15,027 of 15,197), of the outstanding reinvestigations from the backlog. According to CBP IA officials, 13,968 of the reinvestigations that were completed as of September 2012 had been adjudicated favorably, and CBP officials stated that they had referred three additional cases to the Office of Labor and Employee Relations for possible disciplinary action. CBP IA data indicate, however, that about 62 percent of the favorably adjudicated reinvestigations initially identified some type of issue, such as criminal or dishonest conduct or illegal drug use, which required further review during the adjudication process. According to CBP IA officials, PSD adjudicators mitigated these issues and determined that they did not warrant any referrals to labor and employee relations officials for disciplinary actions. CBP IA officials stated that they are considering implementing a polygraph requirement for incumbent employees; however, CBP has not yet assessed the feasibility of expanding the program beyond applicants. 
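The backlog figures cited in the report imply straightforward percentages. As a hypothetical illustration using only numbers stated in this report, a short Python sketch makes the arithmetic explicit; the derived count of flagged cases is an approximation, since the report gives the 62 percent share only to the nearest percent:

```python
# Sketch of the reinvestigation-backlog arithmetic from the report (as of September 2012).
# All inputs are figures stated in the report; the flagged-case count is a derived estimate.
backlog_total = 15_197          # outstanding periodic reinvestigations as of 2010
completed = 15_027              # reinvestigations completed as of September 2012
favorably_adjudicated = 13_968  # completed reinvestigations adjudicated favorably
share_with_issues = 0.62        # approximate share of favorable adjudications that initially flagged an issue

completion_rate = completed / backlog_total
flagged = round(favorably_adjudicated * share_with_issues)  # rough estimate, not a reported figure

print(f"Backlog completion rate: {completion_rate:.1%}")  # rounds to the report's "99 percent"
print(f"Favorable adjudications that initially flagged an issue: ~{flagged:,}")
```

The sketch simply restates the report's percentages from their underlying counts; it is a consistency check, not CBP data analysis.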
In May 2012, CBP’s Acting Deputy Commissioner testified that the agency is considering whether and how to subject incumbent officers and agents to polygraph examinations. CBP IA officials and supervisory CBPOs and BPAs that we interviewed at all four of the locations we visited expressed concerns about the suitability of the officers and agents hired during the surges because most of these officers and agents did not take a polygraph examination. CBP IA’s Assistant Commissioner also stated that he supports a periodic polygraph requirement for incumbent officers because of the breadth and volume of derogatory information that applicants have provided during the polygraph examinations. The Assistant Commissioner and other senior CBP officials stated that they have begun to consider various factors related to expanding polygraph examinations to incumbent officers and agents in CBP. However, CBP has not yet fully assessed the costs and benefits of implementing polygraph examinations on incumbent officers and agents, as well as other factors that may affect the agency’s efforts to expand the program. For example: Costs. In September 2012, CBP IA officials told us that they had not fully examined the costs associated with different options for expanding the polygraph examination requirement to incumbent employees. To test 5 percent of current eligible law enforcement employees (about 45,000 officers and agents), for example, equates to 2,250 polygraph examinations annually, according to CBP IA. Testing 20 percent of eligible employees each year, by comparison, equates to 9,000 polygraph examinations annually. CBP IA preliminarily identified some costs based on the average cost per polygraph examination (about $800); however, it has not completed analyses of other costs associated with testing incumbent employees, including those associated with mission support specialists, adjudicators, and supervisors who would need to be hired and trained to conduct the examinations. 
In October 2012, CBP IA officials stated that there would be further costs associated with training polygraph examiners— approximately $250,000 per examiner. CBP has not determined the full costs associated with expanding polygraph examinations to incumbent employees to help assess the feasibility of various options for expansion. Authority and ability to polygraph incumbents. According to OPM requirements, to conduct polygraph examinations on current employees, CBP would need to request and obtain approval from OPM. As of September 2012, CBP had not yet sought approval from OPM to conduct polygraph examinations on incumbent employees because CBP’s senior leadership had not completed internal discussions about how and when to seek this approval. In addition, CBP officials identified other factors that the agency has not yet assessed, which could affect the feasibility of conducting polygraph examinations on incumbent employees. These factors include the need to assess how the agency will use the results of incumbent employees’ polygraphs and whether these options are subject to negotiation with the labor unions that represent CBPOs and BPAs. For example, according to CBP officials, it might be necessary to negotiate with the unions as to what disciplinary action could be taken based on the possible outcomes of the examination, including the test results themselves and any admissions of illegal activity or misconduct made by the employee during the examination. Frequency or number of polygraph examinations to be conducted. According to the CBP IA Assistant Commissioner, the agency has identified possible options for how frequently to implement polygraph examinations for incumbent employees or for what population to conduct the examinations. 
For example, possible options include conducting polygraph examinations on a random sample of incumbent employees each year (e.g., 5 percent or 20 percent of eligible employees each year), or conducting the examinations based on reasonable suspicion of finding derogatory information. CBP IA officials stated that testing incumbent employees on a random basis could have a deterrent effect by causing some employees to cease their corrupt behavior, and dissuading other employees from becoming involved in corrupt behavior. Although CBP has identified possible options for how frequently to implement polygraph examinations for incumbent employees or for what population to conduct the examinations, CBP officials stated that they have not assessed the feasibility of implementing these options, particularly in light of their relative costs and benefits. Standard practices for project management call for the feasibility of programs to be considered early on. Moreover, standard practices for project management state that specific desired outcomes or results should be conceptualized, defined, and documented as part of a road map. CBP has not fully assessed the feasibility of expanding the polygraph program to incumbent officers and agents, in accordance with standard practices for project management, including assessing all of the associated costs and benefits, options for how the agency will use the results of the examinations, and the trade-offs associated with testing incumbent officers and agents at various frequencies. In October 2012, the CBP IA Assistant Commissioner stated that the agency has begun to discuss options with senior agency officials for expanding its polygraph program. He and other senior CBP IA officials acknowledged that his office had not yet fully assessed the various factors that might affect the feasibility of expanding the polygraph program and agreed that such an assessment would be useful in discussions with CBP senior management. 
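The testing-frequency arithmetic discussed above can be sketched in a few lines of Python. The 45,000 eligible employees and $800 average per-exam cost come from this report; the helper functions and the total-cost figures they produce are illustrative only and cover direct examination costs, not the examiner training (about $250,000 per examiner), staffing, and adjudication costs CBP had yet to estimate:

```python
# Rough illustration of the polygraph-expansion arithmetic cited in the report.
# Figures from the report: ~45,000 eligible officers and agents, ~$800 per examination.
ELIGIBLE_EMPLOYEES = 45_000
COST_PER_EXAM = 800  # average direct cost per polygraph examination, in dollars

def annual_exams(sample_rate: float, eligible: int = ELIGIBLE_EMPLOYEES) -> int:
    """Examinations per year if a fixed share of eligible employees is tested annually."""
    return int(eligible * sample_rate)

def direct_exam_cost(sample_rate: float) -> int:
    """Direct examination cost only; excludes training, mission support,
    adjudicator, and supervisor costs the report notes were not yet analyzed."""
    return annual_exams(sample_rate) * COST_PER_EXAM

for rate in (0.05, 0.20):
    print(f"{rate:.0%} sampling: {annual_exams(rate):,} exams/yr, "
          f"~${direct_exam_cost(rate):,} in direct exam costs")
```

At the report's stated figures, the 5 percent option reproduces CBP IA's 2,250 exams per year and the 20 percent option its 9,000; the dollar totals are a lower bound for any feasibility assessment, since they omit the indirect costs.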
Assessing the feasibility of expanding periodic polygraphs early on in its planning efforts, consistent with standard practices, could help CBP determine how to best achieve its goal of strengthening integrity-related controls over incumbent CBPOs and BPAs. A senior PSD official stated that PSD has not implemented a quality assurance program at the level desired because it has prioritized its resources in recent years to address hiring goals and the mandated requirements to clear the backlog of reinvestigations. PSD established a quality assurance program in 2008 to help ensure that proper policies and procedures are followed during the course of the preemployment background investigations and incumbent employee reinvestigations. As part of this program, PSD is to (1) review, on a monthly basis, no more than 5 percent of all completed investigations to ensure the quality and timeliness of the investigations and to identify any deficiencies in the investigation process, and (2) report the findings or deficiencies in a standardized checklist so that corrective action can be taken, if necessary. As of September 2012, PSD officials stated that they have not consistently completed the monthly checks, as required by the quality assurance program, because they have prioritized their resources to screen applicants to meet CBP’s hiring goals. PSD officials stated that they have performed some of the required checks since 2008. However, PSD officials could not provide data on how many checks were conducted or when the checks were conducted because they did not retain the results of the checks on the required checklists. In addition, CBP IA officials stated that they had performed 16 quality reviews on an ad hoc basis outside of the monthly checks from fiscal years 2008 through 2010. CBP IA documented the results of these ad hoc checks, which did not identify significant deficiencies according to officials. 
Standards for Internal Control in the Federal Government provides guidance on the importance of evaluating the effectiveness of controls and ensuring that the findings of audits and other reviews are promptly resolved and evaluated within established time frames so that all actions that correct or otherwise resolve the matters have been brought to management’s attention. The standards also state that all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. Senior CBP IA officials stated that a quality assurance program is an integral part of their overall applicant screening efforts, and they stated that it is critical for CBP IA to identify and leverage resources to ensure that the quality assurance program is fully implemented on a consistent basis. Without a quality review program that is implemented and documented on a consistent basis, it is difficult to determine the extent to which deficiencies, if any, exist in the investigation and adjudication process and whether individuals that are unsuitable for employment are attempting to find employment with CBP. As a result, it is difficult for CBP to provide reasonable assurance that cases have been investigated and adjudicated properly and that corruption risk to the agency is mitigated accordingly. In addition to CBP’s screening tools for applicants and incumbent employees, OFO and USBP have developed controls to help mitigate the risk of potential CBPO and BPA corruption and misconduct (see table 7). For example, OFO has been able to use upgraded technology at ports of entry to help prevent and detect possible officer misconduct and to monitor officers’ activities while on duty. USBP established a policy that limits the use of portable electronic devices while on duty to mitigate the risks of agents potentially organizing illegal border crossings. 
Senior USBP officials stated that its agents operate in an environment that does not lend itself to the types of technological controls, such as Red Flag, that OFO has implemented at the ports of entry, which are more confined and predictable environments than Border Patrol environments. For example, BPAs are required to patrol miles of terrain that may be inaccessible to radio coverage by supervisors at the sector offices. CBPOs operate in more controlled space at U.S. ports of entry as opposed to the open terrain across USBP sectors. Nevertheless, USBP officials stated that they are working with AMSCO and CBP IA to identify innovative ways that technology might be used to assist USBP in mitigating the risk of corruption along the border. In addition, in 2009, OFO established the integrity officer position to provide an additional control within the individual field offices. As of August 2012, there were 19 integrity officers across OFO’s 20 field offices; there were 5 officers across the 4 field offices on the southwest border. Integrity officers monitor integrity-related controls, including the Red Flag system and video surveillance cameras. Integrity officers also perform data analyses and provide operational support to criminal and administrative investigations against OFO employees. However, CBP IA officials stated that OFO has not consistently coordinated the integrity officer program with CBP IA, which is the designated lead for all integrity-related matters within CBP. According to a CBP directive, entities within CBP, such as OFO, that are engaged in integrity-related activities must coordinate with CBP IA to ensure organizational awareness and prevent investigative conflicts. 
CBP IA officials stated that although they are aware of the integrity officer program, they expressed concerns that the roles and responsibilities of these officers may not be clearly articulated and thus could result in potential problems, such as jeopardizing ongoing investigations. (See Statement of David Aguilar, Acting Commissioner, U.S. Customs and Border Protection, before the Subcommittee on Government Organization, Efficiency, and Financial Management, Committee on Oversight and Government Reform, U.S. House of Representatives. Washington, D.C.: Aug. 1, 2012.) We found that integrity officers interpreted their roles and responsibilities differently, including the definition of assisting with operational inquiries. For example, in our meetings with 4 of the integrity officers along the southwest border, we found that 3 defined their role to include active participation in investigations of allegations of misconduct and corruption against OFO employees. At one location we visited, the integrity officer stated that he had created an online social media profile under an assumed name to connect with CBP employees at his port of entry, one of whom was under investigation—an activity that the OFO program manager, senior OFO officials, and CBP IA officials acknowledged was beyond the scope of the intended role of the integrity officer position. Further, one integrity officer indicated that his role includes a right to “fully investigate” CBP employees, while another interpreted his role to be limited to conducting data analysis. CBP IA officials stated that integrity officers are not authorized to conduct investigations nor are they trained to do so. Differences in integrity officers’ activities across field locations could be justified given the variances at each port of entry. 
CBP IA officials expressed concerns, however, that the integrity officers may be overstepping their roles by inserting themselves into ongoing investigations, which could potentially disrupt or jeopardize ongoing investigations because they could unknowingly compromise the independence of an investigation or interview. OFO’s Acting Assistant Commissioner and the integrity officer program manager acknowledged that it would be useful to further clarify integrity officers’ duties to avoid any conflicts with ongoing investigations and ensure that the officers were approaching their duties more consistently. Clear roles and responsibilities for integrity officers developed in consultation with key stakeholders such as CBP IA, and a mechanism that monitors the implementation of those roles and responsibilities, could help OFO ensure that the program is operating effectively and, in particular, in coordination with the appropriate stakeholders like CBP IA. CBP has not developed a comprehensive integrity strategy to encompass all CBP components’ initiatives. Further, CBP has not completed some postcorruption analyses on employees convicted of corruption since October 2004, missing opportunities to gain lessons learned to enhance policies, procedures, and controls. CBP has not completed an integrity strategy that encompasses the activities of CBP components that have integrity initiatives under way, including CBP IA, OFO, and USBP, as called for in the CBP Fiscal Year 2009-2014 Strategic Plan. Specifically, CBP’s Strategic Plan states that it will deploy a comprehensive integrity strategy that integrates prevention, detection, and investigation. Further, a 2008 CBP directive states that CBP IA is responsible for developing and implementing CBP’s comprehensive integrity strategy to prevent, detect, and investigate all threats to the integrity of CBP. 
We have previously reported that developing effective strategies can help ensure successful implementation of agencywide undertakings where multiple entities are involved, such as CBP integrity-related efforts. Elements of an effective strategy include, among others, (1) identifying the purpose, scope, and particular problems and threats the strategy is directed toward; (2) establishing goals, subordinate objectives and activities, priorities, timelines, and performance measures; (3) defining costs, benefits, and resource and investment needs; and (4) delineating roles and responsibilities. CBP convened the IPCC in 2011 as a forum to discuss integrity-related issues and ideas and to share best practices among the members. IPCC is responsible for facilitating integrity-related operations of individual offices within CBP as a deliberative body. In particular, IPCC was tasked with making recommendations to address the results of an integrity study conducted by the Homeland Security Studies and Analysis Institute. (See Homeland Security Studies and Analysis Institute, U.S. Customs and Border Protection Workforce Integrity Study. Dec. 15, 2011.) The IPCC is composed of representatives from CBP IA, OFO, USBP, Human Resources Management, and Labor and Employee Relations, among others. In addition, USBP established integrity committees in selected sectors, including along the southwest border, to establish training and guidance to help BPAs and reinforce concepts such as professional behavior and ethical decision making. OFO established an Integrity Committee to review misconduct and corruption data related to OFO employees, identify potential trends, and develop integrity initiatives to address any concerns. 
Although CBP IA has a strategic implementation plan for its activities and officials told us that these integrity coordination committees have been useful as forums for sharing information about the components’ respective integrity-related initiatives, CBP has not yet developed and deployed an agencywide integrity strategy. During the course of our review, CBP IA began drafting an integrity strategy for approval by the components and CBP’s senior management, in accordance with CBP’s Fiscal Year 2009-2014 Strategic Plan. CBP IA officials stated that a comprehensive strategy is important because it would help guide CBP integrity efforts and can, in turn, lead to specific objectives and activities, better allocation and management of resources, and clarification of roles and responsibilities. A 2011 workforce integrity study commissioned by CBP recommended that CBP develop a comprehensive integrity strategy and concluded that without such a strategy, there is potential for inconsistent efforts, conflicting roles and responsibilities, and unintended redundancies. However, CBP IA’s Assistant Commissioner stated that, as of September 2012, his office had not developed timelines for completing and implementing the agencywide integrity strategy and has not been able to finalize the draft, in accordance with the Fiscal Year 2009-2014 Strategic Plan. He indicated that there has been significant cultural resistance among some CBP component entities in acknowledging CBP IA’s authority and responsibility for overseeing the implementation of all CBP integrity-related activities. Program management standards state that successful execution of any program includes developing plans that include a timeline for program deliverables. Without target timelines, it will be difficult for CBP to monitor progress made toward the development and implementation of an agencywide strategy. 
Further, it is too soon for us to determine if the final strategy will meet the key elements of an effective strategy that encompasses CBP-wide integrity stakeholders’ goals, milestones, performance measures, resource needs, and roles and responsibilities. A strategy that includes these elements could help better position CBP to provide oversight and coordination of integrity initiatives occurring across the agency. CBP has not completed analyses of some cases in which CBPOs and BPAs were convicted of corruption-related charges. Such analyses could provide CBP with information to better identify corruption or misconduct risks to the workforce or modify existing policies, procedures, and controls to better detect or prevent possible corrupt activities on the part of CBPOs and BPAs. In 2007, OFO directed relevant managers to complete postcorruption analysis reports for each employee convicted of corruption. In 2011, USBP began requiring that these reports be completed after the conviction of any USBP employee for corruption. The reports are to include information such as how the employee committed the corrupt activity, and provide, among other things, recommendations on how USBP and OFO could improve policies, procedures, and controls to prevent or detect similar corruption in the future. For example, according to an OFO Director, several reports stated that the use of personal cell phones helped facilitate and coordinate drug smuggling efforts. As a result of these analyses, OFO implemented a restriction on the use of personal cell phones while on duty. As of October 2012, OFO has completed about 66 percent of the total postcorruption analysis reports on OFO employees convicted since October 2004 (47 of 71 total convictions). 
OFO’s Incident Management Division Director stated that OFO had not completed the remaining reports because some convictions occurred prior to the 2007 OFO directive or because the convictions had not been published on CBP IA’s internal website—the notification that starts OFO’s 30-day window for completing the report. USBP has completed about 4 percent of postcorruption analysis reports on USBP employees convicted since October 2004 (2 of 45 total convictions). USBP was instructed to complete postcorruption analysis reports in August 2011, and USBP officials stated that the agency does not have plans to complete analyses for convictions before August 2011 because CBP IA is reviewing these cases as part of a study to analyze behavioral traits among corrupt employees. However, CBP IA’s study does not substitute for postcorruption analysis reports because for this study, CBP IA researchers are exploring the convicted employees’ thinking and behavior to gain insights into the motives behind the betrayal of trust, how the activity originated, and how they carried out the illegal activity. The postcorruption reports, however, go beyond this type of analysis, aiming also to identify deficiencies in port or sector processes that may have fostered or permitted corruption and to produce recommendations specific to enhancing USBP policies, procedures, or controls. A USBP Deputy Chief acknowledged that completing the remaining reports could be beneficial to understanding any trends or patterns of behavior among BPAs convicted of corruption. In some cases, OFO and USBP officials stated that it may be difficult to complete postcorruption analysis reports for older convictions, as witnesses and other information on the corruption-related activities may no longer be available. Standards for Internal Control in the Federal Government provides guidance on the importance of identifying and analyzing risks, and using that information to make decisions. 
These standards address various aspects of internal control that should be continuous, built-in components of organizational operations. One internal control standard, risk assessment, calls for identifying and analyzing risks that agencies face from internal and external sources and deciding what actions should be taken to manage these risks. The standards indicate that conditions governing risk continually change and periodic updates are required to ensure that risk information, such as vulnerabilities in the program, remains current and relevant. Information collected through periodic reviews, as well as daily operations, can inform the analysis and assessment of risk. Complete and timely information from postcorruption analysis reports of all convictions could assist USBP and OFO management in obtaining and sharing lessons learned to enhance integrity-related policies, procedures, and controls throughout CBP. Data indicate that the overwhelming majority of CBP employees adhere to the agency’s integrity standards; however, a small minority have been convicted of engaging in corruption due, in part, to the increasing pressure from drug-trafficking and other transnational criminal organizations that are targeting CBPOs and BPAs, particularly along the southwest U.S. border. The Acting Commissioner of CBP testified that no act of corruption within the agency can or will be tolerated and that acts of corruption compromise CBP’s ability to achieve its mission to secure America’s borders against all threats while facilitating and expediting legal travel and trade. Strategic and continuous monitoring of operational vulnerabilities is important given the shifting tactics of drug-trafficking organizations seeking to infiltrate the agency. Therefore, CBP has taken steps to mitigate the risk of misconduct and corruption among incoming CBPOs and BPAs by implementing controls during the preemployment screening process. 
However, tracking and maintaining data on the results of its screening tools for applicants, a feasibility assessment for potential expansion of polygraph requirements, and a robust quality assurance program for background investigations and periodic reinvestigations that ensures reviews are consistently conducted and documented could better position CBP to mitigate the risk of employee corruption. In addition, clear roles and responsibilities for OFO’s integrity officers developed in coordination with appropriate stakeholders such as CBP IA could help CBP ensure that the program is operating effectively. Moreover, establishing a target time frame for completing a comprehensive integrity strategy could help CBP ensure sufficient progress toward its development and implementation. In addition, completed postcorruption analysis reports of former CBP employees who have been arrested for corruption could better position CBP to implement any lessons learned from these cases. To enhance CBP’s efforts to mitigate the risk of corruption and misconduct among CBPOs and BPAs, we recommend that the CBP commissioner take the following seven actions: develop a mechanism to maintain and track data on the sources of information (e.g., background investigation or polygraph examination admissions) that PSD uses to determine which applicants are not suitable for hire to help CBP IA assess the effectiveness of its applicant screening tools; assess the feasibility of expanding the polygraph program to incumbent CBPOs and BPAs, including the associated costs and benefits, options for how the agency will use the results of the examinations, and the trade-offs associated with testing incumbent officers and agents at various frequencies; conduct quality assurance reviews of CBP IA’s adjudications of background investigations and periodic reinvestigations, as required in PSD’s quality assurance program; establish a process to fully document, as required, any deficiencies identified through 
PSD’s quality assurance reviews; develop detailed guidance within OFO on the roles and responsibilities for integrity officers, in consultation with appropriate stakeholders such as CBP IA; set target timelines for completing and implementing a comprehensive integrity strategy; and, complete OFO and USBP postcorruption analysis reports for all CBPOs and BPAs who have been convicted of corruption-related activities, to the extent that information is available. We provided a draft of this report to DHS for its review and comment. DHS provided written comments, which are reproduced in full in appendix II. DHS concurred with all seven recommendations and described actions under way or plans to address them. DHS also discussed concerns it had with periodically polygraphing incumbent law enforcement officers. With regard to our first recommendation, DHS concurred and indicated that by March 31, 2013, CBP expects to collect data on the impact of the polygraph examination regarding the outcome of CBP applicant suitability adjudications and undertake steps to ensure data reliability across various CBP personnel security databases. With regard to the second recommendation, while DHS concurred, it reported possible adverse impacts associated with periodically polygraphing incumbent law enforcement officers. Specifically, DHS noted that doing so could adversely affect CBP resources without additional resources to implement the requirement. While we understand DHS’s concerns, we did not recommend that CBP expand its polygraph program to incumbent employees; rather, we recommended that CBP assess the feasibility of expanding polygraph examinations to incumbent CBPOs and BPAs. Thus, concerns such as these could be considered in conducting its feasibility assessment. 
As we reported, assessing the feasibility of expanding periodic polygraphs early on in its planning efforts could help CBP determine how to best achieve its goal of strengthening integrity-related controls over incumbent CBPOs and BPAs. In addition, DHS noted that expanding the polygraph program to incumbent employees would be contingent on approval from OPM and may encounter resistance from unions representing CBP’s employees who may view it as a potential change to the conditions of their employment. As noted in the report, these are important factors CBP could consider in assessing the feasibility of expanding the polygraph program. With regard to the other five recommendations, DHS concurred and indicated that CBP will work to strengthen its current quality assurance processes and develop a process to document deficiencies identified through quality reviews; develop detailed guidance on the duties, roles, and responsibilities of integrity officers; complete a comprehensive integrity strategy; and develop postcorruption analysis reports for any convictions that do not currently have such reports. DHS estimates that it will complete these steps by July 31, 2013. The actions that DHS has planned or under way should help address the intent of the recommendations. DHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To examine data on arrests of and allegations against U.S. 
Customs and Border Protection (CBP) employees accused of corruption or misconduct, we analyzed data on 144 CBP employees arrested or indicted from fiscal year 2005 through fiscal year 2012 for corruption activities. We also analyzed data on allegations of corruption and misconduct against CBP employees from fiscal years 2006 through 2011. For both arrest and allegation data, these are the time periods for which the most complete data were available. In particular, we analyzed variations in both sets of data across CBP components and geographic region. To assess the reliability of these data, we (1) performed electronic data testing and looked for obvious errors in accuracy and completeness, and (2) interviewed agency officials knowledgeable about these data to determine the processes in place to ensure their accuracy. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we interviewed officials from CBP’s Office of Internal Affairs (IA), Office of Field Operations (OFO), United States Border Patrol (USBP), and CBP’s Human Resource Management and Labor and Employee Relations offices to gain their perspectives on these data on CBP employee corruption and misconduct. To evaluate CBP’s implementation of integrity-related controls to prevent and detect employee misconduct and corruption, we analyzed relevant laws such as the Anti-Border Corruption Act of 2010, which requires, by January 2013, that all CBP officer (CBPO) and U.S. Border Patrol Agent (BPA) applicants receive polygraph examinations before they are hired. We also reviewed documentation on CBP’s preemployment screening practices and their results—including background investigations and polygraph examinations—and relevant data and documentation on the random drug testing program and the periodic reinvestigation process for incumbent CBPOs and BPAs.
In particular, we evaluated CBP IA data on the technical results of polygraph examinations from January 2008 through August 2012. To assess the reliability of the technical results of the polygraph data, we (1) performed electronic data testing and looked for obvious errors in accuracy and completeness, and (2) interviewed agency officials knowledgeable about these data to determine the processes in place to ensure their accuracy. We determined that these data were sufficiently reliable for the purposes of this report. In addition, we examined CBP IA’s quality assurance program for its Personnel Security Division (PSD), including interviewing PSD officials who are responsible for deciding whether an applicant or incumbent officer or agent is suitable for hire or continued employment. We also analyzed Human Resource Management’s random drug testing data for fiscal years 2009 through 2011, the time period for which the most complete data were available, and examined the results of those mandated periodic reinvestigations that CBP IA had completed as of September 2012. To assess the reliability of these data, we conducted tests for accuracy and interviewed officials responsible for managing the drug testing and reinvestigation programs and found that the data were sufficiently reliable for the purposes of our report. We compared CBP’s integrity-related controls, as applicable, against recommended controls in Standards for Internal Control in the Federal Government and standard practices from the Project Management Institute. Furthermore, we conducted site visits to four locations along the southwest U.S. border to observe the implementation of various integrity-related controls and obtain perspectives from CBP IA, OFO, and USBP officials at these locations on the implementation of integrity-related controls. We conducted these visits in El Paso, Texas; Laredo, Texas; San Diego, California; and Tucson, Arizona.
We selected these locations on the basis of a variety of factors, including the colocation of CBP IA with OFO offices and USBP sectors along the southwest border and the number of allegations against or arrests of CBP employees for corruption or misconduct. Because we selected a nonprobability sample of sites, the information we obtained from these interviews and visits cannot be generalized to all OFO, USBP, and CBP IA field locations. However, observations obtained from these visits provided us with a greater understanding of CBP’s integrity-related initiatives. To evaluate CBP’s integrity strategy, including how the agency incorporates lessons learned from prior misconduct and corruption cases, we reviewed documentation on integrity initiatives from CBP IA, OFO, and USBP, as well as from the Integrity Integrated Planning and Coordination Committee (IPCC), which CBP convened in 2011 as a forum to discuss integrity-related issues and ideas and to share standard practices among the members. In particular, we analyzed these documents against the requirements set forth in the CBP Fiscal Year 2009-2014 Strategic Plan. In addition, we analyzed all available postcorruption analysis reports, which identify deficiencies that may have enabled CBP employees to engage in corruption-related activities, against OFO and USBP program requirements. We interviewed officials in Washington, D.C., from the Office of Policy and Planning, CBP IA, USBP, OFO, and IPCC, as well as officials during our site visits, regarding CBP’s integrity strategy and the extent to which CBP is using lessons learned from prior corruption and misconduct cases to guide changes in policies and procedures, as appropriate. We conducted this performance audit from December 2011 to December 2012 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kathryn Bernet, Assistant Director; David Alexander; Nanette J. Barton; Frances Cook; Wendy Dye; David Greyer; Jackson Hufnagle; Wendy Johnson; Otis S. Martin; and Linda Miller made significant contributions to the work.

CBP, a component within the Department of Homeland Security, is responsible for securing U.S. borders and facilitating legal travel and trade. Drug-trafficking and other transnational criminal organizations are seeking to target CBP employees with bribes to facilitate the illicit transport of drugs, aliens, and other contraband across the southwest U.S. border, in particular. CBP IA is responsible for promoting the integrity of CBP’s workforce, programs, and operations; and CBP components implement integrity initiatives. GAO was asked to review CBP’s efforts to ensure the integrity of its workforce. This report examines (1) data on arrests of and allegations against CBP employees for corruption or misconduct, (2) CBP’s implementation of integrity-related controls, and (3) CBP’s strategy for its integrity programs. GAO analyzed arrest and allegation data since fiscal years 2005 and 2006, respectively, reviewed integrity-related policies and procedures, and interviewed CBP officials in headquarters and at four locations along the southwest border selected for geographic location, among other factors. U.S. Customs and Border Protection (CBP) data indicate that arrests of CBP employees for corruption-related activities since fiscal year 2005 account for less than 1 percent of CBP’s entire workforce per fiscal year. The majority of arrests of CBP employees were related to misconduct.
There were 2,170 reported incidents of arrests for acts of misconduct such as domestic violence or driving under the influence from fiscal year 2005 through fiscal year 2012, and a total of 144 current or former CBP employees were arrested or indicted for corruption-related activities, such as the smuggling of aliens and drugs, of whom 125 have been convicted as of October 2012. Further, the majority of allegations against CBP employees since fiscal year 2006 occurred at locations along the southwest border. CBP officials have stated that they are concerned about the negative impact that these cases have on agencywide integrity. CBP employs screening tools to mitigate the risk of employee corruption and misconduct for both applicants (e.g., background investigations and polygraph examinations) and incumbent CBP officers and Border Patrol agents (e.g., random drug tests and periodic reinvestigations). However, CBP’s Office of Internal Affairs (IA) does not have a mechanism to maintain and track data on which of its screening tools (e.g., background investigation or polygraph examination) provided the information used to determine which applicants were not suitable for hire. Maintaining and tracking such data is consistent with internal control standards and could better position CBP IA to gauge the relative effectiveness of its screening tools. CBP IA is also considering requiring periodic polygraphs for incumbent officers and agents; however, it has not yet fully assessed the feasibility of expanding the program. For example, CBP has not yet fully assessed the costs of implementing polygraph examinations on incumbent officers and agents, including costs for additional supervisors and adjudicators, or factors such as the trade-offs associated with testing incumbent officers and agents at various frequencies.
A feasibility assessment of program expansion could better position CBP to determine whether and how to best achieve its goal of strengthening integrity-related controls for officers and agents. Further, CBP IA has not consistently conducted monthly quality assurance reviews of its adjudications since 2008, as required by internal policies, to help ensure that adjudicators are following procedures in evaluating the results of the preemployment and periodic background investigations. CBP IA officials stated that they have performed some of the required checks since 2008, but they could not provide data on how many checks were conducted. Without these quality assurance checks, it is difficult for CBP IA to determine the extent to which deficiencies, if any, exist in the adjudication process. CBP does not have an integrity strategy, as called for in its Fiscal Year 2009-2014 Strategic Plan. During the course of our review, CBP IA began drafting a strategy, but CBP IA’s Assistant Commissioner stated that the agency has not set target timelines for completing and implementing this strategy. Moreover, he stated that there has been significant cultural resistance among some CBP components in acknowledging CBP IA’s authority for overseeing all integrity-related activities. Setting target timelines is consistent with program management standards and could help CBP monitor progress made toward the development and implementation of an agencywide strategy. GAO recommends that CBP, among other things, track and maintain data on sources of information used to determine which applicants are unsuitable for hire, assess the feasibility of expanding the polygraph program to incumbent officers and agents, consistently conduct quality assurance reviews, and set timelines for completing and implementing a comprehensive integrity strategy. DHS concurred and reported taking steps to address the recommendations.
Under the SSI program, SSA pays monthly benefits to individuals who have limited assets and income and are aged, blind, or disabled. These benefits are funded by general tax revenues and based on financial need. SSA has estimated that, during fiscal year 2002, it will make SSI benefits payments totaling approximately $31.5 billion to about 6.4 million individuals. Since 1997, we have designated SSI a high-risk program because of its susceptibility to fraud, waste, and abuse and SSA’s insufficient management oversight. Long-standing concerns regarding program abuses and mismanagement, increasing overpayments, and the inability to recover outstanding SSI debt have led to congressional criticism of SSA’s ability to effectively manage and ensure the program’s integrity. In addition to SSI, SSA administers the OASI and DI programs—together commonly known as Social Security. These are entitlement programs funded from trust funds supported by taxes that workers pay on their wages. OASI provides monthly cash retirement benefits to workers and their dependents or, when workers die, benefits to their survivors. The DI program provides monthly cash benefits to workers and their dependents when workers are disabled. In fiscal year 2002, the OASI and DI programs collectively are expected to pay approximately $447 billion in benefits to about 46 million eligible workers, dependents, and survivors. In 1996, the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) prohibited fugitive felons from collecting SSI benefits.
Specifically, under the law, an individual is ineligible to receive SSI payments during any month in which he or she is fleeing to avoid prosecution for a crime that is a felony under the laws of the place from which the person flees, fleeing to avoid custody or confinement after conviction for a crime that is a felony under the laws of the place from which the person flees, or violating a condition of probation or parole imposed under federal or state law. PRWORA provides SSA with the authority to suspend SSI payments to fugitive felons and parole and probation violators and to provide information to law enforcement agencies to aid in locating and apprehending these individuals. The act does not provide similar authority for OASI and DI benefits payments. In response to PRWORA, SSA established the fugitive felon program and entered into a partnership with its Office of Inspector General (OIG). SSA’s OIG, with its 63 field divisions and offices, has both program integrity and law enforcement functions and is the primary interface between SSA and law enforcement entities. It can investigate and make arrests for program fraud in collaboration with other law enforcement agencies pursuing SSI recipients engaging in criminal activities. Beyond OIG, numerous other offices also assist in implementing the program. As shown in figure 1, these include SSA’s offices of operations, disability and income security programs, and systems; its regional and field offices; and the FBI’s Information Technology Center in Fort Monmouth, New Jersey. Congress does not appropriate funds to administer the fugitive felon program. Rather, according to SSA officials, each participating SSA office (for example, the office of operations and OIG) and the FBI Information Technology Center use existing funding to support the program. Under the fugitive felon program, SSA relies on warrant information from available federal and state sources to identify ineligible SSI recipients on its rolls. 
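The three statutory conditions above amount to a simple eligibility predicate. The sketch below is purely illustrative: the category labels and record fields are assumptions for the sake of the example, not actual SSA or NCIC warrant codes.

```python
from dataclasses import dataclass

# Illustrative labels for the three PRWORA ineligibility categories;
# these are assumptions, not actual SSA or NCIC offense codes.
PRWORA_CATEGORIES = {
    "fleeing_felony_prosecution",
    "fleeing_custody_after_felony_conviction",
    "probation_or_parole_violation",
}

@dataclass
class WarrantRecord:
    ssn: str
    category: str  # hypothetical field naming the warrant's basis

def ineligible_for_ssi(warrant: WarrantRecord) -> bool:
    """Under PRWORA, an individual is ineligible for SSI payments in any
    month in which an active warrant places him or her in one of the
    three categories above."""
    return warrant.category in PRWORA_CATEGORIES
```

For example, a parole-violation warrant triggers suspension of SSI payments, while a misdemeanor warrant does not; the act provides no comparable bar for OASI or DI benefits.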
SSA receives federal and state warrant information from several sources, including (1) the FBI’s National Crime Information Center (NCIC); (2) state and local law enforcement agencies; and (3) the U.S. Marshals Service (USMS). SSA receives warrant information from the FBI and USMS under memorandums of understanding and from certain state and local law enforcement agencies under matching agreements that establish conditions for SSA’s use of the warrant information in its matching operations. According to SSA, in calendar year 2001, it received approximately 27 million warrant records from these federal, state, and local law enforcement agencies. Of these, about 2 million records were eligible to be matched against SSI benefits files. Appendix II provides a detailed description of the fugitive felon data-matching process. OIG reports that, since 1996, the fugitive felon program data matching operations (manual and automated) have helped identify about 45,000 fugitives who were paid approximately $82 million in SSI benefits; of these fugitives, approximately 5,000 were subsequently apprehended. Appendix III presents selected cases in which fugitives were apprehended and SSI benefits were suspended as a result of the fugitive felon program. In administering the fugitive felon program, SSA faces several technological and other barriers that create inefficiencies in its processing of fugitive warrant information to identify ineligible SSI recipients. These barriers include a complex, multistep process to obtain and act on fugitive warrant information and a heavily manual approach to accomplishing critical program tasks, such as exchanging and verifying warrant information. In addition, where information systems are used to support the program, many of them are not interoperable or capable of exchanging data electronically. Consequently, key portions of the data-matching process are complicated and time-consuming. 
Contributing to this situation is that SSA has not designated a single, central point of management accountability to direct the fugitive felon program’s operations. The steps in administering the fugitive felon program—from the point that SSA receives the fugitive warrant information through the suspension of SSI benefits—are complicated and include many back-and-forth exchanges of warrant information among the participating entities. At the time of our review, each of the organizations participating in the program had responsibility for distinct segments of the tasks involved in processing fugitive warrant information received from federal, state, and local law enforcement agencies. However, there was no single entity within SSA that was able to provide a full explanation of the complete chain of activities comprising the data sharing and matching process; as a result, we mapped the process ourselves. We have depicted this overall process in figure 2. As figure 2 illustrates, SSA receives warrant records (usually on a monthly basis) from the FBI’s national repository—NCIC—and from USMS and state and local law enforcement agencies. Using its Enumeration Verification System, SSA matches the warrant records against its master files of Social Security number holders and applications to verify identity information, such as the name, date of birth, and Social Security number of the individual for whom the warrant was issued. Of those records for which identities can be verified, OIG screens the data to eliminate misdemeanors. Then, a second match is conducted against files maintained in SSA’s supplemental security record to determine which of the fugitives are receiving SSI benefits. The results of the second match (addresses of the fugitive benefits recipients) are forwarded to OIG for further processing. 
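The two matches and the misdemeanor screen described above can be sketched as a short pipeline. All record layouts, field names, and lookup structures here are invented for illustration; SSA's actual Enumeration Verification System and supplemental security record are far more involved.

```python
def match_warrants(warrants, master_file, ssi_records):
    """Illustrative sketch of the data-matching steps:
    1. verify identity (SSN, name, date of birth) against the master
       files of Social Security number holders and applications;
    2. screen out misdemeanor warrants (a step performed by OIG);
    3. match the remainder against supplemental security records to
       find fugitives currently receiving SSI benefits.
    Returns the leads (recipient addresses) forwarded to OIG."""
    # Step 1: identity verification (the Enumeration Verification System's role)
    verified = [
        w for w in warrants
        if master_file.get(w["ssn"]) == (w["name"], w["dob"])
    ]
    # Step 2: OIG screening eliminates misdemeanors
    felonies = [w for w in verified if w["offense_class"] == "felony"]
    # Step 3: second match against SSI benefit records yields the leads
    return [ssi_records[w["ssn"]] for w in felonies if w["ssn"] in ssi_records]

# Hypothetical sample data for illustration only
master = {"111-22-3333": ("DOE JOHN", "1960-01-15")}
warrants = [{"ssn": "111-22-3333", "name": "DOE JOHN", "dob": "1960-01-15",
             "offense_class": "felony"}]
ssi = {"111-22-3333": "123 Main St, Anytown"}
leads = match_warrants(warrants, master, ssi)  # ["123 Main St, Anytown"]
```

A warrant whose identity details fail to verify, or whose offense is a misdemeanor, drops out of the pipeline before the second match, which mirrors the screening order described in the report.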
OIG and its field offices work with the FBI’s Information Technology Center (ITC) to verify that the felony, probation, or parole violation warrants are active and that the appropriate individuals have been identified. Once verifications are made, ITC provides address information about each SSI recipient (called “leads”) to the appropriate federal, state, or local law enforcement agency so that it can locate and apprehend the individual. After action by the law enforcement agency, OIG refers its findings to the appropriate SSA field offices, which initiate suspension of SSI benefits. In this process, SSA relies on its mainframe computers and systems to match the fugitive warrant information that it receives against the master files of Social Security number holders and applications and the supplemental security record. Most other steps, including sharing the warrant information used in the matching process, are performed manually. For example, SSA does not have a telecommunications capability that would allow it to accept warrant information on line. As a result, the FBI, USMS, and state and local law enforcement agencies must download warrant information from their respective databases and information systems onto various electronic media (such as cartridges, tapes, and CD-ROMs) and send this information to SSA via the U.S. mail or FedEx. Depending on the type of media used, two separate SSA offices—the Office of Central Operations and the Office of Telecommunications and Systems Operations—receive, log, and upload the information onto SSA’s mainframe computer to begin the matching process. Beyond manually sharing warrant information, many of the steps in verifying and referring information contained in the matched records also are performed manually. For example, to accurately identify and locate fugitives, SSA’s field offices, OIG, and the FBI’s ITC exchange numerous forms with law enforcement agencies.
However, none of these forms are automated, requiring SSA and ITC staff to manually prepare and fax or mail them to the appropriate entities. In addition, both OIG’s and ITC’s program activities are supported by distinct systems that are not interoperable or compatible, thus further preventing the efficient exchange of information. Specifically, OIG’s allegation and case investigative system and ITC’s automated case support system are used, respectively, to assign case and allegation numbers to matched records and to verify duplicate instances of matched data. However, these systems cannot electronically share the matched records on which both offices must act. Rather, the OIG must download files containing matched records and mail them to ITC. Further, OIG’s system uses Microsoft Word and ITC’s system uses Corel WordPerfect; thus, when ITC receives the files, it must convert them to a usable format to be able to process the warrant information. The various manual interventions in processing fugitive warrants all contribute to a time-consuming operation that is less than optimally efficient. According to program officials, the warrant files that federal and state law enforcement agencies send to SSA sometimes are not formatted in accordance with SSA’s specifications and must be returned to the agencies for correction, delaying action on matching these files. In addition, the electronic media containing warrant records are sometimes lost during the mail delivery process or are misplaced before being entered into SSA’s computers. As a result, this time-sensitive information may remain unaccounted for over a number of days. SSA had not determined the extent to which warrant records are being lost or mishandled and over what length of time, but program officials acknowledged that the longer it takes to match the warrant information, the greater the opportunity for fugitives to remain unaccounted for and to continue to receive SSI benefits payments.
Further, the officials stated that the manual steps involved in verifying fugitives’ identities and obtaining address information for referring leads to law enforcement agencies often slow the overall process of locating and apprehending fugitives. SSA officials were unable to tell us how much time was actually required to complete the processing of fugitive warrants. However, our analysis of data that SSA provided on its existing procedures found that the steps required to fully process a case that did not involve circumstances such as lost or mishandled files, or improperly formatted warrant information received from states reporting warrant information for the first time, could take up to 165 days. This approximate processing time could be increased up to an additional 70 days if the fugitive SSI recipient decides to appeal SSA’s decision to suspend benefits. As figure 3 shows, the approximate processing time includes about 65 days during which SSA and the FBI’s ITC conduct matches and initial verifications of warrant information and refer leads to law enforcement agencies. The approximate time also includes a total of 90 days that is devoted to ensuring that individuals are correctly identified and that their privacy and other rights are protected—60 days that state and local law enforcement agencies are allowed to locate and apprehend fugitives before SSA serves notice that benefits will be suspended and 30 days during which OIG field offices conduct additional verifications prior to sending summaries of actions taken on matched records to SSA field offices for suspension of benefits. Program officials informed us that state and local law enforcement agencies originally were allowed 14 days to locate and apprehend fugitives; however, the number of days allowed was increased to 60 to provide these agencies more time to identify and certify actions taken on the fugitives. 
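The component times reported above can be tallied as follows; the figures are those cited in the report, and the roughly 10 days not itemized in the figure account for the difference from the 165-day total.

```python
# Approximate processing-time components cited in the report (figure 3);
# the report's overall estimate is about 165 days, so roughly 10 days of
# steps are not broken out separately here.
matching_and_referral = 65   # matches, initial verification, lead referral
apprehension_window = 60     # time allowed to locate and apprehend fugitives
oig_field_verification = 30  # OIG verification before benefit suspension
appeal_extension = 70        # maximum added time if the recipient appeals

itemized_total = matching_and_referral + apprehension_window + oig_field_verification
print(itemized_total)          # 155 of the approximately 165-day total
print(165 + appeal_extension)  # 235 days if the suspension is appealed
```

The apprehension window dominates the back half of the process: it was originally 14 days before being extended to 60 to give law enforcement agencies more time to act.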
According to SSA, one of the difficulties with data matches is that, because fleeing felons often use aliases, law enforcement agencies frequently do not have accurate Social Security numbers or identifying information for them. Moreover, unlike prisoners, fleeing felons are not incarcerated and may not have been convicted of a crime. Consequently, the time devoted to manually verifying the currency of warrant information is vital for ensuring that the correct individuals are identified and apprehended. Program officials added that some manual verifications of warrant information are necessary to help ensure the program’s integrity. However, automating key tasks, such as the capability to accept warrant information from other agencies’ databases on line, could help eliminate much of the time devoted to initially processing and matching warrant information and verifying and referring leads that result from the matched records (now estimated to take about 65 days). At the conclusion of our review, SSA officials told us that they had recently begun considering options to automate manual processes in the field offices. For example, they stated that the agency was considering eliminating many of the field offices’ benefits suspension activities, such as providing due process notices and preparing OIG final reports and, instead, performing these activities in one regional office with the use of computers. Although SSA’s consideration of options for improving the fugitive felon process is a positive step, the agency has not analyzed or mapped its existing fugitive felon data sharing and matching process. Without doing so, SSA lacks critical information needed for targeting processes that are most in need of improvement, setting realistic improvement goals, and ensuring that it selects an appropriate approach for improving its manual operations.
The Clinger-Cohen Act of 1996 requires agency heads to analyze, revise, and improve mission-related and administrative processes before making significant investments in supporting information technology. Further, an agency should have an overall business process improvement strategy that provides a means to coordinate and integrate the various reengineering and improvement projects, set priorities, and make appropriate budget decisions. By doing so, an agency can better position itself to maximize the potential of technology to improve performance, rather than simply automating inefficient processes. Although the fugitive felon program is achieving results, it could benefit from increased management accountability. SSA relies on multiple agencies and offices to implement the fugitive felon program. However, there is no unified source of management accountability to provide the consistent oversight and program continuity that is essential to sustaining program success. Consequently, staff assigned to administer the program offered conflicting accounts as to what program tasks were being performed and by whom. For example, program officials identified three different SSA offices—operations, program support, and OIG—as having responsibility for leading the program; yet no officials in these offices could explain the overall data-matching process or had decision-making and oversight responsibility for the other participating entities. In addition, critical data needed to make informed decisions about the program’s operations, such as technological capabilities, program costs and benefits, and resource requirements, were not being captured. For example, none of the participating SSA and FBI offices could state with certainty the amount of time they devoted to processing fugitive warrant records. 
As discussed earlier, no one office within SSA had mapped the overall fugitive felon data sharing and matching process to comprehensively assess how many days were required from SSA’s receipt of warrant information until SSI benefits payments to fugitive felons were actually suspended. Further, although the program has been in place for 6 years, program officials were unable to provide data on the total costs of the program. In discussing their management of the fugitive felon program, SSA officials acknowledged that the program lacked unified management accountability. An OIG official stated that, while the agency had initially decided that both headquarters offices and OIG would jointly administer the program, these offices had only recently begun considering ways to improve their management of the program. The agency was considering the development of a management board to oversee and address program issues and concerns. However, it had not developed any specific tasks or milestones for this improvement effort. Given the inherent complexity of the fugitive felon program and the many entities involved in its implementation, effective management of operations and data is essential for determining how best to achieve and sustain future program operations and reporting. Having complete and comprehensive warrant information from states is crucial to ensuring that the objectives of the fugitive felon program are achieved. Yet, according to SSA, states currently report warrant information to NCIC on a voluntary basis; therefore, not all outstanding warrants are being included in the FBI’s NCIC database—a prime source of SSA’s matching information. Since May 2000, SSA has been taking steps to obtain more comprehensive state and local information by pursuing data-matching agreements with states that do not report all of their warrant information to NCIC.
However, a number of these states have been reluctant to enter into agreements or, once they have, have not always abided by them, largely because of SSA’s and the states’ concerns regarding the lack of information technology and adequate resources to support the program. SSA considers states to be fully reporting warrant information to NCIC if they submit information on all felonies and parole or probation violators. States are considered to be partially reporting warrant information if, for example, they report felonies but not parole and probation violators. As of May 2002 (the latest month for which data were available), SSA had identified 21 states and the District of Columbia as fully reporting warrant information to NCIC and 29 states as partially reporting warrant information. In pursuing data-matching agreements to obtain all of the states’ warrant information, SSA reported as of May 2002 that it had signed agreements with 18 states and was in various stages of negotiating agreements with 5 other states. SSA had been unsuccessful in reaching agreements with 3 states, all of which had declined to enter into the agreements. It had not yet begun negotiating agreements with 6 additional states. Figure 4 reflects the status of SSA’s attempts to obtain data-matching agreements with the states as of May 2002. SSA and state officials cited various factors—often related to their uses of information technology—that had made negotiating data-matching agreements difficult. For example, in explaining their decision to decline an agreement, Iowa officials stated that, because SSA does not have the capability to receive fugitive warrant records on line, state officials would have to reformat, download to electronic media, and mail the warrant information to SSA Headquarters. 
The officials believed that doing so would not be cost-effective and, thus, elected to continue their practice of submitting paper printouts of warrant information to the SSA OIG field office in Des Moines. In Florida, officials explained that their state had not entered an agreement with SSA and instead was fully reporting warrant information to NCIC because of SSA’s specifications for formatting and downloading the warrant information onto electronic media. They expressed concern that additional resources would be required to perform these formatting tasks and manually provide the warrant information to SSA. Further, SSA and state officials noted that negotiating data-matching agreements had been hindered by the lack of centralized databases or repositories of warrant information in some state and local law enforcement agencies. For example, officials in Oklahoma told us that because that state lacked a central repository, they did not want to enter into a data-matching agreement with SSA. The officials explained that not all of the state’s approximately 700 local law enforcement offices currently report all of their warrant information at the state level and to NCIC. Thus, to meet the intent of a data-matching agreement, each local agency would have to provide its warrant information directly to SSA. However, most local law enforcement agencies within the state do not have central repositories for reporting the information to SSA. Idaho officials added that, in addition to lacking a central repository, they had chosen not to sign a data-matching agreement with SSA because of privacy considerations. Specifically, the officials expressed concerns with the privacy and security implications of submitting sensitive warrant information via the U.S. mail. Even when agreements had been reached, however, SSA had not fully achieved its objective of obtaining comprehensive warrant information from the states.
Specifically, at the time of our review, of the 18 states with which SSA had signed agreements, only 9 were actually submitting warrant information to the agency. According to SSA, the remaining 9 states that had signed agreements but had not yet sent warrant information provided similar reasons for not complying with the agreements. These included states' concerns about the privacy and security of the warrant information and difficulties complying with SSA's record layout or formatting requirements. In addition, although they had agreed to submit warrant information to SSA, 3 states (Kentucky, Rhode Island, and Colorado) later decided instead to report all warrant information to NCIC. At the conclusion of our review, SSA officials acknowledged that the process for obtaining data-matching agreements was difficult and had not yielded the results that they had anticipated. States essentially provide warrant information on a voluntary basis, and the agreements are intended primarily to protect states' data from unauthorized disclosure and use. Nonetheless, SSA officials believed that, in the absence of a single and complete source of fugitive warrant records from all states, the data-matching agreements were necessary for ensuring that the agency could obtain comprehensive warrant information. We agree that comprehensive warrant information is vital to the success of the fugitive felon program. However, the data-matching agreements have not ensured that SSA will obtain the comprehensive warrant information that it seeks. Under current statutory provisions, fugitives are prohibited from receiving SSI benefits, but can continue to be paid OASI and DI benefits. Specifically, SSA maintains address information on fugitives receiving SSI, OASI, and DI benefits, but can only share information with law enforcement agencies on those fugitives receiving SSI.
However, the increasing realization that OASI and DI benefits payments may also finance a potentially dangerous fugitive's flight from justice has prompted the Congress to pursue implementing provisions to prohibit payments to fugitives in these programs as well. Implementing a nonpayment provision would also permit SSA to share address information on fugitives who receive OASI and DI benefits. In its own consideration of such a measure, OIG projected that doing so could result in substantial savings to the OASI and DI programs. Specifically, in an August 2000 study, OIG estimated that between August 1996 and June 1999, about 17,300 fugitives had been paid at least $108 million in OASI and DI benefits. In August 2001, the office revised its estimates, projecting that OASI and DI benefits amounting to approximately $40 million would be paid to fugitives through October 2001, and in each additional year that legislation was not enacted to prohibit such benefits payments—for a 5-year total payout of approximately $198 million. Should this legislative proposal be enacted, the fugitive felon program's workload could increase substantially. SSA officials acknowledged that the additional OASI and DI files could significantly increase the program's data-matching activities. According to an analysis that the OIG performed, the enactment of the legislative proposal would result in three times the current work level of SSI matches. The FBI believed that implementing the legislation could have varying effects on its operations. Specifically, officials in the Criminal Justice Information Services Division, which manages the NCIC database, stated that implementing the provision would have no technological impact on that organization's ability to provide SSA fugitive warrant information. They anticipated that the database would continue to supply SSA with warrant records received daily from state and local law enforcement agencies.
However, FBI and SSA OIG officials stated that the additional matched records for OASI and DI recipients could substantially increase ITC's workload associated with verifying the accuracy of the matched records and supplying fugitives' addresses to law enforcement agencies. Further, based on the OIG's study, OIG officials and those of the FBI believed that ITC's workforce would have to increase substantially—from the current staff of 7 to about 60—to accommodate the additional workload associated with handling all the leads generated through the matching process. With the potential for workload increases in the fugitive felon program, SSA and ITC officials recognized that additional information systems support would be needed to conduct computer matches of warrant information against the OASI and DI recipient files. However, neither SSA nor the FBI had yet initiated any evaluations to assess the anticipated technological impact on their operations. Such an assessment is critical to helping SSA make an informed decision regarding its ability to ensure that comprehensive and efficient data-matching operations would continue under expanded operations. As discussed in our investment guide, good decisions require good data. Consequently, having solid data on a program's operations is essential for making informed decisions concerning workload management and the technological solutions needed to sustain efficient and effective performance. As SSA proceeds with implementing the fugitive felon program, having efficiently and effectively run operations will be essential to achieving sustained program results. SSA officials have acknowledged inefficiencies in the existing fugitive felon processes and have indicated that they expect to rely more heavily on information technology to help improve the program's operations and outcomes.
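At its core, the data match that would expand under an OASI and DI nonpayment provision is a batch join of warrant records against benefit records on a common identifier. The sketch below illustrates the idea with invented data and a single-key (SSN) match; SSA's actual matching criteria and file formats are more elaborate (for example, verifying name and date of birth before a record is treated as a lead) and are not reproduced here.

```python
# Minimal sketch of a batch match of warrant records against benefit
# records, keyed on SSN. Data, field names, and the single-key match
# criterion are invented for illustration only; actual matching rules
# involve additional verification before a lead is referred.

def match_warrants(warrants, beneficiaries):
    """Return (warrant, beneficiary) pairs that share an SSN."""
    by_ssn = {}
    for b in beneficiaries:
        by_ssn.setdefault(b["ssn"], []).append(b)
    hits = []
    for w in warrants:
        for b in by_ssn.get(w["ssn"], []):
            hits.append((w, b))
    return hits

warrants = [{"ssn": "111223333", "warrant_number": "W-1"},
            {"ssn": "999887777", "warrant_number": "W-2"}]
beneficiaries = [{"ssn": "111223333", "program": "SSI"},
                 {"ssn": "555443333", "program": "DI"}]

hits = match_warrants(warrants, beneficiaries)
print(len(hits))  # one potential match, pending manual verification
```

Adding the much larger OASI and DI recipient files to the `beneficiaries` side of such a join increases not the matching logic but the volume of candidate hits requiring manual verification, which is the workload concern the FBI and OIG officials raised.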
Information systems and databases maintained by some of the federal, state, and local law enforcement agencies that currently participate in the fugitive felon program could offer SSA opportunities for more efficiently obtaining warrant information to enhance the program. Much of the foundation for using information technology to improve the fugitive felon processes may already exist among state and local agencies participating in other programs. According to SSA systems officials, SSA currently has a direct, dedicated on-line connection with every state’s department of social services. States use these lines to submit information to SSA covering various programs, such as child support enforcement. Similarly, as part of the prisoner program, some state and local prison facilities send federal prisoner data to SSA on line to aid in suspending SSI, OASI, and DI benefits to incarcerated inmates. In discussing the exchange of states’ fugitive warrant information, SSA officials told us that they had not evaluated how on-line connections with state and local agencies could be used to receive information supporting the fugitive felon program. They indicated that implementing an on-line connection to receive fugitive warrant information from each state law enforcement agency would require each state to have access to a secure (encrypted), dedicated telecommunications line with SSA. They believed that data compatibility and privacy issues would also need to be addressed. Nonetheless, at the conclusion of our review, an SSA official told us that the agency had reached agreement with one state—Connecticut—to exchange fugitive felon data via electronic file transfer. According to the official, Connecticut is preparing to submit data via a Connect:Direct electronic file transfer method, in which data will be encrypted and sent from one mainframe computer to another over dedicated lines. 
An alternative to each state sending data would be increased reliance on the NCIC database, which could provide a comprehensive and readily accessible means of obtaining outstanding warrant records from the FBI, USMS, and the states. According to FBI data, NCIC's technical infrastructure includes high-level security controls and validation and confirmation procedures for all warrant information exchanged with the database. In addition, it is designed to interact in an on-line, real-time capacity with other information systems and databases, including those of USMS and all 50 states. For example, states transmit warrant data to NCIC via state criminal justice systems that are linked to the FBI Criminal Justice Information Services' network. As discussed earlier, all states transmit all or some portion of their warrant information on line to the NCIC database each month. Within the Department of Justice, USMS relies on on-line connections to transmit fugitive warrant information to NCIC. Like many state and local law enforcement agencies, USMS transmits to NCIC the same warrant information that it sends to SSA via U.S. mail. On the other hand, SSA officials stated that the Bureau of Prisons' database of incarcerated inmates, which supports SSA's prisoner program, could not be used to effectively support the fugitive felon program, because that database does not maintain information on the status of fugitive felons. Both SSA and its OIG officials believed that having a single source of warrant information would help make the data-matching process less laborious and eliminate processing inefficiencies. Accordingly, in November 2001, OIG recommended to Congress the need for a national warrant database. In addition, at the conclusion of our review, SSA officials told us that they viewed the NCIC database as a potential single source of warrant information to support the fugitive felon program.
The officials believed that receiving USMS’s and state and local law enforcement agencies’ warrant information on line via NCIC could potentially eliminate much of the duplicate warrant information that now contributes to the program’s inefficiencies. For example, according to federal, state, and local law enforcement officials, USMS and all states currently transmit all or some of their warrant information to the FBI’s NCIC. Thus, when the FBI downloads warrant information from this database to mail to SSA, the information duplicates some of that which federal, state, and local law enforcement agencies also send to SSA. Fugitive felon program officials reported that, in calendar year 2001, SSA received approximately 60,000 duplicate warrant records (approximately 5,000 warrant records per month) as a result of these dual exchanges. SSA officials noted, however, that achieving a single source of fugitive warrant information would require that SSA have the capability to accept data from NCIC on line. At the conclusion of our review, SSA officials stated that the agency had not explored using an on-line connection to NCIC to enhance the sharing of fugitive warrant information. In addition, they stated that, for NCIC to be effective as a single source of comprehensive warrant information, state agencies would have to be willing to report that portion of warrant information to NCIC that SSA currently must obtain from them under data-matching agreements. However, according to FBI officials, there is no statute or regulation requiring the states to fully report warrant information to NCIC; rather, states report information to this database voluntarily. In administering the fugitive felon program, SSA faces significant technological and other barriers to achieving and sustaining efficient and effective program operations and, ultimately, helping SSI overcome its high-risk status. 
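The duplicate records that arise from these dual exchanges could, in principle, be screened by keying each incoming record on its originating agency and warrant number. The following sketch uses invented field names and data; it illustrates the deduplication concept only, not SSA's actual processing.

```python
# Sketch of deduplicating warrant records that arrive through two
# channels: directly from a state or USMS, and again via the FBI's
# NCIC download. Keying on (originating agency, warrant number) is an
# invented criterion for illustration only.

def dedupe(records):
    """Keep the first occurrence of each (agency, warrant_number) pair."""
    seen = set()
    unique = []
    for r in records:
        key = (r["agency"], r["warrant_number"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

incoming = [
    {"agency": "TX-DPS", "warrant_number": "W-42", "source": "state"},
    {"agency": "TX-DPS", "warrant_number": "W-42", "source": "NCIC"},
    {"agency": "USMS", "warrant_number": "F-7", "source": "mail"},
]
unique = dedupe(incoming)
print(len(unique))  # the NCIC copy of W-42 is dropped; 2 records remain
```

A single authoritative source would make such screening unnecessary, which is why both SSA and OIG officials favored consolidating warrant reporting rather than filtering duplicates after the fact.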
While the program has helped prevent SSI benefits payments to fugitives, its complex and manually intensive processes have resulted in operational inefficiencies that could hinder the program's long-term success. Further, difficulties in negotiating data-matching agreements with the states have hindered SSA's efforts to obtain comprehensive warrant information needed to fulfill program objectives. In the absence of essential management accountability, SSA lacks critical data needed to make informed decisions about the program's processes and activities, as well as existing and future plans for technology supporting the program. Overcoming these inefficiencies and limitations will be critical to ensuring that the fugitive felon program is organized and implemented to achieve the greatest possible results and that SSA is effectively positioned to fulfill its potentially broader role in preventing OASI and DI benefits payments to fugitives. SSA officials recognized that increased program efficiency and outcomes could result from more substantial uses of information technology to perform key data sharing and verification functions and to streamline data-matching operations. Further, given the potential increase in SSA's workload that could result from implementing an OASI and DI nonpayment provision, having the necessary information technology to support its operations will be even more critical. SSA already has a proven capability to share data on line with federal, state, and local agencies in support of other programs. However, SSA has taken few steps toward examining its current data-matching operations and approaches to obtaining warrant information or exploring how best to use technology to enhance the overall fugitive felon process.
To improve the fugitive felon program's operational efficiency and ensure sustained, long-term success in identifying fugitive SSI beneficiaries, we recommend that the Commissioner of Social Security designate a program management office and program manager to direct, monitor, and control the program's activities and progress. In addition, we recommend that the commissioner direct the program management office and manager to

- conduct a detailed assessment of the fugitive felon program's current operations and performance, including a complete analysis of the organizations, processes, information flows, and time frames required to administer the program; a full accounting of the program's costs and estimated and actual program benefits; and current workload requirements;
- identify and prioritize, based on its assessment, those fugitive felon processes that need improvement and develop a strategy for resolving technological and administrative barriers preventing their efficient operation;
- continue to examine and propose options for using technology to automate the currently manual functions involved in exchanging fugitive warrant information with federal, state, and local law enforcement agencies and in completing the verification and referral of this information, including assessing alternatives to using data-matching agreements to obtain fugitive warrant information, and determining whether on-line connections with state and local law enforcement agencies and/or direct telecommunications connections with the FBI's NCIC database could offer viable and more efficient means of sharing warrant information; and
- assess the anticipated technological impact on fugitive felon operations from the implementation of provisions prohibiting OASI and DI benefits payments to fugitives, including identifying the additional information systems support that would be needed to conduct and process leads resulting from computer matches of warrant information against these benefits recipients' files.

We received written comments on a draft of this report from the Commissioner of Social Security (see app. IV) and from the Director, Audit Liaison Office, Justice Management Division, Department of Justice (see app. V). The Commissioner of Social Security expressed disappointment with our report and generally disagreed with our recommendations. The Department of Justice provided technical comments, which we have incorporated, as appropriate. Regarding the commissioner's statement expressing disappointment in our report, we believe our report provides a fair assessment of the efficiency of the fugitive felon program. We identify those areas in which improvements can be made that can benefit SSA's future efforts at streamlining the program's inefficient processes, improving performance and operations, and applying technology, where appropriate. In SSA's comments, the commissioner also stated that the report implied that neither SSA nor the OIG had a vision for the fugitive felon program and did not mention that SSA and OIG had embraced the program without start-up funding or additional resources. We recognize the valuable role that SSA plays in implementing the fugitive felon program to prevent benefits payments to ineligible SSI recipients. In pointing out technological and other barriers to the program's operations, our intent was not to imply that SSA and other participating components lack a vision for the program. Rather, given the program's complexity and multiple entities involved in its administration, we believe it is important to highlight critical conditions and operational inefficiencies necessitating SSA's continual attention in order to ensure sustained program success. Also, our report does recognize that funding has not been appropriated to administer the program.
SSA, as a steward of the program, has a responsibility to ensure that it consistently carries out all aspects of the fugitive felon data-matching operations in an efficient manner. Regarding our recommendation to designate a program management office and program manager to direct, monitor, and control the program’s activities and progress, SSA disagreed that an agency-wide program manager was necessary. SSA stated that managers within its office of operations and OIG are responsible for the program and that all involved offices are aware of the overall process and individual office responsibilities. Further, SSA stated that the Inspector General Act does not allow its OIG to take direction from or participate in administrative decisions that appropriately belong to SSA. In addition, regarding our statement that no program official could explain the overall data sharing and matching process, SSA disagreed, stating that OIG and SSA officials are able to explain the process in its entirety. SSA also stated that, to ensure all involved offices are aware of the overall process and individual office responsibilities, it had released a detailed process description and provided a copy to us. Further, SSA added that because its officials had chosen not to answer questions pertaining to other components’ work during our review, we had mistakenly inferred that no one within the agency could explain the overall process. We recognize that the fugitive felon program is a joint effort and that there are responsible and knowledgeable managers within each of the participating components involved in administering the program. Our recommendation is intended to ensure a unified management oversight capability for the fugitive felon program that does not currently exist. It is not intended to prescribe the exact nature or form of that management oversight capability. 
With respect to comments regarding possible limitations on joint efforts as a result of the Inspector General Act, the Act does not prohibit coordination of joint action between the OIG and the head of the establishment involved to ensure efficiency of operation and to avoid duplication of effort, nor do we believe that such coordination would affect OIG’s personal and professional independence. In this respect, as we noted in our report, an OIG official has commented on the need for improved coordination and management of the fugitive felon program. That official also noted that OIG had already begun to work with SSA’s Office of Operations to improve management of the program. Further, during our review, it was evident that management and staff in each component could explain the distinct segments of tasks that they were responsible for accomplishing; however, we could not identify any officials within these organizations who had a clear perspective of overall program performance and operations. As our report noted, staff within the SSA and OIG offices provided conflicting accounts of the fugitive felon data-matching process. While we acknowledge that SSA revised its policy instructions in April 2002, outlining involved offices’ roles and responsibilities, these instructions do not address SSA’s overall fugitive felon processes. In addition, we were unable to identify any aggregate tracking data to assess the program’s overall cost and performance. Given the multiple agencies and offices involved in administering this complex program, we continue to believe that having a unified source of accountability and authority for the program is essential to effectively and consistently oversee its progress and ensure that informed decisions are being made about its implementation. 
In discussing our findings on May 10, 2002, SSA and OIG officials agreed that the program lacked uniform management accountability and stated that they had just recently begun considering the development of a management board to oversee and address program management issues and concerns. SSA also disagreed with our recommendation that called for it to conduct a detailed assessment of the fugitive felon program’s current operations and performance, including a full accounting of the program’s costs and benefits and workload requirements. SSA stated that its analysis of the program’s operations is an ongoing process and that enhancements are made when deemed necessary. SSA further stated that it had completed many of the tasks cited in our recommendation prior to starting the matching process in calendar year 1999, and that OIG has regularly reported its performance in the program. We agree that ongoing monitoring and analysis of the program’s operations is essential for ensuring that management is informed of the program’s cost and progress and to assess risks to overall performance. However, during our review, SSA could not demonstrate that it had made an aggregate assessment of the program’s current operations and performance, including an awareness of the processes, information flows, and time involved in administering the program, as well as a full accounting of its costs and requirements. Such information is vital for making informed decisions about the program’s progress and for determining where process improvements are needed and how best to achieve them. The fugitive felon program has been in place since 1996 (with computerized matching since 1999), giving SSA sufficient opportunities to perform these necessary and critical assessments. 
Further, while SSA stated that OIG has regularly reported its performance in the fugitive felon program, it is important that SSA conduct its own program assessment that includes all participating components, to ensure that the program can be consistently and comprehensively controlled and managed. Regarding the need for an assessment, SSA stated that it had completed a cost-benefit analysis of the fugitive felon program in January 2001. Our review of the two-page summary found that it lacked substantial information about the program's overall costs and benefits. An operations official who provided the document told us that the summary had constituted only a rough estimate, rather than an accurate reflection, of the program's costs and benefits, and that it had been developed only for the purpose of renewing SSA's computer-matching program. The official added that the development of actual cost and benefits data for the fugitive felon program would require significantly more time than had been invested in preparing the current summary. SSA believed that our recommendation to identify and prioritize those fugitive felon processes that need improvement and develop a strategy for resolving technological and administrative barriers preventing their efficient operation was unnecessary, stating that it provides for these actions during normal operations. The agency added that it has a number of efforts under way to automate some of the fugitive felon processes. However, SSA stated that there are some manual processes in the program that contribute to minor interruptions to the fugitive felon process. While SSA stated that it has undertaken efforts to automate some fugitive felon processes, it needs to develop a strategy for resolving the technological and administrative barriers affecting the program's operations.
As our report notes, an overall business process improvement strategy will better position SSA to prioritize and integrate its various reengineering and improvement projects and, thus, maximize the potential of technology to improve the program’s performance. Further, at the conclusion of our review, SSA provided us with documentation outlining discussions to automate field office functions. However, the information provided did not include enough detail on the initiative that it said was being undertaken; therefore, we could not comment on these developments. We recognize that there are many steps within the fugitive felon process that must be completed manually. While we agree that some of these manual processes are necessary, our report notes that technology may enhance the program. Given our assessment of the length of the process (approximately 165 days), we continue to believe that SSA needs to perform a complete analysis of the fugitive felon program to identify areas for improvement, as well as areas where technology can be used to support more efficient operations. SSA found potential merit in our recommendation that it examine and propose options for using technology to automate the currently manual functions involved in exchanging fugitive warrant information with federal, state, and local law enforcement agencies and in determining the most efficient means of sharing warrant information. Although our recommendation included determining whether direct telecommunication connections with the FBI’s NCIC could offer a viable solution, SSA believed that the creation of a single national warrant database would be a better solution to efficiently sharing warrant information. SSA stated that its OIG had testified before the Congress on the benefits that would be derived if such a database were established. SSA added that NCIC would be effective as a single source of comprehensive warrant information only if entry of states’ warrant information became mandatory. 
We acknowledge in our report that the states enter warrant information into NCIC voluntarily and agree that this could be an impediment to achieving a comprehensive information repository. Nonetheless, achieving an optimal solution will in large measure depend on SSA examining the strengths and limitations of all of the potential alternatives to sharing warrant information, including NCIC. Accordingly, this recommendation remains in our report. However, we have also incorporated language reflecting SSA’s views regarding a national warrant database. Also in its response to this recommendation and in additional comments, SSA noted several instances where it believed we had made incorrect assumptions regarding the fugitive felon process. SSA disagreed with our assertion that it cannot determine the amount of time that is actually required to complete the processing of fugitive warrants and stated that it was able to track the number of days it takes for each individual subject (fugitive) to be processed. In addition, SSA stated that our analysis of the fugitive felon processing timeline had incorrectly considered start-up/test file processing times that can be associated with states’ first submissions of warrant files, thus accounting for our approximation of 165 days. We acknowledge that SSA has been able to track fugitive warrant information on a subject-by-subject basis; however, our review did not find that SSA had performed any aggregate tracking of the time required to process fugitive warrants—data that would be helpful to SSA in gauging the program’s performance. Also, as our report noted, our analysis was limited to case processing data that did not involve lost, mishandled, or improperly formatted data. However, we have amended our report to specifically reflect that we also did not include trial and error times for states reporting warrant information for the first time. 
In addition, SSA interpreted our discussion of states' perspectives regarding fugitive felon data formatting and transmission requirements as an implication that the requirements were unduly restrictive. However, we recognize the importance of prescribed standards for ensuring consistent reporting of warrant information and other computerized data. In this regard, our report aims only to highlight some of the circumstances that currently prevent SSA from receiving comprehensive warrant information from state and local law enforcement agencies. Further, SSA stated that our report had included erroneous information about the agency's on-line connections with states' departments of social services and about its data exchanges with the Bureau of Prisons. SSA stated that the agency has dedicated lines connecting to each state and that these lines are used to exchange batch files, and that real-time transfers are not occurring as indicated in our report. Based on documentation that SSA's Office of Systems provided, we determined that SSA has direct, dedicated on-line connections with every state's department of social services. However, we do not imply in our report that on-line connections mean real-time connections. We have added language to clarify that SSA's existing on-line connections with state and local agencies primarily involve batch, rather than real-time, transfers of data. Finally, regarding our recommendation that SSA assess the anticipated technological impact on fugitive felon operations from the implementation of provisions prohibiting OASI and DI benefits payments to fugitives, SSA stated that it and the OIG have already completed this task. In particular, SSA stated that a joint SSA/OIG analysis of state and federal warrant files had been started in January 2001 to determine what impact OASI and DI legislation would have on the program.
According to SSA, this analysis had determined that the legislation would affect staff resources, but would not affect its need for technology. Further, SSA stated that this information had been shared with us during our review. While conducting our review, OIG officials did inform us that they had performed an analysis to assess the impact of the OASI and DI legislation on fugitive felon program operations; however, by the conclusion of our review, the officials had not met our request for documentation supporting this analysis. OIG officials subsequently provided the analysis to us while reviewing and commenting on our draft report. We agreed to review the analysis and make revisions to our report, as necessary. Our review of the analysis found that OIG’s main objective had been to determine the increase in the current fugitive felon program workload. However, the analysis did not include any discussion of the technological implications resulting from the OASI and DI legislation. Further, SSA commented that, in June 2001, its Office of Systems had estimated that adding OASI and DI to the fugitive felon program would require about 8 work years, but that the agency did not envision any major new or unique information technology expenditures. In discussing the impact of the legislation during our review, SSA did not inform us of the Office of Systems’ projection or of documentation supporting this evaluation. Thus, while we have modified our report to reflect that OIG had performed analysis that assessed the workload impact resulting from the OASI and DI legislation, we continue to believe that additional assessment is necessary to determine whether and what information systems support would be required to meet the broader mission. Beyond these comments, SSA offered clarifications to table 1, which listed the organizations tasked with implementing the fugitive felon program and to the definition of “high misdemeanors” discussed in footnote 6 to the report. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Commissioner of Social Security and to the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your office have any questions concerning this report, please contact me at (202) 512-6253, or Valerie Melvin, Assistant Director, at (202) 512-6304. We can also be reached by E-mail at [email protected] and [email protected], respectively. Individuals making key contributions to this report were Nabajyoti Barkakati, Mary J. Dorsey, Sophia Harrison, David L. McClure, Valerie C. Melvin, Tammi Nguyen, Henry Sutanto, and Eric Trout. Our objectives were to examine the technological aspects of the fugitive felon program in order to (1) identify technological barriers restricting (a) data matching between the Social Security Administration’s (SSA) and the Federal Bureau of Investigation’s (FBI) databases and (b) ongoing efforts by SSA to obtain data-matching agreements with state and local law enforcement agencies; (2) assess the technological impact on SSA and the FBI should Old Age and Survivors Insurance (OASI) and Disability Insurance (DI) benefits be included in legislation restricting payments to fugitive felons; and (3) determine whether other databases, such as those maintained by the Department of Justice’s Bureau of Prisons and the U.S. Marshals Service, can provide additional support to the fugitive felon program. To understand the fugitive felon data-matching process, we obtained and analyzed various documentation maintained by SSA, the FBI, the Bureau of Prisons, and the U.S. Marshals Service.
These documents described data sharing and matching policies, operational and security procedures, and the technical infrastructure supporting the fugitive felon program. We complemented our understanding of the data-matching process by arranging a demonstration with SearchSoftwareAmerica, an industry leader with expertise in data-matching software, to understand how data-matching software works in a database environment. At the time of our review, SSA did not have documentation showing the flow of warrant information through the fugitive felon data-matching process, requiring that we map the process ourselves. To accomplish this, we relied on the results of our document analyses and used business process flow software to construct a graphical presentation of the fugitive felon program’s process flow. We provided copies of the completed business process flowchart to SSA Headquarters offices and its Office of Inspector General (OIG) to verify the accuracy of our process depiction and incorporated changes based on their review and comments. In addition, to further confirm the process, we interviewed agency officials in all of the offices involved in administering the fugitive felon program. These included SSA’s OIG and Office of Operations, SSA and OIG field offices, and the FBI Information Technology Center (ITC) in Fort Monmouth, New Jersey. Also, because SSA had not performed an analysis to determine how many days it took to process warrant information, we determined the approximate number of days involved in the process from the receipt of warrant information from federal, state, and local law enforcement agencies until the suspension of fugitives’ SSI benefits. We derived the number of days by performing a detailed analysis of documentation obtained from various SSA offices.
For example, we reviewed completed samples of incoming data included on log sheets from both the Office of Central Operations and the Office of Telecommunications and Systems Operations to calculate the approximate number of days it took these offices to process the warrant data from its receipt until they forwarded it to OIG. We also interviewed officials in SSA Headquarters, its OIG, and the FBI ITC regarding the number of days involved in processing fugitive warrants. We shared the results of our analysis with appropriate SSA officials to confirm the validity of our processing timeline estimate. To identify technological barriers restricting data matching between SSA’s and the FBI’s databases, we relied on our detailed analysis of SSA’s fugitive felon process and assessed information describing its supporting technical infrastructure. We also analyzed documentation describing the FBI’s repository of fugitives and other criminals—the National Crime Information Center (NCIC)—along with the agency’s approach to providing warrant information to SSA. To support our analysis, we applied various guidance, including Office of Management and Budget Circular A-130, Appendix I, Federal Agency Responsibilities for Maintaining Records about Individuals, and Appendix II, Security of Federal Automated Information Resources; and National Institute of Standards and Technology computer security guidance. Regarding SSA’s ongoing efforts to obtain data-matching agreements with state and local law enforcement agencies, we applied the Computer Matching and Privacy Protection Act of 1988 (CMPPA) (P.L. 100-503), which amended the Privacy Act (5 U.S.C. 552a); data-matching agreements fall under the provisions of this act, which protects against unauthorized disclosures of computerized data through data matching. In addition, we applied the Privacy Act of 1974 (P.L. 93-579), which stipulates provisions for protecting individuals from unauthorized disclosure of non-computerized information.
We also applied knowledge gained through our detailed analysis of the fugitive felon process, as well as SSA’s model data-matching agreements, reports documenting the status of negotiations between SSA and state and local law enforcement agencies, and other policy and procedural documentation. We conducted site visits and telephone conferences with 17 randomly selected state and local law enforcement agencies—Alabama, Arkansas, California, Connecticut, Delaware, Florida, Indiana, Iowa, Kansas, Maine, Massachusetts, Montana, New Jersey, Ohio, Oklahoma, Oregon, and Montgomery County, Pennsylvania—to determine their involvement in the fugitive felon program, identify any technological barriers prohibiting their ability to effectively and efficiently share data with SSA and other federal agencies, and assess issues and concerns affecting their efforts to negotiate data-matching agreements with SSA. To assess the technological impact on SSA and the FBI should legislation be enacted to prohibit fugitives from receiving OASI and DI benefits payments, we analyzed applicable laws: the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193), which amended Title XVI of the Social Security Act, as well as OASI and DI legislation proposed by the House of Representatives and U.S. Senate. We also assessed SSA OIG reports and testimony that highlighted the Inspector General’s position regarding proposed legislation prohibiting OASI and DI benefits payments to fugitive felons and determined SSA’s decisions and actions regarding the potential impact of such proposed legislation. In addition, as part of our analysis, we considered our recent correspondence to the House Committee on Ways and Means, which reported on whether SSA has the authority to deny OASI and DI benefits to fugitive felons and to provide law enforcement agencies with the current addresses and Social Security numbers of OASI and DI beneficiaries who are fugitive felons.
Finally, to determine whether other databases, such as those maintained by the Department of Justice’s Bureau of Prisons and U.S. Marshals Service, could provide additional support to the fugitive felon program, we obtained and analyzed documentation describing the agreements between SSA’s OIG and these agencies. In addition, we analyzed systems documentation pertaining to the Bureau of Prisons and U.S. Marshals Service’s databases, as well as the FBI’s NCIC and state and local law enforcement agencies’ databases. This documentation included data standards, reporting requirements (memoranda of understanding and data-matching agreements), policies, and procedures. We also interviewed pertinent management and staff of SSA’s Headquarters, field offices, and OIG; the FBI Criminal Justice Information Services Division; the U.S. Marshals Service; and the Bureau of Prisons to gain an understanding of how their databases could be used to support the fugitive felon program. We conducted our review from August 2001 through May 2002, in accordance with generally accepted government auditing standards. Under the fugitive felon program, the FBI, U.S. Marshals Service (USMS), and state and local law enforcement agencies download warrant information from their respective databases onto electronic media, such as cartridges, diskettes, and CD-ROMs, and send it to SSA via the U.S. mail or FedEx. Warrant information contained on diskettes, paper, and CD-ROMs is sent to SSA’s Office of Central Operations (any warrant information received on paper is keyed to diskette), whereas warrant information contained on tapes, cartridges, or electronic files is sent to SSA’s Office of Telecommunications and Systems Operations. Upon receipt, staff in these two offices upload the warrant information onto SSA’s mainframe computer, located at its National Computer Center (NCC), to begin the data-matching process.
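The media-based routing rule described above can be expressed as a simple lookup table. This sketch is illustrative only: the office names come from this report, while the function and variable names are our own assumptions.

```python
# Illustrative sketch of the media-based routing rule the report describes.
# Office names are taken from the report; everything else is an assumption.

ROUTING = {
    "diskette": "Office of Central Operations",
    "paper": "Office of Central Operations",  # paper is keyed to diskette on receipt
    "cd-rom": "Office of Central Operations",
    "tape": "Office of Telecommunications and Systems Operations",
    "cartridge": "Office of Telecommunications and Systems Operations",
    "electronic file": "Office of Telecommunications and Systems Operations",
}

def receiving_office(media_type: str) -> str:
    """Return the SSA office that receives warrant data on the given medium."""
    return ROUTING[media_type.lower()]
```

Either way the data arrives, both offices converge on the same next step: uploading the warrant information to the NCC mainframe for matching.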
As part of the data-matching operations, the NCC staff makes backup copies of the warrant information and processes the data in overnight batches, using SSA’s Enumeration Verification System (EVS) to verify fugitives’ names, Social Security numbers, sex, and birth dates. The NCC staff then notifies SSA’s OIG that the data files are in production. The OIG is responsible for ensuring that the data files include only individuals charged with felonies or with parole or probation violations; it deletes any files naming individuals charged with misdemeanors. Following OIG’s review, NCC staff processes the data files against the supplemental security record to identify fugitives receiving SSI benefits payments. The records of fugitives identified—called matched records—are then returned to the OIG for further processing. OIG enters the files containing the matched records into its allegation and case investigative system and assigns case numbers to them. Case numbers are assigned based on whether the records represent “exact” matches or “good” matches. OIG then sends the records containing exact matches (on electronic media via FedEx) to the FBI’s ITC in Fort Monmouth, New Jersey, and to USMS for additional processing. OIG also notifies its field offices via E-mail that exact and good matches have been entered into its allegation and case investigative system. Upon receiving the exact matches, ITC staff then verifies the address and status of each individual named in the NCIC matched warrant records to determine whether the warrants are still active, using a personal computer to access and query fugitives’ records that are maintained in the NCIC database. ITC staff does not query NCIC to determine whether states’ matched warrant records are still active, but rather processes and mails the records to the applicable states for their verification.
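The two-tier classification of matched records can be sketched as follows. The report does not define SSA’s precise criteria for “exact” and “good” matches, so the fields compared and the thresholds below are illustrative assumptions, not the agency’s actual matching rules.

```python
# Illustrative two-tier match classification. The fields compared (the same
# four attributes EVS verifies, per the report) and the "good" threshold are
# assumptions; the report does not specify SSA's actual criteria.

FIELDS = ("ssn", "name", "sex", "birth_date")

def classify_match(warrant: dict, ssi_record: dict) -> str:
    """Classify a warrant/SSI record pair as 'exact', 'good', or 'no match'."""
    agreeing = sum(1 for f in FIELDS if warrant.get(f) == ssi_record.get(f))
    if agreeing == len(FIELDS):
        return "exact"
    # Assumed rule: an SSN match plus at least one other agreeing field
    # qualifies as a "good" match requiring further verification.
    if warrant.get("ssn") == ssi_record.get("ssn") and agreeing >= 2:
        return "good"
    return "no match"
```

Under this sketch, exact matches would flow to ITC and USMS for processing, while good matches would be routed to field offices for the additional verification the report describes.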
ITC staff also obtains from the NCIC database address information on the law enforcement agencies that originally issued the arrest warrants for the individuals named in the warrants and manually types each originating law enforcement agency’s address onto a cover letter. The staff then uses the address information to mail the “leads,” together with the cover letter, law enforcement referral form (OI-5B), and the law enforcement certification form (OI-5C), to the originating law enforcement agencies for their use in locating and apprehending the fugitives. According to OIG, ITC generally requires approximately 30 days to process the matched records. State and local law enforcement agencies use information contained in the “leads” to locate the fugitive felons and then return certification forms to ITC indicating the action they have taken on the warrant. Before acting to suspend SSI benefits, SSA generally allows the originating law enforcement agencies 60 days to apprehend a fugitive based on the leads provided. By allowing this “sunset” phase, SSA avoids letting fugitives know that their status and whereabouts have been revealed before law enforcement authorities can arrest them. When ITC receives the certification forms from the law enforcement agencies indicating the status of the warrants, it forwards the forms to the appropriate OIG field offices. OIG agents have 30 days from the time that the forms are returned to them to work the cases (a case consists of 50 subjects or felons) or perform additional verifications, enter the information from these forms into the allegation and case investigative system, complete summary and benefits suspension forms, and mail the forms to SSA field offices. If law enforcement agencies do not return certification forms indicating the status of the arrest warrant to ITC within 60 days, OIG agents follow up with the law enforcement agencies either by letter or telephone to determine whether the warrant is still active.
OIG agents also perform additional identification activities for good matches and send these matches to SSA field offices, where staff query the supplemental security record for verification. If records cannot be verified using the supplemental security record, OIG contacts the law enforcement agencies for verification. If, after contacting law enforcement agencies, warrants still cannot be verified, records are either destroyed or mailed back to the originating law enforcement agencies and a note is attached to records contained in the allegation and case investigative system. Once warrant records are verified and are determined to be still active, OIG agents refer them to the appropriate SSA field offices, where action is taken to suspend the fugitive’s SSI benefits, calculate the amount of overpayment, and update the SSI files. SSA officials told us that the process to suspend SSI benefits payments takes approximately 10 days. Based on our analysis of data that SSA provided about its process, we determined that from the date on which SSA first receives warrant data from the law enforcement agencies to when it identifies fugitives who receive SSI benefits, locates and apprehends them, and then suspends SSI benefits, the process can take up to 165 days. This approximate processing time includes 35 days for SSA systems, operations, and OIG staff to process the matches, 30 days for ITC to verify and batch process warrants, 60 days for state and local law enforcement agencies to locate and apprehend fugitives before SSA serves notice of benefits suspension, 30 days for OIG field offices to act on information received from the law enforcement agencies, and 10 days for processing the suspension of benefits. Following the suspension of benefits, fugitive SSI recipients are given due process. That is, fugitive SSI recipients have 10 days to contact SSA for a continuance of benefits and 60 days to appeal the suspension.
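The approximate 165-day total can be reconstructed by summing the component durations reported above. The phase labels below paraphrase the report; only the day counts are taken from it.

```python
# Tallying the phase durations the report lists to reproduce its
# approximate 165-day end-to-end processing estimate.

PHASES = {
    "SSA systems, operations, and OIG match processing": 35,
    "ITC warrant verification and batch processing": 30,
    "law enforcement apprehension window ('sunset' phase)": 60,
    "OIG field office action on returned certifications": 30,
    "SSA field office suspension of benefits": 10,
}

total_days = sum(PHASES.values())
print(total_days)  # 165
```

Note that the 10-day continuance window and 60-day appeal period that follow suspension are due-process steps outside this 165-day estimate.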
If the fugitive loses the appeal, SSA will suspend the SSI benefits and again update the supplemental security record. SSA’s NCC runs master tapes once a month and submits them to the Department of the Treasury, informing it of any updated information. Treasury discontinues issuing checks to the identified fugitive felons. SSA reports that, since its inception in August 1996, the fugitive felon program has been instrumental in helping identify approximately 45,000 fugitives who improperly collected at least $82 million in Supplemental Security Income (SSI) benefits. In addition, SSA reports that, as a result of sharing fugitive warrant information, officer and public safety throughout the United States has increased. According to SSA’s OIG, among those fugitives who have been identified as receiving SSI benefits, more than 5,000 have been apprehended since the fugitive felon program began. The following cases highlight examples of how the fugitive felon program has contributed to identifying and apprehending fugitives and preventing improper payments of SSI benefits: On February 8, 2001, authorities arrested two fugitives as a result of computer matching between SSA’s OIG and the FBI’s NCIC. Agents from OIG’s New York field division, state troopers from New Jersey, and deputies from the Essex County, New York sheriff’s office arrested one fugitive wanted on arson charges and a second wanted on charges of producing and distributing a dangerous controlled substance. Both fugitives were remanded to the custody of the Essex County jail, and these cases resulted in the suspension of the fugitives’ SSI benefits. In New York, the field division of SSA’s OIG used leads from matched fugitive warrant records to identify a fugitive wanted by the Union County, New Jersey sheriff’s office on a burglary charge. This fugitive, arrested in June 2001 with the assistance of an FBI agent, had 13 prior arrests and 5 prior convictions, including one for homicide.
This case resulted in SSI benefits suspension. Under the direction of the U.S. Attorney’s Office for the Eastern District of Michigan, agents from the OIG Detroit office participated in an operation that focused on locating and arresting 400 adult and juvenile chronic violent offenders. The 3-day operation resulted in the arrest of 82 individuals—67 of whom were receiving SSI benefits. The apprehended individuals were wanted for offenses ranging from criminal sexual conduct to armed robbery and assault with intent to do bodily harm. In California on December 7, 2000, the Operation Pretenders Task Force (composed of agents from the SSA OIG, U.S. Immigration and Naturalization Service, and California state parole), assisted by California’s Department of Health Services and Department of Motor Vehicles, arrested a registered child sex offender for a parole violation. The fugitive had eluded officials for approximately 5 years by assuming the identity of his deceased brother and had applied for and received SSI benefits under the assumed identity. On January 31, 2001, a grand jury indicted him on two counts of false statements for SSI benefits, two counts of fraudulent use of a Social Security number, and three counts of identity theft.
Largely because of SSA’s and states’ limited use of information technology to support the fugitive felon program, many state law enforcement agencies have been reluctant to enter into data-matching agreements with SSA. According to SSA and law enforcement officials, among the factors that made some states reluctant to enter into the agreements were that some states did not maintain central repositories of warrant information and that SSA’s guidance for formatting, downloading, and manually transmitting the information created additional resource requirements that some states were unable to meet. The enactment of legislation prohibiting OASI and DI payments to fugitive felons could increase SSA’s recovery of improperly paid benefits and prevent more potentially dangerous fugitives from fleeing justice. However, the additional matches of warrant records against OASI and DI recipient files could substantially increase the data processing workloads of both SSA and the FBI’s Information Technology Center. SSA may be able to improve the fugitive felon program’s operational efficiency and outcomes by exploring its existing telecommunications connectivity supporting other federal, state, and local programs. SSA currently has direct, on-line connections with every state; these connections transmit and receive data supporting various other programs, including its program to suspend SSI, OASI, and DI benefits to prisoners.
BLM is responsible for managing, as of July 2008, approximately 700 million acres of subsurface mineral resources: 655.5 million of these acres are not affected by oil and gas production, and 44.5 million acres are leased for oil and gas operations. Of these 44.5 million acres, 11.7 million acres are in oil and gas producing status and 472,000 acres have surface disturbance related to oil and gas production. To manage BLM programs and land, the agency maintains a network of state offices, each of which generally conforms to the boundaries of one or more states. The state offices are Alaska, Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Wyoming, and Eastern States. Because BLM has little land in the eastern half of the United States, the Eastern States state office, in Springfield, Virginia, is responsible for managing land in 31 states. Figure 1 shows the boundaries of the 12 BLM state offices. When operators drill oil and gas wells, they typically remove topsoil from the well site and lay a well pad, where the drilling rig is located. Other equipment on site can include generators and fuel tanks. In addition, reserve pits are often constructed to store or dispose of water, mud, and other materials that are generated during drilling operations, and roads and access ways are often built to move equipment to and from the wells. Generally, these activities can degrade the environment in three ways: Air quality. Newly graded roads can produce dust, impairing air quality and visibility in the immediate area and downwind. Nitrogen oxides from diesel engines and compressors used at drilling sites can also degrade air quality. Water quality. Water draining off newly graded surfaces and roads, or oil or water accidentally discharged during oil and gas production, can increase the amount of sediment, salt, and pollutants discharged into rivers and streams, thereby degrading them.
In addition, shallow aquifers can be polluted if required protective measures are not in place, and the production of methane gas from coal beds can deplete shallow aquifers that serve as domestic water sources. Habitat. A high density of drilling and production equipment can, in extreme situations, change the appearance of the landscape from a natural setting to an industrial zone. In addition, the noises, smells, and lights from trucks, drilling and construction equipment, and production facilities can disturb wildlife and people living nearby. Under FLPMA, BLM must manage federal lands for multiple uses, including recreation and mineral extraction, as well as for sustained yield. To that end, FLPMA requires BLM to develop resource management plans, known as land use plans. In developing its land use plans, BLM determines, among other things, which parcels of land will be available for oil and gas development. According to BLM officials, parties interested in leasing federal minerals submit an Expression of Interest or pre-sale offer on those lands they are interested in leasing. These are then reviewed and, if the lands are eligible to be leased, the lands are offered at a competitive oil and gas lease sale. Leases can vary in size, reaching up to 2,560 acres for lands in the lower 48 states and 5,760 acres for lands in Alaska. Operators that have obtained a lease must submit an application for a permit to drill to BLM before beginning to prepare land or drilling any new oil or gas wells. The complete permit application package is a lengthy and detailed set of forms and documents, which, among other things, must include proof of bond coverage and a surface use plan of operations; this surface use plan must include a reclamation plan that details the steps operators propose to take to reclaim the site. However, operators generally do not have to submit cost estimates for completing the reclamation.
The Mineral Leasing Act of 1920, as amended, requires that federal regulations ensure that an adequate bond or surety is established before operators begin to prepare land for drilling. The bond is intended to ensure complete and timely reclamation. Accordingly, federal regulations require the operator to submit a surety or personal bond to BLM, which is intended to ensure compliance with all of the lease’s terms and conditions, including reclamation requirements. Surety bonds are a third-party guarantee that an operator purchases from a private insurance company approved by the Department of the Treasury, and personal bonds must be accompanied by one of the following five financial instruments: certificates of deposit issued by a financial institution whose deposits are federally insured; cashier’s checks; certified checks; negotiable Treasury securities, including U.S. Treasury notes or bonds, with conveyance to the Secretary of the Interior to sell the security in case of default in the performance of the lease’s terms and conditions; and irrevocable letters of credit that are issued for a specific term by a financial institution whose deposits are federally insured and that meet certain conditions. In reviewing the application for a permit to drill, BLM (1) evaluates the operator’s proposal to ensure that the proposed drilling plan conforms to the land use plan and applicable laws and regulations and (2) inspects the proposed drilling site to determine if additional site-specific conditions must be addressed before the operator can begin drilling. After BLM approves a drilling permit, the operator can drill the well and commence production. After drilling the well, the operator may perform interim reclamation—the practice of reclaiming surfaces that were disturbed to prepare a well for drilling but that are no longer needed. For example, operators may need a 10-acre drill pad to safely drill a series of wells.
However, once the wells are drilled, operators may need only 4 acres to safely service the wells over their lifetime. In this case, the operator could reseed and regrade the 6 acres of the initial pad that are no longer needed. Although BLM does not generally require interim reclamation in the permits it issues, it may decide to add interim reclamation as a requirement in drilling permits for specific oil and gas developments. Final reclamation occurs when an operator determines, and BLM agrees, that a well has no economic value. The terms of final reclamation are included in the lease and the drilling permit. The operator must follow the agreed-upon final reclamation plan, including plugging the wells, removing all visual evidence of the well and drill pad, recontouring the affected land, and revegetating the site with native plant species. In general, the goal is to reclaim the well site so that it matches the surrounding natural environment to the extent possible. BLM then inspects the site to monitor the success of the reclamation, a process that typically takes several years. Once BLM determines that reclamation efforts have been successful, it approves a Final Abandonment Notice. However, in some circumstances, the operator may delay performing reclamation and instead allow the well to remain idle for various reasons. For example, expected higher oil and gas prices may once again make the well economically viable to operate, or the operator may decide to use the well for enhanced recovery operations, for example, using the well to inject water into the oil reservoir and push any remaining oil to operating wells. Under BLM policy, the agency must periodically review the status of these idle wells to ensure that the operator has legitimate reasons for allowing the wells to remain idle.
According to BLM officials, the primary purpose of idle-well reviews is to ensure that these wells do not become orphaned—that is, they lack a bond sufficient to cover reclamation costs and there are no responsible or liable parties to perform reclamation. States have adopted laws and regulations governing oil and gas development on state and private lands, including bond and reclamation requirements. In addition, other Interior programs and offices that are responsible for managing the extraction of other federally owned resources have bond and reclamation requirements. Specifically, those programs and offices are: BLM Geothermal Resource Leasing. BLM issues leases for the development of geothermal resources on federal lands; these resources are used to generate electricity by capturing the geothermal heat generated in the earth’s core. BLM Hardrock Minerals Claims. BLM oversees the process for staking claims and extracting hardrock minerals on the lands it manages. These minerals are also referred to as locatable minerals and include gold, silver, and copper, among others. BLM Mineral Materials Sales. BLM oversees the sale of these minerals, such as sand and gravel, from federal lands. These minerals are also sometimes referred to as salable minerals. BLM Solid Minerals Leasing. BLM issues leases for the extraction of these minerals on federal lands; solid minerals are minerals other than coal and oil shale, and include silicates, potash, and phosphate. Solid minerals are also sometimes referred to as leasable minerals. Minerals Management Service (MMS) Offshore Oil and Gas Leasing. MMS issues leases to develop offshore oil and gas resources in the Gulf of Mexico, off the Atlantic coast, and off the Pacific coast states of California, Oregon, Washington, and Hawaii. Office of Surface Mining Reclamation and Enforcement (OSM) Coal Leasing. OSM regulates the surface mining of coal.
States can choose to develop their own programs to regulate surface mining if that program is in accordance with federal law and approved by OSM. OSM is charged with enforcing states' adherence to their approved programs or implementing a federal program if the state fails to submit, implement, or enforce its program. As of December 2008, oil and gas operators had provided 3,879 surety and personal bonds, valued at approximately $162 million, to ensure compliance with all lease terms and conditions for 88,357 wells, according to our analysis of BLM data. BLM officials told us that the bond amounts are generally not based on the full reclamation costs for a site that would be incurred by the government if an operator were to fail to complete the required reclamation. Rather, the bond amounts are based on regulatory minimums intended to ensure that the operator complies with all the terms of the lease, including paying royalties and conducting reclamation. As of December 1, 2008, the 88,357 oil and gas wells were covered by 16,809 leases, with 70 percent of all wells located in New Mexico and Wyoming. Cumulatively, Wyoming and New Mexico have more than four times as many wells as the total number of wells in Utah and California, which are the states with the third and fourth most wells at 7,388 and 7,215, respectively. Table 1 shows the number of oil and gas wells and leases located in the nine BLM state offices. According to our analysis of BLM's data, as of December 1, 2008, oil and gas operators had 3,879 bonds valued at approximately $162 million to ensure compliance with lease terms and conditions for 88,357 wells on federal land. Of these bonds, 2,086 were surety bonds valued at approximately $84 million (52 percent of the total bond value), and 1,793 were personal bonds valued at almost $78 million (48 percent). The number of wells and the value of bonds held by BLM have increased over the past 20 years. 
The value of bonds increased from approximately $69 million as of September 30, 1988, to approximately $164 million as of September 30, 2008, as the number of wells increased from almost 50,000 to more than 85,000. As figure 2 shows, this increase in the number of wells occurred primarily in the last decade. The Mineral Leasing Act of 1920, as amended, requires that federal regulations ensure that an adequate bond or surety is established that ensures complete and timely reclamation. Under BLM regulations, bonds are conditioned upon compliance with all of the terms and conditions of the lease, including, but not limited to, paying royalties, plugging wells, reclaiming disturbed land, and cleaning up abandoned operations. To ensure operators meet legal requirements, including reclamation, BLM regulations require them to have one of the following types of coverage: individual lease bonds, which are to cover all wells an operator drills under one lease; statewide bonds, which are to cover all of an operator's leases in one state; nationwide bonds, which are to cover all of an operator's leases in the nation; or other bonds, which include both unit operator bonds that cover all operations conducted on leases within a specific unit agreement, and bonds for leases in the National Petroleum Reserve in Alaska (NPR-A). BLM regulations establish a minimum bond amount in order to ensure compliance with all legal requirements and also authorize or require BLM to increase the bond amount in certain circumstances. These minimum bond amounts were set in the 1950s and 1960s and have not been updated. Specifically, the bond minimum of $10,000 for individual bonds was last set in 1960, and the bond minimums for statewide bonds—$25,000—and for nationwide bonds—$150,000—were last set in 1951. If adjusted to 2009 dollars, these amounts would be $59,360 for an individual bond, $176,727 for a statewide bond, and $1,060,364 for a nationwide bond. 
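As an arithmetic sketch, the 2009-dollar conversions above can be reproduced by scaling each minimum by an inflation factor. The factors below are back-derived from the report's own adjusted figures (an assumption for illustration), not taken from official CPI tables.

```python
# Sketch: adjusting the 1951 and 1960 regulatory bond minimums to 2009 dollars.
# The inflation factors are implied by the report's own adjusted figures
# (assumptions for illustration), not official CPI series values.

FACTOR_FROM_1951 = 1_060_364 / 150_000  # ~7.069, implied by the nationwide bond
FACTOR_FROM_1960 = 59_360 / 10_000      # ~5.936, implied by the individual bond

def to_2009_dollars(amount, factor):
    """Scale a historical dollar amount by an inflation factor."""
    return round(amount * factor)

minimums = {
    "individual (set 1960)": (10_000, FACTOR_FROM_1960),
    "statewide (set 1951)": (25_000, FACTOR_FROM_1951),
    "nationwide (set 1951)": (150_000, FACTOR_FROM_1951),
}

for name, (amount, factor) in minimums.items():
    print(f"{name}: ${amount:,} -> ${to_2009_dollars(amount, factor):,}")
```

Note that the statewide figure computed this way matches the report's $176,727, which suggests the same underlying index was applied to both 1951 amounts.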
Figure 3 shows the current amounts set in 1951 and 1960 and what these amounts would be if adjusted to 2009 dollars. Of the three primary bond categories—individual, statewide, and nationwide—statewide bonds accounted for most of the bonds covering oil and gas wells. Figure 4 shows the value and percentage distribution of the bonds by type. Appendix II provides more detailed information on the number and value of BLM-held bonds by state. While BLM regulations set minimum amounts for bonds, they also require bonds in an increased amount in certain circumstances and authorize BLM to require an increased bond amount when the operator poses a risk due to certain factors. First, when an operator applies for a new permit to drill after having failed, in the prior 5 years, to plug a well or reclaim lands in a timely manner, resulting in BLM making a demand on a bond, BLM must require a bond in an amount equal to BLM's cost estimate for plugging the well and reclaiming the disturbed area if that estimate is higher than the regulatory minimum. Second, BLM officials may require an increase in the amount of any bond when the operator poses a risk due to factors that include, but are not limited to, a history of previous violations, a notice from MMS that there are uncollected royalties due, or the fact that the total cost of plugging existing wells and reclaiming lands exceeds the present bond amount, based on BLM estimates. According to BLM data, the agency spent about $3.8 million to reclaim 295 orphaned wells in 10 states from fiscal years 1988 through 2009. The 10 states where orphaned wells were reclaimed were California, Colorado, Montana, New Mexico, North Dakota, Ohio, Oklahoma, Utah, West Virginia, and Wyoming. Some of these states, such as Ohio and West Virginia, do not currently produce high volumes of oil and gas compared with other states in the West, although they did in the late 1800s and early 1900s. 
Although reclamation costs averaged $12,788 per well, the amount spent to reclaim wells varied by reclamation project, state, and fiscal year. For example: Cost per project. The amount spent per reclamation project varied from a high of $582,829 for a single well in Wyoming in fiscal year 2008, to a low of $300 for three wells in Wyoming in fiscal year 1994. These variations are due to differences in the amount of surface and subsurface disturbance and the amount of effort required to reclaim these wells. Number of wells and spending by state. The number of wells reclaimed and the amount spent in each state also varied considerably. California had the most orphaned wells reclaimed—140 of the 295 wells reclaimed, or about 47 percent—while Colorado and West Virginia had the fewest, each with 1 reclaimed well. However, over one-third of the amount spent to reclaim orphaned wells—about $1.3 million—went toward reclaiming 44 wells in Wyoming. Amount spent per year. In the fiscal years that BLM spent funds to reclaim orphaned wells, the amount spent in each fiscal year varied from a high of $632,829 to reclaim two wells in 2008, to a low of $24,962 to reclaim a single well in Ohio in fiscal year 2001. BLM had no expenditures to reclaim orphaned wells in fiscal years 1989 through 1991, 1996 through 1998, or in 2005. BLM officials explained that orphaned wells were not reclaimed in those years because the decision to do so is left to the discretion of BLM state office officials. Further, there is no dedicated budget line item to fund orphaned well reclamation; instead, it is dependent on whatever funds are available from BLM state offices and the BLM Washington office. Table 2 provides a summary of the number of wells reclaimed, the expenditures per year, and the states where reclamation occurred by year; table 3 shows the number of wells reclaimed and expenditures by state. 
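As a quick consistency check (illustrative arithmetic only, not an analysis of BLM's underlying data), the per-well average reported above squares with the reported totals:

```python
# Quick arithmetic check: the reported average reclamation cost per well is
# consistent with the reported well count and the "about $3.8 million" total.
wells_reclaimed = 295
average_cost_per_well = 12_788  # dollars, as reported

implied_total = wells_reclaimed * average_cost_per_well
print(f"Implied total: ${implied_total:,}")                    # $3,772,460
print(f"Rounded: ${round(implied_total / 1e6, 1)} million")    # about $3.8 million
```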
BLM has identified an additional 144 orphaned wells on BLM and other federal land that need to be reclaimed in seven states. Although BLM reclamation estimates were not available for all of these wells, officials in BLM field offices have completed reclamation cost estimates for 102 of the 144 wells, for a total estimated cost of $1,683,490. More than half of these wells for which BLM has estimated costs are in Oklahoma—the state with the highest concentration of orphaned wells. The estimated reclamation costs in each state differ substantially—from an average cost per well in Wyoming of $93,641 to a low of $9,100 in Arizona. These differences are due to such factors as well age, well depth, the amount of surface disturbance, and costs for materials and labor. Table 4 shows the orphaned wells and the estimated reclamation costs by state; table 5 shows the wells by surface management agency. In addition, BLM is responsible for reclaiming 67 wells in Alaska that are commonly referred to as legacy wells. Unlike orphaned wells, which were drilled by private-sector operators, legacy wells were drilled by the U.S. Navy and the U.S. Geological Survey from the early 1900s to 1981 on what was then the Naval Petroleum Reserve No. 4—a 23-million-acre roadless area 200 miles north of the Arctic Circle. The wells were drilled to evaluate the mineral potential of the area and to test arctic oil and gas exploration and engineering practices. In 1976, the reserve was renamed the National Petroleum Reserve-Alaska (NPR-A) and its administration was transferred to BLM—including responsibility for reclaiming those wells drilled prior to the transfer. Because of the remote location and difficult weather conditions in the NPR-A, mobilizing equipment and personnel to perform reclamation can be unusually expensive. 
For example, BLM estimates that reclaiming one well—known as Drew Point #1—will cost $23.6 million, owing in part to the well's close proximity—less than 500 feet—to the Arctic Ocean, which is eroding the shore nearby. Although estimates are not available for reclaiming all 67 of these legacy wells, BLM estimated in 2004 that the cost of reclaiming 37 high-priority legacy wells would exceed $40 million. Like BLM, states have bonding requirements for oil and gas operations. However, in most states, bond amounts reflect some of the well's characteristics and are generally higher than BLM's minimum amounts. The states with regulatory minimum bond amounts not based on well characteristics generally have minimum amounts higher than BLM's minimum amounts. In addition, federal regulations for other resources generally require the bonds to reflect the cost of reclamation or have minimum bond amounts that have been more recently established. The 12 western states have bonding requirements for oil and gas operations that differ in their approach from BLM's onshore oil and gas bonding requirements. The states use bonds that cover either all wells in the state (similar to BLM's statewide bond but referred to as statewide blanket bonds), multiple wells in the state (referred to as blanket bonds), or an individual well. Regarding the amount of bond required, the 12 western states generally either use a minimum bond amount established by regulation regardless of the well's characteristics or determine bond amounts based either on the depth of the well(s) or on the total number of wells covered by the bond. The latter approach is often more complex than the regulatory minimum requirements and triggers increases in bond amounts when certain additional factors come into play. For example: For individual wells, Wyoming determines bond amounts based on well depth. 
If the well is less than 2,000 feet deep, the state requires a bond of at least $10,000, and if the well is 2,000 feet or deeper, the state requires a bond of at least $20,000. For statewide bonds, the minimum bond amount is $75,000. However, Wyoming may require an additional bond, currently in the amount of $10 per foot of well depth, when a well is not producing, injecting, or disposing after an operator’s total footage of idle wells reaches a certain threshold. Finally, the amount of this additional bond will increase every 3 years in accordance with the percentage change in Wyoming consumer price index. For statewide bonds, California uses an approach that considers the number of wells and imposes an additional requirement on operators with idle wells. If an operator has 50 or fewer wells, then the bond amount is set at $100,000; if an operator has more than 50 wells exclusive of properly abandoned wells, the bond amount is set at $250,000. In addition to these bond amounts, operators must either (1) pay an annual fee for each idle well, (2) establish an escrow account of $5,000 for each idle well, (3) provide a $5,000 bond per idle well, or (4) have filed a management and elimination plan for all long-term idle wells. In lieu of complying with this requirement for idle wells, operators can post a $1 million statewide bond. In contrast, BLM’s method for deciding when and how much to increase the minimum bond amount is not automatic, unless the operator has previously failed to plug a well or reclaim lands; rather, it is based on the judgment of field and state office officials. Table 6 shows the 12 western states’ bonding requirements. The 12 western states generally require bond amounts that are at least equal to or higher than the minimum amount BLM requires for its individual lease and statewide bonds, or determine the bond amount based on well depth or number of wells covered by the bond. 
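The Wyoming depth rule and California well-count rule just described can be expressed as simple functions. This is a sketch of the stated minimums only; it omits the idle-well surcharges, escrow options, and other conditions in the actual state regulations.

```python
# Sketch of the depth- and count-based minimum bond rules described above.
# Simplified: real Wyoming and California rules include further conditions
# (idle-well surcharges, annual fees, escrow options) not modeled here.

def wyoming_individual_bond(depth_ft: int) -> int:
    """Minimum individual well bond in Wyoming, based on well depth."""
    return 10_000 if depth_ft < 2_000 else 20_000

def california_statewide_bond(num_wells: int) -> int:
    """California statewide bond, based on the number of wells
    (exclusive of properly abandoned wells)."""
    return 100_000 if num_wells <= 50 else 250_000

print(wyoming_individual_bond(1_500))    # shallow well
print(wyoming_individual_bond(8_000))    # deeper well
print(california_statewide_bond(50))     # 50 or fewer wells
print(california_statewide_bond(120))    # more than 50 wells
```

By contrast, BLM's minimums would be a constant function of these inputs, which is the structural difference the comparison above is drawing.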
For example: The 4 states that require minimum bond amounts for individual wells regardless of well depth—Alaska, Idaho, Nevada, and Washington—set minimum bond amounts at $100,000, $10,000, $10,000, and $50,000 per well, respectively. Because these bond amounts are required for each well, in most circumstances they are generally higher than BLM's minimum amount of $10,000 for individual lease bonds since most BLM leases have more than one well. The 7 states whose regulations establish a bond amount or minimum bond amount for statewide or blanket bonds regardless of a well's characteristics—Alaska, Idaho, Montana, Nevada, New Mexico, Washington, and Wyoming—have amounts that range from a high of $250,000 in Washington to a low of $25,000 in Idaho. All states except Alaska, Idaho, Nevada, and Washington determine the amount of individual well bonds based, at least in part, on well depth. Three of the 9 states whose regulations provide for statewide bonds—California, Colorado, and Utah—also determine the amount based on well depth or the number of wells covered by the bond. Because of the nature of these approaches, it is difficult to compare them with BLM's bonding requirements to determine which would result in the higher bond amount. However, these approaches are generally more sophisticated than minimum requirements in that they associate the bond amount with the amount of drilling, which may reduce the potential liability to the states in cases where the operator fails to perform the necessary reclamation. See appendix III for detailed information on the bonding requirements in each of the 12 western states. Regulations governing the extraction of other resources owned by the federal government generally (1) require bond amounts that consider the cost of reclamation, which reduces the government's potential liability for reclamation costs, or (2) use minimum amounts that were established more recently than the amounts for BLM oil and gas bonds. 
First, bonding requirements for the extraction of coal and hardrock minerals—such as gold, silver, and copper—require operators to post bonds that cover the full estimated cost of reclamation. These requirements reduce the potential reclamation liability to the federal government should the operators fail to perform the necessary reclamation. Second, for the remaining types of federally owned resources, minimum bond amounts are established by regulation. These regulations are similar to BLM's regulations; however, these regulatory minimum amounts have generally been established or updated since BLM established its current regulatory minimums for oil and gas leases in 1951 for statewide and nationwide bonds and in 1960 for individual lease bonds. Table 7 provides a summary of the type and amount of bonds required for the extraction or use of federally owned resources. Additional detail on the structure, amount, and types of bonds permitted is contained in appendix IV. GAO provided Interior with a draft of this report for its review and comment. Interior provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees; the Secretary of the Interior; and the Director of the Bureau of Land Management. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
This appendix details the methods we used to examine three aspects of the Department of the Interior's (Interior) Bureau of Land Management (BLM) bonding requirements for BLM oil and gas leases and reclamation of oil and gas wells. Specifically, we were asked to (1) determine the types, value, and coverage of bonds held by BLM for oil and gas operations; (2) determine the amount that BLM has paid to reclaim orphaned wells over the past 20 years, and the number of orphaned wells BLM has identified but has not yet reclaimed; and (3) compare BLM's bonding requirements for oil and gas operations with the bonding requirements the 12 western states use for oil and gas operations on state and private lands and other Interior agencies' bonding requirements for other resources. Overall, we reviewed federal regulations and BLM guidance on bonding for oil and gas leases, and discussed this guidance and a broad range of issues related to how BLM oversees bonding for oil and gas leases during interviews with bonding officials at BLM state offices and field offices in Colorado and Wyoming—two states that have a large number of oil and gas wells and administer bonds that account for a significant amount of the value of BLM-held bonds. For objective one—to determine the number, value, and coverage of bonds, as of December 2008—we analyzed data from BLM's bond and surety system and Automated Fluid Minerals Support System (AFMSS), and met with agency officials who administer the systems. From the bond and surety system, we received 13 tables from BLM containing 747,926 records on bonds from June 19, 1925, to December 17, 2008. We also received 9 tables containing 106,705 records on wells from January 7, 1930, to August 20, 2009, from BLM's AFMSS. Because the bond and surety system contains records on bonds that have been terminated and do not have any well liability attached, we first determined which records contained active bonds. 
Because bond data were limited to records before December 17, 2008, we selected the first day of the final month for which we had data, December 1, 2008. We corroborated the number of active bonds using a range of different methodologies that use other data in the bond and surety system and confirmed that the list of active bonds was sufficiently complete for the purposes of our analysis. To determine the number of bonds, we selected all active bonds as of December 1, 2008, in the bond and surety system and grouped them by bond type into surety or personal bonds. BLM's data further identified personal bonds as letter of credit, time deposit, Treasury security, and guaranteed remittance. We analyzed 43 C.F.R. § 3104.1, which addresses bond types, and spoke to BLM officials before deciding to group the various types of personal bonds into a single personal bond category. To determine the value of bonds, we selected all active bonds as of December 1, 2008, in the bond and surety system and grouped them by unique bond file number. To calculate the total value of all active bonds, we summed the bond amount for all unique bonds. We also grouped bonds by bond type and bond coverage type to calculate the value for each group. Finally, we grouped all bonds by BLM state office using the administrative state field in the bond and surety system and summed the amount of all bonds for each BLM state office, as well as categorizing bonds by bond type and bond coverage type. For bond coverage, we selected active bonds as of December 1, 2008, from the bond and surety system and grouped them by the following categories: individual, statewide, nationwide, and other. The other category included collective (unit) bonds, blanket bonds, and bonds for the National Petroleum Reserve in Alaska. We analyzed 43 C.F.R. 
§§ 3104.2-3104.4 and spoke with BLM officials to determine the appropriate bond coverage type categories, creating the other category for the 6 percent of bonds not typically used for current wells. To determine the number of wells, we received and analyzed data BLM generated from the AFMSS database that included records current as of August 20, 2009. The set of data received from BLM excluded all wells that had been reclaimed prior to this date and whose bonds had been released, helping to ensure that our data only included wells that required a bond. To have the well data match the bonding data, we selected all well records in AFMSS that were drilled before December 1, 2008. We identified wells using the well’s unique American Petroleum Institute number, which is assigned when the well is drilled. In addition to information on producing wells, the data also included information on wells that were shut in (i.e., could return to production) and temporarily abandoned (i.e., could be used for a purpose other than producing oil or gas). We also grouped these wells by their BLM state office using a location field in AFMSS. To determine the number of leases, we grouped the number of wells listed before December 1, 2008, by unique lease number, and analyzed these leases by state using the location field of the lease within AFMSS. Because the AFMSS system can generate current data only, our analysis excludes those wells that were reclaimed between December 1, 2008, and August 20, 2009. Although these wells were not included in our totals, we concluded the data were sufficiently reliable for the purpose of our analysis, as data published in BLM’s Public Land Statistics show that only 231 wells were plugged and abandoned in all of fiscal year 2008. We also compared our total number of wells with the total number of wells in the fiscal year 2008 BLM Public Land Statistics. 
We determined that the difference between our total for December 1, 2008, and BLM’s total for September 30, 2008—a difference of about 3 percent—did not significantly affect our analysis. For figure 2 in the report—the number of wells and value of bonds, from September 30, 1988, to September 30, 2008 (the most current date for which BLM data were available)—we selected five dates at 5-year intervals for the past 20 years, and calculated the total value of all bonds using data in the bond and surety system and the number of wells from BLM Public Land Statistics. We used the following dates to assess coverage: September 30, 2008; September 30, 2003; September 30, 1998; September 30, 1993; and September 30, 1988. For each of these dates, we selected all active bonds, providing us with those bonds that were accepted, but not terminated, before each of the five dates. To calculate the total value of these bonds, we grouped unique bonds for each of the five dates, and summed the bond amount field in the bond and surety system. To calculate well totals, we were limited by the dynamic nature of AFMSS, which restricted us from calculating the number of active wells for specific dates in the past. Due to this limitation, we relied on BLM’s Public Land Statistics for the well totals for our specified dates. For figure 3 in the report—individual, statewide, and nationwide current bond minimums adjusted to 2009 dollars—we used the bond minimums established in 43 C.F.R. §§ 3104.2, 3104.3 and searched the Federal Register to determine the dates the bond minimums were established. We then calculated the amount of each bond minimum in 2009 dollars. 
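A minimal sketch of the point-in-time selection described above: a bond counts as active on a given date if it was accepted before that date and not yet terminated. The record layout and field names here are hypothetical, not BLM's actual bond and surety system schema.

```python
# Sketch of selecting bonds active on a given date and summing their value,
# as in the five point-in-time totals described above. Field names are
# hypothetical and the records are made-up examples, not BLM data.
from datetime import date

bonds = [
    {"bond_no": "A1", "amount": 25_000, "accepted": date(1987, 5, 1), "terminated": None},
    {"bond_no": "B2", "amount": 150_000, "accepted": date(1995, 3, 2), "terminated": date(2001, 7, 9)},
    {"bond_no": "C3", "amount": 10_000, "accepted": date(2004, 1, 15), "terminated": None},
]

def total_active_value(bonds, as_of):
    """Sum the amounts of bonds accepted before, and not terminated by, as_of."""
    return sum(
        b["amount"]
        for b in bonds
        if b["accepted"] < as_of and (b["terminated"] is None or b["terminated"] > as_of)
    )

for as_of in [date(1988, 9, 30), date(1998, 9, 30), date(2008, 9, 30)]:
    print(as_of, total_active_value(bonds, as_of))
```

In this toy data, bond B2 contributes to the 1998 total but drops out of the 2008 total because it was terminated in 2001, which mirrors why the active-bond set had to be rebuilt for each of the five dates.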
We reviewed the reliability of the data we used from the bond and surety system and AFMSS and found these data sufficiently reliable for the purpose of our review, including: total number of bonds, total number of wells, number and value of bonds by bond type, number and value of bonds by coverage type, number of wells by state, number of leases by state, number and value of bonds by state, average value of bonds by state office, number and value of bond types by state office, and number and value of coverage types by state office. To test the sufficiency of the bond and surety system and AFMSS data used to calculate the number, types, values, and coverage of bonds, we electronically tested the database and conducted interviews with BLM staff responsible for the integrity of the data. We also electronically tested all fields related to our analysis, including tests for null values, duplicate records, accurate relationships between code and text fields, and outliers. We also conducted 20 interviews with BLM staff between December 12, 2008, and November 13, 2009, on the following topics: data entry, use of data, completeness of data, accuracy of data, edit checks, supervisory oversight, internal reviews, different data fields, and data limitations. We determined that there were no significant issues with the bond and surety system and AFMSS data we used to calculate the number, types, value, and coverage of bonds. To address our second objective—determine how much BLM has paid to reclaim orphaned wells over the past 20 years, and how many wells BLM has yet to reclaim—we obtained data collected by BLM officials from BLM field and state offices. To determine the expenditures for reclaiming orphaned wells, we obtained data for fiscal years 1988 through November 30, 1994, from a 1995 BLM report. We obtained data through fiscal year 2009 from BLM officials. 
These data included federal dollars paid to reclaim orphaned wells, the number of wells reclaimed, and their location. To determine the number of orphaned wells yet to be reclaimed, we reviewed BLM's Instructional Memorandum No. 2007-192, which directs BLM field office staff to report data on orphaned wells to BLM's Washington Office. The Instructional Memorandum directs field office staff to complete an "Orphaned Well Scoring Checklist" for each orphaned well identified. This checklist asks for such information as the well's location; well name; and other factors relating to reclamation, such as the well depth or estimated reclamation cost. We reviewed these checklists and analyzed all available estimated reclamation amounts. We then calculated and summarized estimated reclamation cost data by state and surface management agency. To address our third objective—compare BLM's bonding methods with those used by the 12 western states and other Interior agencies—we analyzed state oil and gas bonding laws and regulations, as well as federal bonding regulations for the extraction or use of other federally owned resources. These federal agencies and resources included BLM Geothermal Energy, BLM Hardrock Minerals, BLM Mineral Materials, BLM Solid Minerals, Minerals Management Service Offshore Oil and Gas Leasing, and Office of Surface Mining Reclamation and Enforcement Coal Leasing. We summarized the bonding requirements, including scope, structure, amount, and method for determining bond amounts. We conducted our work from January 2009 to January 2010 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. 
This appendix provides information on BLM held oil and gas bonds from BLM's AFMSS and bond and surety systems, including the number, value, and average value of all BLM held bonds (table 8); the number and value of surety and personal bonds (tables 9 and 10); and the number and value of individual, statewide, nationwide, and other bonds (tables 11 and 12).

The following describes each state's oil and gas bonding requirements: the types of bonds, required amounts, acceptable financial instruments, and bond conditions.

Alaska. Types of bonds: single well bond; blanket bond covering all of the operator's wells in the state. Amount: not less than $100,000 for a single well bond, unless the applicant demonstrates that the cost of well abandonment (plugging) and location clearance will be less than $100,000; not less than $200,000 for a blanket bond. Acceptable instruments: surety bond by an authorized insurer who is in good standing; personal bond and security in the form of (1) a certificate of deposit, (2) irrevocable letter of credit, or (3) an otherwise adequate security. Conditions: conditioned on drilling, plugging dry or abandoned wells, repairing wells causing waste or pollution, maintaining and restoring well sites, and acting in accordance with the applicable laws and regulations.

Arizona. Types of bonds: individual well bond; blanket bond to cover multiple wells. Amount: $10,000 for wells 10,000 feet or less deep and $20,000 for wells deeper than 10,000 feet; for blanket bonds, $25,000 for 10 or fewer wells, $50,000 for more than 10 but fewer than 50 wells, and $250,000 for 50 or more wells. Acceptable instruments: surety bond by a corporate surety authorized to do business in Arizona; certified check from a bank whose deposits are federally insured. Although the state Oil and Gas Conservation Commission is authorized to issue a rule requiring an additional bond if the surface landowner is not in a contractual relationship with the drilling permittee, no such rule has been issued. Conditions: conditioned on compliance with all statutory requirements for drilling, redrilling, deepening, or permanently altering the casing of the well.

California. Types of bonds: individual indemnity bond; blanket indemnity bond covering all wells in the state; idle well indemnity bond. Amount: for individual bonds, $15,000 for wells less than 5,000 feet deep, $20,000 for wells at least 5,000 feet deep but less than 10,000 feet deep, and $30,000 for each well 10,000 or more feet deep. For blanket bonds, $100,000 plus the idle well bond for operators having 50 or fewer wells, exclusive of properly abandoned wells, and $250,000 plus the idle well bond for operators having more than 50 wells; if a bond was provided prior to Jan. 1, 1999, its amount must be increased by a minimum of $30,000 per year beginning on Jan. 1, 2000, until the bond reaches $250,000. For idle well bonds, $5,000 per well, if the operator chooses to post a bond rather than pay an annual fee, open an escrow account, or have filed a management plan by July 1, 1999; in lieu of these idle well requirements, a $1 million bond. Acceptable instruments: certificates of deposit that (1) do not exceed the federally insured amount, (2) are insured, and (3) are issued by a bank or savings association authorized to do business in California; savings accounts and evidence of the deposit in the account (the account cannot exceed the federally insured amount and must be federally insured and with a bank authorized to do business in California); investment certificates or share accounts issued by savings associations authorized to do business in California (the account's balance cannot exceed the federally insured amount and must be insured); and share accounts issued by credit unions whose share deposits are guaranteed (the account's balance cannot exceed the guaranteed amount).

Colorado. Every operator must provide assurance that it is financially capable of fulfilling applicable requirements (1) to protect the health, safety, and welfare of the general public in the conduct of the oil and gas operations; (2) to ensure proper reclamation of the land and soil affected by oil and gas operations and to ensure the protection of the topsoil of said land during such operations; and (3) associated with terminating operations and permanent closure. Types of financial assurance: (1) Surface Owner Protection Financial Assurance, an individual well or statewide blanket bond to protect surface owners who are not parties to a lease or other agreement with the operator from unreasonable crop loss or land damage; and (2) Soil Protection, Plugging, Abandonment and Site Reclamation Financial Assurance, an individual or statewide blanket bond. Amount: for Surface Owner Protection, $2,000 per well for non-irrigated land, $5,000 per well for irrigated land, and $25,000 for a statewide blanket bond; for Soil Protection, Plugging, Abandonment and Site Reclamation, $10,000 per well for wells less than 3,000 feet deep, $20,000 per well for wells greater than or equal to 3,000 feet deep, and statewide blanket amounts of $60,000 for less than 100 wells and $100,000 for 100 or more wells. If the operator has excess inactive wells, the financial assurance amount increases by $10,000 for each excess inactive well less than 3,000 feet deep and $20,000 for each excess inactive well greater than or equal to 3,000 feet deep; the Commission can modify or waive this increase if the operator submits a plan for (1) returning the wells to production in a timely manner or (2) plugging and abandoning the wells on an acceptable schedule. The Oil and Gas Conservation Commission also has the authority to increase any of these amounts for an operator under certain circumstances. Acceptable instruments: surface owner protection liability insurance; bond or other surety instrument; letter of credit; certificate of deposit; escrow account or sinking fund; lien or other security interest in real or personal property of the operator that is acceptable to the Commission and reviewed annually; or evidence that the operator has sufficient net worth to guarantee performance, which the Commission must review annually. Additional financial assurances are required for off-site, centralized exploration and production waste management facilities and seismic operations. Conditions: conditioned upon compliance with the legal and regulatory requirements for drilling, maintaining, operating, and plugging of each oil and gas well.

Idaho. Types of bonds: individual well bond; statewide blanket bond. Amount: individual bond of not less than $10,000 per well; blanket bond of not less than $25,000 for all wells in the state. Acceptable instruments: surety bond by a corporate surety authorized to do business in Idaho; cash. Separate bond requirements govern wells on state and school lands. Conditions: conditioned on properly plugging each dry or abandoned well and restoring the surface of the location.

Montana. Types of bonds: single well bond; multiple well bond. Amount: $1,500 if the well's depth is 2,000 feet or less (the Board of Oil and Gas Conservation can increase this to $3,000 under certain circumstances); $5,000 if the well's depth is greater than 2,000 feet and less than 3,501 feet (the Board can increase this to $10,000 under certain circumstances); $10,000 where the well's depth is 3,501 feet or more (the Board can increase this to $20,000 under certain circumstances); and $50,000 for a multiple well bond (the Board can increase this to $100,000 under certain circumstances and/or limit the number of wells that can be covered by a multiple well bond). If existing wells are covered by a bond with an amount less than $25,000, the owner or operator must increase coverage to $25,000. Acceptable instruments: surety bond issued by a company licensed to do business in Montana; federally insured certificate of deposit held by a Montana bank; letter of credit issued by a Montana commercial bank whose deposits are FDIC insured. Conditions: conditioned on (1) dry or abandoned wells being plugged in accordance with state regulations and (2) operation and repair of the well in a manner that does not cause waste.

Nevada. Types of bonds: individual well bond; blanket statewide bond. Amount: individual well bond of not less than $10,000; blanket statewide bond of not less than $50,000. Acceptable instruments: bond issued by a corporate surety authorized to do business in Nevada and approved by the state regulatory agency; cash deposit; savings certificate or time certificate of deposit issued by a bank or savings and loan association in Nevada. Conditions: conditioned on the well being plugged and abandoned and the location restored and remediated in compliance with applicable rules. 
The financial assurance is not to secure payment for damages to livestock, range, crops or tangible improvements or any other purpose. assurance. Irrevocable letter of credit that meets certain conditions. Blanket financial assurance for all wells statewide. $5,000 plus $1 per foot of well depth in certain counties. $10,000 plus $1 per foot of well depth in all other counties. federally insured account in New Mexico. Surety bond that meets certain conditions. $50,000. Insurance policy that meets certain requirements. Wells that have been in temporary abandonment for more than 2 years must be covered by a one-well financial assurance, unless the well is shut-in because of the lack of a pipeline connection. Bond will not be released unless well has been properly abandoned, including site reclamation. Single well bond. Surety bond. Blanket bond for multi- well operations. $10,000 for wells less than 2,000 feet deep. $15,000 for wells between 2,000 and 5,000 feet deep. $25,000 for wells greater than 5,000 feet deep. The Department of Geology and Mineral Industries has the discretion to accept an irrevocable letter of credit or other form of financial security. Amount equals the sum of individual bonds required for the wells, although some wells might be excluded from this calculation. $100,000. Conditioned upon the operator plugging each dry or abandoned well, repairing each well causing waste or pollution, and maintaining and restoring the well site. Individual well bond. Statewide blanket bond. At least $1,500 for a well less than 1,000 feet deep. Surety bond with performance guarantee of a corporation that meets certain requirements. At least $15,000 for a well more than 1,000 feet deep but less than 3,000 feet deep. $30,000 for a well more than 3,000 feet deep but less than 10,000 feet deep. At least $60,000, for wells more than 10,000 feet deep. At least $15,000 for wells less than 1,000 feet deep. At least $120,000 for wells more than 1,000 feet deep. 
If the Division determines that these amounts will be insufficient to cover the costs of well plugging and site restoration, a change in the form or amount of bond coverage may be required. The Board has the discretion to allow bond coverage in a lesser amount for a specific well. (3) negotiable certificates of deposit issued by a federally insured bank authorized to do business in Utah that do not exceed FDIC insurance limits; (4) irrevocable letter of credit that meets certain requirements. Since July 1, 2003, operators who want to establish a new blanket bond that consists either fully or partially of a collateral bond must be qualified by the Division first. If the Division finds that a well is violating regulatory requirements for shut-in and temporarily abandoned wells, the required bond amount increases to the cost of actual plugging and site restoration costs. A combination of a surety and collateral bond. Individual well bond. each dry or abandoned well, reclaiming and cleaning up the well drilling site, repairing wells that cause waste, and complying with all applicable laws, regulations, orders, and permit conditions, including regulations and guidelines for reclamation of land impacted by oil and gas drilling and production activities. Surety bond that meets certain requirements. Statewide blanket bond. Not less than $50,000 for most wells. Cash deposit. $20,000 for wells less than 2,000 feet deep drilled solely to obtain subsurface geological data. Savings account assigned to the state. Not less than $250,000. a Washington bank and guarantee of payment of the principal in the event penalties are assessed for early redemption. Letter of credit from bank acceptable to the State Oil and Gas Supervisor. 
Conditioned on (1) the well being operated and maintained so as not to cause waste or damage to the environment; (2) plugging each permanently abandoned well in accordance with regulations; (3) reclamation of area affected by the oil or gas operations; and (4) compliance with all applicable laws, regulations, and orders. Individual well bond. The state Oil and Gas Conservation Commission can increase the amounts listed below after notice and a hearing if good cause can be shown. Surety bond. Statewide blanket bond. Cashier’s check and binding, first-priority pledge agreement. To secure payment of damages to the surface owner. Instead of posting a bond, the operator can execute an agreement with a surface owner (1) addressing compensation for damages to land and improvements; or (2) waiving the surface owner’s right to seek damages. $10,000 for wells less than 2,000 feet deep. for an initial term of not less than 1 year that renews automatically and a binding, first- priority pledge agreement. $20,000 for wells 2,000 feet or more deep. $75,000. Letter of credit issued by an FDIC-insured bank with an initial expiration date of not less than 1 year from date of issuance and that is automatically renewed. An increased bond level up to $10 per foot may be required for each idle well once the operator’s total footage of idle wells exceeds a certain threshold. The level of additional bonding will increase every 3 years in accordance with the percentage change in the Wyoming consumer price index. The operator can request a different bonding level based on evaluation of specific well conditions and circumstances. In lieu of additional bonding, the supervisor may accept a detailed plan of operation which includes a time schedule to permanently plug and abandon idle wells. Individual well bond of not less than $2,000 per well on the land. Blanket bond amount is determined by the oil and gas supervisor. 
The state’s oil and gas supervisor has discretion in establishing the amount of these bonds. Performance bond for the entire permit area. Cumulative bond schedule and the performance bond required for full reclamation of the initial area to be disturbed. Incremental bond schedule and the performance bond required for the first increment in the schedule. Alternative bonding system if it achieves certain objectives and purposes. The amount of the bond required for each bonded area shall (1) be determined by the regulatory authority; (2) depend upon the requirements of the approval permit and reclamation plan; (3) reflect the probable difficulty of reclamation, given consideration to such factors as topography, geology, hydrology, and revegetation potential; and (4) be based on, but not limited to, the estimated cost submitted by the permit applicant. Surety bond that meets certain requirements. The amount of the bond shall be sufficient to assure the completion of the reclamation plan if the work has to be performed by the regulatory authority in the event of forfeiture. Collateral bond (including cash; cash accounts that do not exceed FDIC insurable limits; certificates of deposit that do not exceed FDIC insurable limits and meet other requirements; a first mortgage, first deed of trust, or perfected first-lien security interest in real property; and irrevocable letters of credit that meet certain requirements). In no case shall the total bond initially posted for the entire area under one permit be less than $10,000. Self-bond (indemnity agreement executed by the applicant or the applicant and a corporate guarantor that meets certain requirements). The regulatory authority must adjust the bond amount from time to time as the area requiring bond coverage is increased or decreased or where the cost of future reclamation changes. A combination of any of these types. 
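Several of the state schedules above tie the required bond amount to well depth. As a minimal illustration, here is a sketch of a depth-based lookup using Montana's single-well amounts as summarized in this appendix (the function name and interval logic are ours, not the state's, and Board-ordered increases are not modeled):

```python
def montana_single_well_bond(depth_ft: int) -> int:
    """Base single-well bond amount under Montana's depth schedule.

    The Board of Oil and Gas Conservation can increase each tier
    (to $3,000, $10,000, and $20,000, respectively) under certain
    circumstances; those increases are not modeled here.
    """
    if depth_ft <= 2_000:    # 2,000 feet or less
        return 1_500
    elif depth_ft <= 3_500:  # greater than 2,000 and less than 3,501 feet
        return 5_000
    else:                    # 3,501 feet or more
        return 10_000

for depth in (1_500, 2_500, 4_000):
    print(depth, montana_single_well_bond(depth))
```

A multiple-well (blanket) bond in Montana is a flat $50,000 rather than a per-well sum, which is why blanket coverage can cost an operator far less per well than individual bonding.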
Offshore oil and gas leases (MMS). To guarantee compliance with all terms and conditions of the lease, including structure removal and site clearance. Types: lease-specific bond; areawide bond. The amount of the bond is determined by the stage of development/activity: general lease bond, $50,000 lease-specific or $300,000 areawide; lease exploration bond, $200,000 lease-specific or $1 million areawide; development and production activities bond, $500,000 lease-specific or $3 million areawide. Posting a lease exploration bond exempts the owner/operator from posting a general lease bond, and posting a development and production activities bond exempts the owner/operator from posting a general lease bond and a lease exploration bond. Acceptable forms: surety bond issued by a surety company approved by the Department of the Treasury (Treasury); Treasury securities; alternative types of securities, provided the MMS Regional Director determines that the alternative protects the interest of the United States to the same extent as the required bond; or a combination of these types. MMS can require additional security if it determines that it is necessary to ensure compliance; such a determination is based on an evaluation of the lessee's ability to carry out present and future financial obligations. In lieu of an additional bond, MMS may authorize the lessee to establish a lease-specific abandonment account or a third-party guarantee.

Onshore federal oil and gas leases (BLM). To ensure compliance with the Mineral Leasing Act of 1920, as amended, including complete and timely plugging of the well(s), reclamation of the lease area(s), and the restoration of any lands or surface waters adversely affected by lease operations. Types: individual lease bond; statewide bond; nationwide bond. Amounts: not less than $10,000 for an individual lease, $25,000 statewide, and $150,000 nationwide. If an operator has forfeited a financial assurance in the previous 5 years because of failure to plug a well or reclaim lands in a timely manner, BLM must require a bond in an amount equal to the estimated costs of plugging the well and reclaiming the disturbed area before approving an application for a permit to drill. BLM also has the authority to require an increase in the bond amount whenever it determines that the operator poses a risk due to factors including, but not limited to, a history of previous violations, a notice from MMS that there are uncollected royalties due, or a total cost of plugging existing wells and reclaiming lands that exceeds the present bond amount. Acceptable forms: surety bond issued by a qualified surety company approved by Treasury; personal bonds accompanied by (1) a certificate of deposit issued by an institution whose deposits are federally insured, (2) a cashier's check, (3) a certified check, (4) negotiable Treasury securities, or (5) an irrevocable letter of credit that meets certain conditions.

Petroleum reserve leases (BLM). To ensure compliance with all the lease terms, including rentals and royalties, conditions, and stipulations. Types: individual lease bond; reserve-wide bond (either as a rider to an existing nationwide bond or as a separate bond). Amounts: individual lease, $100,000. Acceptable forms: surety bond issued by a qualified surety company approved by Treasury; personal bonds secured by (1) a certificate of deposit issued by an institution whose deposits are federally insured, (2) a cashier's check, (3) a certified check, (4) negotiable Treasury securities, or (5) an irrevocable letter of credit that meets certain conditions.

Geothermal leases (BLM). To cover (1) any activities related to exploration, drilling, utilization, or associated operations on a federal lease; (2) reclamation of the surface and other resources; (3) rental and royalty payments; and (4) compliance with applicable laws, regulations, notices, orders, and lease terms. Types: individual lease bond; statewide activity bond; nationwide activity bond. Amounts: individual, $10,000; statewide, $50,000; nationwide, $150,000; Electrical Generation Facility, at least $100,000; Direct Use Facility, BLM will specify the amount. BLM has the authority to increase these bond amounts when (1) the operator has a history of noncompliance; (2) BLM previously made a claim against a surety company because someone covered by the current bond failed to plug and abandon a well and reclaim the surface in a timely manner; (3) a person covered by the bond owes uncollected royalties; or (4) the bond amount will not cover the estimated reclamation cost. Acceptable forms: corporate surety bond issued by a surety company approved by Treasury; personal bonds secured by (1) a certificate of deposit issued by a federally insured financial institution authorized to do business in the United States, (2) a cashier's check, (3) a certified check, (4) negotiable securities, such as Treasury notes, or (5) an irrevocable letter of credit that meets certain conditions. Released when (1) all royalties, rentals, penalties, and assessments are paid; (2) all permit or lease obligations are satisfied; (3) the site is reclaimed; and (4) effective measures are taken to ensure that the mineral prospecting or development activities will not adversely affect surface or subsurface resources.

Other mineral leases and prospecting permits (BLM). Types: individual lease bond; statewide bond (to cover all leases of the same mineral); nationwide bond (to cover all leases of the same mineral). Amounts: individual lease, minimum $5,000 (minimum $1,000 for prospecting permits); statewide, minimum $25,000; nationwide, $75,000. BLM determines bond amounts considering the cost of complying with all permit and lease terms, including royalty and reclamation requirements. Acceptable forms: surety bond issued by a qualified company approved by Treasury; personal bonds in the form of (1) a cashier's check, (2) a certified check, or (3) a negotiable Treasury bond.

Mineral materials sales contracts (BLM). To meet the reclamation standards specified in the mineral materials sales contract: a performance bond for the contract. No performance bond is required if the sales contract is from a community pit or common use area and the party pays a reclamation fee. For contracts of $2,000 or more, BLM will establish a bond amount sufficient to meet the contract's reclamation standards, but not less than $500. For contracts of less than $2,000, BLM may require a bond; if it does, the bond cannot exceed 20 percent of the total contract amount. Acceptable forms: corporate surety bond issued by a company approved by Treasury; a deposit instrument that is issued by an institution whose deposits are insured and that does not exceed the maximum FDIC insurable amount; cash bond; irrevocable letter of credit from a bank or financial institution organized or authorized to do business in the United States; or a bond of the United States.

Mining operations under notices or plans of operations (BLM). Types: individual financial guarantee covering a single notice or plan of operations; blanket financial guarantee covering statewide or nationwide operations. The amount is based on the estimated cost as if BLM were to contract with a third party to reclaim the operations according to the reclamation plan, including construction and maintenance costs for any treatment facilities necessary to meet federal and state environmental standards, as well as the infrastructure maintenance costs needed to maintain the area of operations in compliance with applicable environmental requirements while third-party contracts are developed and executed. Consideration is also given to the risk of environmental damage and to whether the operator has an excellent past record for reclamation. In addition to the financial guarantee, BLM may require the establishment of a trust fund or other funding mechanism to ensure the continuation of long-term treatment to achieve water quality standards and to meet other long-term, post-mining maintenance requirements; the funding must be adequate to provide for construction, long-term operation, maintenance, or replacement of any treatment facilities and infrastructure for as long as the treatment and facilities are needed after mine closure. Acceptable forms: surety bonds that meet certain requirements; funds held in a federal depository account of the U.S. Treasury; irrevocable letters of credit from a financial institution organized or authorized to do business in the United States; deposit or savings accounts not in excess of the FDIC insurable amount; negotiable U.S. government, state, and municipal securities or bonds maintained in a Securities Investor Protection Corporation-insured trust account by a licensed securities brokerage firm; investment-grade securities having a Standard and Poor's rating of AAA or AA, or an equivalent rating from a nationally recognized securities rating service, maintained in a Securities Investor Protection Corporation-insured trust account by a licensed securities brokerage firm; insurance that meets regulatory requirements; financial assurances under state law or regulation; and trust funds or other funding mechanisms.

In addition to the contact named above, Andrea Wamstad Brown (Assistant Director), Jeffrey B. Barron, Casey L. Brown, Jerome Sandau, Jeanette Soares, Anne Stevens, Carol Herrnstadt Shulman, and Walter Vance made key contributions to this report.
If the bond is not sufficient to cover well plugging and surface reclamation and there are no responsible or liable parties, the well is considered "orphaned," and BLM uses federal dollars to fund reclamation. The 12 western states where most oil and gas production occurs and other Interior agencies also require bonds to ensure reclamation. GAO was asked to (1) determine the number, value, and coverage of bonds held by BLM for oil and gas operations; (2) determine the amount that BLM has paid to reclaim orphaned wells over the past 20 years and the number of orphaned wells BLM has identified but has not yet reclaimed; and (3) compare BLM's bonding requirements for oil and gas operations with those the 12 western states use for oil and gas operations on state and private lands and other Interior agencies' bonding requirements for other resources. Among other things, GAO analyzed BLM data on wells and BLM-held bonds, and interviewed BLM officials. According to GAO's analysis of BLM data, as of December 2008, oil and gas operators had provided 3,879 bonds, valued at $162 million, to ensure compliance with lease terms and conditions for 88,357 wells. BLM regulations establish minimum bond amounts: $10,000 for an individual lease, $25,000 to cover all leases of a single operator in a state, and $150,000 to cover all leases of a single operator nationwide. The bond amount for individual leases was set in 1960, while the statewide and nationwide bond amounts were set in 1951. For fiscal years 1988 through 2009, BLM spent about $3.8 million to reclaim 295 orphaned wells in 10 states and has identified an additional 144 orphaned wells in 7 states that need to be reclaimed, according to BLM. The amount spent per reclamation project varied greatly, from a high of $582,829 for a single well in Wyoming in fiscal year 2008 to a low of $300 for 3 wells in Wyoming in fiscal year 1994. 
BLM reclamation cost estimates were not available for all of the wells it has yet to reclaim, but BLM field office officials have completed reclamation cost estimates of approximately $1.7 million for 102 of the 144 orphaned wells. The 12 western states (Alaska, Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, and Wyoming) and other Interior agencies and offices have bonding approaches that differ from BLM's oil and gas bonding requirements. The states generally require higher bond amounts than the minimum amounts established by BLM regulations for individual and statewide oil and gas leases. Regulations governing the extraction or use of other federally owned resources generally require bond amounts based on the cost of reclamation or use minimum amounts that were established more recently than the bond amounts for oil and gas. GAO provided a draft of this report to the Department of the Interior for review and comment. The Department provided technical comments, which were incorporated as appropriate.
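The aggregate figures above imply some rough per-well averages. A back-of-the-envelope sketch (approximate, since the report rounds the dollar totals):

```python
# Figures from the report.
total_bond_value = 162_000_000   # value of BLM-held bonds as of December 2008
wells_covered = 88_357           # wells those bonds cover

reclamation_spent = 3_800_000    # spent reclaiming orphaned wells, FY 1988-2009
wells_reclaimed = 295            # orphaned wells reclaimed in that period

avg_bond_per_well = total_bond_value / wells_covered
avg_cost_per_orphan = reclamation_spent / wells_reclaimed

print(f"Average bond value per covered well: ${avg_bond_per_well:,.0f}")
print(f"Average reclamation cost per orphaned well: ${avg_cost_per_orphan:,.0f}")
```

On the report's rounded totals, these work out to roughly $1,800 of bond value per covered well versus roughly $12,900 spent per reclaimed orphaned well, which illustrates why minimum bond amounts set in the 1950s and 1960s can fall well short of actual reclamation costs.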
WHO was established in 1948 as the directing and coordinating authority on global health within the UN system. WHO's stated mission is the attainment by all peoples of the highest possible level of health. WHO experts produce health guidelines and standards and assist countries in addressing public health issues. WHO's membership comprises 194 countries and associate members, which meet every year at the World Health Assembly, WHO's supreme governing body, to set policy and approve the budget. The work of the World Health Assembly is supported by an Executive Board that meets at least twice a year and is composed of 34 members who are technically qualified in the field of health and who hold 3-year terms. The main functions of the Executive Board are to carry out the decisions and policies of the World Health Assembly, provide advice, and facilitate its work. WHO is headed by the Director-General, who is appointed by the World Health Assembly every 5 years. WHO is staffed by approximately 8,000 health and other experts and support staff, working at WHO headquarters in Geneva, Switzerland; six regional offices; and 147 country offices. Each WHO region has a regional committee composed of representatives from the region's member states, which formulates policies and programs and supervises the work of the regional offices. The regional committees also provide input into global policy and program development through regional consultations. WHO country offices support host countries in policy making, capacity building, and knowledge management, among other things, in the public health sector. Figure 1 shows the WHO regions, their program budgets for 2010 through 2011, and staffing levels. WHO's total program budget for the 2010-2011 biennium was about $4.5 billion, with staff costs accounting for more than 50 percent of the budget.
For the 2010-2011 program budget, the portion of assessed contributions was about 21 percent of the total (approximately $900 million), while voluntary contributions accounted for about 79 percent of the total (approximately $3.6 billion). Voluntary contributions have increased from about 69 percent of the program budget in the 2004-2005 biennium to 79 percent in 2010-2011 (see fig. 2). During the 2010-2011 biennium, the largest annual assessed contributions from member states came from the United States ($219 million), Japan ($135 million), Germany ($77 million), the United Kingdom ($62 million), and France ($59 million). While member states are the only entities that provide assessed contributions to WHO's program budget, voluntary contributions come from a diverse group of more than 400 entities, including member states, foundations, nongovernmental organizations, UN agencies, and private sector companies. During the 2010-2011 biennium, the United States was the largest donor of voluntary contributions to WHO, followed closely by the Bill and Melinda Gates Foundation. During this time period, the top 10 donors to WHO provided over two-thirds of its total voluntary contributions (see table 1). According to WHO officials, most of WHO's voluntary contributions budget is designated by donors for specific diseases and projects. WHO's financial reporting identifies 13 areas, known as strategic objectives, among which its funding is distributed (see table 2). WHO's program budget for 2010 through 2011 was about $4.5 billion. More than half of that amount was allocated for communicable diseases; HIV/AIDS, tuberculosis, and malaria; and WHO's enabling and support functions. In January 2010, the WHO Director-General convened representatives of member states for a high-level consultation on the predictability and flexibility of WHO's financing and other global health challenges, such as WHO's changing role in the international health arena and WHO priorities.
While discussions to reform WHO initially began with a focus on its lack of predictable and flexible financing and the need for better alignment between its objectives and resources, WHO’s reform efforts have evolved to address more fundamental questions about its priorities, internal governance, role and engagement with other actors in the global health arena, and the managerial reforms needed to make the organization more effective and accountable. In 2010, WHO became concerned with its financial position, particularly due to increased costs resulting, in part, from a decline in the value of the U.S. dollar. In response, the organization implemented several cost-saving measures, such as reducing travel and publications costs. WHO’s financial concerns at the time and the results of two external evaluations of the organization served as additional rationales for WHO to undertake a broad management reform agenda. In May 2011, the World Health Assembly passed a resolution endorsing WHO’s overall direction of reform. The United States is a major participant in WHO’s governing bodies, with HHS, State, USAID, and CDC playing key roles in participating in and representing U.S. interests in WHO. The Secretary of HHS leads the U.S. delegation to the World Health Assembly, and the Director of HHS’s Office of Global Affairs serves as the U.S. Representative to the WHO Executive Board. HHS is responsible for coordinating U.S. government input into the policies and decisions of health-related international organizations, including WHO. Programmatically, HHS collaborates closely with WHO through its agencies and offices, including CDC and the National Institutes of Health (NIH). HHS’s efforts in conjunction with WHO occur in areas such as HIV/AIDS, tuberculosis, mental health, malaria, and polio eradication. HHS also participates in the governing bodies of certain regional offices, including the regional offices for the Americas and the Western Pacific. 
HHS works closely with State's Bureau of International Organization Affairs, which has responsibility for issues related to budgets, audits, human resources, and financial management. Preparation for the governing body meetings, such as the World Health Assembly, is a process that includes coordination among HHS, State, USAID, and other stakeholders throughout the year in the development of U.S. policy positions and programmatic strategies. In addition to governing body meetings, USUN-Geneva leads day-to-day engagement with WHO officials, with support from HHS, CDC, State, and USAID. There are 35 CDC staff assigned to WHO offices throughout the world, including 9 staff at WHO headquarters in Geneva working in areas such as measles, influenza, and polio. According to HHS officials, other U.S. agencies also periodically work with WHO on health issues. For example, the Department of Defense works with WHO on health security and disease detection, and in September 2011, WHO and the U.S. government signed a memorandum of understanding regarding cooperation on global health security initiatives. In addition, USAID has several ongoing grants to WHO, including grants to headquarters and to country and regional offices, in areas such as influenza, malaria, maternal and child health, and HIV/AIDS.
UNTAI identifies eight goals through which member states can exercise greater oversight and promote increased transparency and accountability to ensure efficiency and effectiveness. These goals include public access to all relevant documentation related to operations and activities, whistleblower protection policies, financial disclosure programs, an effective ethics office, independence of the respective internal oversight bodies, and adoption of international accounting standards. As part of this initiative, State conducts regular assessments to measure UN agency performance and progress toward the eight goals laid out by UNTAI. The assessment presents information concerning the status of each assessed agency against specific benchmarks established by State. These assessments are intended to help the U.S. government identify weaknesses and prioritize engagement at individual UN agencies. In 2011, State established UNTAI phase 2 and revised the goals and benchmarks from UNTAI phase 1. UNTAI phase 1 sought to extend reforms already in place at the UN Secretariat to the rest of the UN system, while UNTAI phase 2 was designed to build on UNTAI's successes and focus on further raising accountability standards for the UN system. For example, UNTAI phase 2 added oversight of procurement because the United States has identified this as a high-risk area. Other changes to the UNTAI assessment tool include enterprise risk management and ethics issues such as nepotism, post-employment restrictions, and conflicts of interest. WHO developed a reform agenda that generally aligns with the challenges identified by stakeholders. In May 2012, member states approved components of WHO's reform agenda, encompassing three broad areas—priority setting, governance, and management reforms—that generally align with challenges identified by stakeholders.
According to WHO officials, member state representatives, and other stakeholders, some of the challenges facing WHO include (1) its lack of clear organizational priorities; (2) lack of predictable and flexible financing; and (3) highly decentralized organizational structure. In developing its reform agenda, WHO consulted with member states, employees, and other parties to gather their views and feedback. In addition, WHO has commissioned three ongoing evaluations to provide input into the reform process. The first stage of one of the planned evaluations, conducted by WHO’s External Auditor and completed in March 2012, concluded that WHO’s reform proposals are generally comprehensive in addressing WHO challenges raised by member states and other stakeholders. WHO continues to consult with member states on priority-setting and governance proposals, which may require extensive deliberation and consensus from member states. In November 2011, the WHO Executive Board approved WHO’s management reform proposals in several areas, and requested further development of proposals in other areas. In May 2012, WHO developed a high-level implementation and monitoring framework that includes reform objectives, selected reform activities, 1-year and 3-year milestones, and intended results. Certain factors could impede WHO’s ability to successfully implement its reform proposals, including the availability of sufficient financial and technical resources and the extent of support from internal and external stakeholders. In May 2012, member states approved components of WHO’s reform agenda that encompass three broad areas—priority-setting, governance, and management. In the area of priority-setting, WHO seeks to focus its efforts and narrow the scope of its work to what it can do best. WHO also seeks to improve member states’ governance of the organization and strengthen its leadership role in the global health arena.
Management proposals include efforts to increase WHO’s effectiveness by improving its financing, human resources policies, results-based planning, and accountability and transparency mechanisms. Table 3 outlines WHO’s three areas of reform and some of WHO’s rationales for the reforms in each area. WHO’s reform agenda generally aligns with the challenges identified by WHO officials, member states, and other global health organizations we interviewed. According to WHO officials, member state representatives, and other stakeholders, some of the challenges facing WHO include (1) its lack of clear organizational priorities, (2) lack of predictable and flexible financing, and (3) highly decentralized organizational structure. For example, WHO officials and several global health stakeholders stated that, because most of WHO’s funding comprises voluntary contributions specified for certain activities, WHO’s ability to allocate resources according to its priorities is limited. WHO officials further commented that, while maternal and child health activities and achieving the health-related UN Millennium Development Goals are priorities for the organization, these areas are generally underfunded because donors specify funding for other program areas. In addition, stakeholders stated that WHO’s decentralized organizational structure and autonomous regional offices limit the regional and country offices’ accountability to WHO headquarters and the coherence of WHO’s efforts. WHO took a number of steps to consult with member states, employees, and other parties to gather their views and feedback on its reform agenda. These consultations are in accordance with a WHO Executive Board decision in May 2011 to establish a transparent, member-state-driven, and inclusive process of consultation to support the development of its reform agenda and proposals.
In a previous GAO report, we reported that early, frequent, and clear two-way communication of information with employees and stakeholders is considered a good practice when undergoing a major organizational change because it allows stakeholders to provide input and take ownership of the change. Figure 3 provides a timeline of WHO consultations with internal and external stakeholders on its reform agenda. We previously reported that a successful organizational transformation must involve employees and their representatives from the beginning to promote their ownership of and investment in the changes occurring in the organization. We also identified the use of employee teams comprising a cross-section of individuals who meet to discuss solutions to specific issues related to organizational change as a promising practice. We found that WHO has taken steps to develop and communicate its reform plans with internal stakeholders, including WHO employees at its regional and country offices. Specifically, WHO established a task force on reform consisting of staff members from headquarters, regional offices, and country offices to ensure organization-wide representation. The task force met twice, in June and September 2011, and offered its views on WHO’s organizational effectiveness. According to WHO, the task force’s feedback was incorporated into WHO proposals presented for the November 2011 special session on reform. In addition, WHO has a dedicated intranet site for staff to comment on the WHO reform process, and WHO officials have conducted six town hall meetings with staff since January 2011 to update them on the progress of reform. WHO used a variety of means to consult with external stakeholders, such as member states, on its reform agenda. As decided at the May 2011 Executive Board session, WHO used private web-based consultations to collect feedback from member states from June through November 2011 and from January through February 2012.
WHO also held a 3-day special session of the Executive Board in early November 2011 that was focused on reform. During this session, the WHO Director-General presented WHO’s proposals for reform, based on its consultations with member states, as well as a high-level road map for further development of the proposals. The Executive Board made decisions related to the three areas of reform and identified further work to be carried out by the WHO Secretariat. WHO also formally and informally briefed member state missions on its reform proposals and the progress of its reform plans. For example, WHO regional committee meetings that occurred during the fall of 2011 served as platforms for consultations with their member states. According to WHO officials, because reform has generally been a member state-driven process, WHO consultation with nongovernmental organizations and private industry has been more limited than its engagement with member states. However, WHO invited nongovernmental organizations in “official relations” with WHO to submit their comments on its reform agenda. According to WHO officials, it has also convened three informal dialogues with nongovernmental organizations since late 2011. At the November 2011 special session on reform of the WHO Executive Board, the Board decided to commission three ongoing evaluations to provide input to the reform process. WHO commissioned a two-stage independent evaluation, the first stage of which was conducted by WHO’s External Auditor during February and March of 2012. The first stage of the evaluation consisted of a review of the comprehensiveness and adequacy of WHO’s reform proposals in finance, human resources, and governance. The External Auditor concluded that WHO’s reform proposals were generally comprehensive in addressing concerns raised by member states and other stakeholders. 
The External Auditor also concluded that WHO followed an inclusive process of deliberations and that it held a wide range of consultations with stakeholders, but that it could have taken additional steps to consult with donors other than member states. The External Auditor recommended that WHO develop plans to prioritize the implementation of its various reform proposals; identify desired outputs, outcomes, and impact; explain the implications of new changes to affected parties; and maintain regular communication with those concerned about the progress of WHO’s reform proposals. Stage two of the evaluation is intended to focus, in particular, on the coherence between and functioning of WHO’s three organizational levels—headquarters, regions, and country offices—and to build on the results of the stage one evaluation. The second stage of the evaluation is also intended to inform reform discussions at the May 2013 World Health Assembly. In addition, at the request of the WHO Executive Board, the UN Joint Inspection Unit (JIU) is conducting evaluations of WHO’s management and administration practices and of the decentralization of WHO offices. The objectives of the JIU reviews are to (1) assess the management and administration practices in WHO and identify areas for improvement; and (2) assess the degree of decentralization and delegation of authority among the WHO headquarters and the regional and country offices, as well as current coordination mechanisms and interactions among the three levels. The results of the JIU reviews are intended to provide input into WHO’s decisions on reform. JIU aims to present a report covering its two reviews to the WHO Executive Board in January 2013. In May 2012, member states endorsed components of WHO’s reform agenda and requested additional work in certain areas. According to WHO, some of the reform proposals can be implemented relatively quickly, while others require more detailed consideration and planning.
WHO officials stated that decisions regarding WHO priority-setting and governance are driven by member states and will require their extensive deliberation and consensus. WHO continues to consult with member states on priority-setting and governance proposals, while taking steps to further develop and implement its management reform proposals. According to WHO officials, the organization is trying to identify criteria for establishing its priorities and determine the global health areas it should focus on and where it is best placed to add value. Since WHO’s creation in 1948, many other global health efforts have been initiated; thus, there is a need to ensure that WHO’s work is focused on the areas in which it has a “unique function” and comparative advantage. Accordingly, WHO aims to establish a clear set of priorities to guide its resource allocation processes and results-based planning activities. Over 90 member states convened at a session on priority-setting in February 2012. They reached consensus on the criteria and the categories of work that will serve as guidance for the development of WHO’s priorities, as laid out in its strategic framework and program budget to be approved by the World Health Assembly in May 2013. Agreed-upon criteria for determining WHO’s priorities include current health problems, including the burden of disease at the global, regional, or country levels; the needs of individual countries as articulated in their WHO country strategies; and WHO’s comparative advantage, including its capacity to gather and analyze data in response to current and emerging health issues. WHO has also established five technical categories that will provide the primary structure of its program budget and include (1) communicable diseases; (2) noncommunicable diseases; (3) promoting health through the life course; (4) strengthening of health systems; and (5) preparedness, surveillance, and response. WHO will define priorities in each of these categories. 
However, according to WHO, even when priorities are identified, there is no guarantee that funding for priority areas will be available, in part because of the common practice of specifying voluntary funds for particular activities. WHO has developed proposals for some of its governance reforms; however, other areas will require further development, consultation, and member state consensus. Proposals to improve WHO’s governance are twofold and entail (1) improving the effectiveness of WHO’s governing bodies, including its Executive Board, World Health Assembly, and regional committees; and (2) strengthening WHO’s leadership role in the global health arena. According to WHO, the Executive Board is currently prevented from fully exercising its oversight and executive role due to the demands it faces in preparing the agenda and work of the World Health Assembly. According to WHO, the number of agenda items before the World Health Assembly has risen over time, and a large number of resolutions have been adopted, some in areas that are not high priorities for global health. To make decision-making at WHO governing body meetings more strategic, WHO proposes structuring debate around its priorities. WHO proposals for harmonizing the operations of its regional committees include aligning their meeting agendas and connecting their work more closely with that of the Executive Board. WHO also plans to strengthen the oversight role of its committee that reviews program, budget, and administrative issues. Although WHO aims to strengthen its engagement with the many stakeholders directly involved in the global health sector and to improve the coherence of their efforts, it lacks a current proposal on how to achieve these aims.
WHO’s constitution describes two of its functions as (1) acting as the directing and coordinating authority on international health work and (2) establishing and maintaining effective collaboration with the UN, specialized agencies, and other global health organizations. Given the growing number of institutions—including foundations, partnerships, civil society organizations, and the private sector—that have a role in influencing global health policy, WHO reports that it is trying to determine how it can engage with a wide range of stakeholders. At the same time, according to WHO, it does not want to undermine its intergovernmental nature or open itself to undue influence by parties with vested interests. In 2011, WHO proposed a forum to explore ways in which the major actors in global health could work more effectively together; however, WHO stakeholders did not support this proposal. WHO’s concept paper proposed the idea of a “World Health Forum,” an informal, multi-stakeholder body composed of representatives of governments, civil society organizations, private sector entities, and other relevant stakeholders. However, according to WHO, feedback from member states on this proposal was generally unsupportive because they did not want to create a forum that could potentially impinge upon the intergovernmental nature of WHO. In addition, some nongovernmental organizations were concerned that the proposed forum would allow private sector interests to influence decision-making in WHO. In contrast, pharmaceutical industry representatives stated that the private sector has an important role to play in public health policy-making decisions. In May 2011, a group of nongovernmental organizations wrote a letter to WHO expressing concerns regarding the role of private bodies in the financing and governance of WHO.
The nongovernmental organizations also expressed concern that the WHO reform proposals at the time did not adequately address how WHO planned to manage potential conflicts of interest for private institutions. According to WHO, more discussion and consultation are necessary to identify how it will strengthen its engagement with external stakeholders. Since setting aside its World Health Forum proposal, WHO plans to consult with nongovernmental organizations on how it can effectively interact with them. In May 2012, member states requested that WHO present a draft policy document on its engagement with nongovernmental organizations to the Executive Board in January 2013. WHO also plans to hold a series of structured consultations concerning its relationship with private commercial entities and to develop a draft policy document on its guidelines for interacting with private entities to be presented to the Executive Board in May 2013. In the area of global health governance, WHO is also concerned that, in light of the growing number of global health initiatives and partnerships, a number of global health organizations have overlapping roles and responsibilities. For example, WHO recognizes a need to delineate the roles and responsibilities among itself; the Global Fund to Fight AIDS, Tuberculosis, and Malaria; and the GAVI Alliance, particularly in the area of providing technical assistance at the country level. WHO is involved in several formally structured partnerships, some hosted by WHO and others by independent entities that include WHO as part of their governing bodies. WHO reports that it aims to strengthen the Executive Board’s oversight over its partnerships.
Management reforms encompass a broad range of areas, including efforts to (1) improve the predictability and flexibility of WHO’s financing; (2) improve its human resource policies and practices; and (3) strengthen WHO’s results-based management, accountability, and transparency systems. According to WHO, the provision of stronger and more effective support to countries is a key outcome of its management reforms. At the November 2011 special session of the Executive Board, the Board approved WHO’s management reform proposals in several areas and requested the development of proposals in other areas. To improve the predictability and flexibility of its financing, WHO proposed setting up a dialogue with donors after the approval of its program budget by the World Health Assembly, followed by a financing dialogue in which donors publicly make funding commitments that are aligned with the budget. To improve its human resource policies and practices, WHO proposed the development of a revised workforce model and contract types; streamlined recruitment and selection processes; improved performance management processes; a staff mobility and rotation framework; and enhanced staff development and learning opportunities. To strengthen its accountability and transparency systems, WHO proposed a strengthened internal control framework and conflict of interest policy; increased capacity of its audit and oversight office; improved monitoring and reporting; and the establishment of an information disclosure policy and an ethics office. WHO has begun implementing some of its management reform proposals. For example, according to WHO, it took steps to strengthen the staffing of its internal audit and oversight office and developed a draft formal evaluation policy for consideration and approval by the WHO Executive Board.
According to WHO officials, although member states approved the implementation of many WHO management reform proposals, they requested that WHO further develop its proposals to increase the flexibility and transparency of WHO financing and present those proposals to the Executive Board in January 2013. Multiple challenges could affect the success of WHO reform implementation. WHO developed a high-level implementation and monitoring framework that included reform objectives, selected activities, 1-year and 3-year milestones, and intended results for consideration by the May 2012 World Health Assembly. For example, to improve WHO’s human resources practices, WHO set a 1-year milestone of conducting regular reviews of its staffing levels and a 3-year milestone of comprehensively integrating its human resources planning into its program planning and budgeting processes. WHO’s intended result for these efforts is staffing that is more closely matched to needs at all levels of the organization. We previously reported that, when undergoing an organizational change, it is important to establish implementation goals, a timeline, estimated costs for achieving the goals, and performance measures—all of which help build momentum and monitor progress. While the framework contains some of these elements, WHO has not yet identified the estimated costs of implementing its reform program or defined performance measures, which would serve as an objective means of tracking the organization’s progress in achieving its reform objectives. WHO officials have noted that they are currently developing an implementation plan that will include input from member states and regional and country offices. Officials also noted that the components of the reform agenda will be implemented at various stages and that, as its reform efforts proceed, WHO will provide periodic updates on its progress to its governing bodies.
Key challenges that could impede WHO’s ability to successfully implement its reform proposals include the following:

Availability of sufficient resources. According to WHO officials, implementation of its reform proposals will require financial and technical resources, and some of its reform proposals have significant resource implications, which must be carefully considered.

Extent of support from internal and external WHO stakeholders. Changes to WHO’s established structures and processes will require support and commitment from WHO’s internal and external stakeholders. Stakeholders raised concerns that, due to the autonomous nature of WHO’s regional offices, WHO’s reform proposals might not be implemented uniformly across the entire organization. In addition, WHO proposals to increase delegation of authority and strengthen its country offices will require the support of WHO’s regional governing bodies and offices. WHO will also require the support and consensus of member states to carry its reform proposals forward.

The United States has provided input into WHO’s reform agenda, particularly in the areas of transparency and accountability, but State’s tool to assess the progress of management reforms could be enhanced. On priority-setting, the United States has advocated for WHO to maintain its focus on certain functions such as setting norms and standards for international health. On governance, the U.S. delegation has commented on a range of proposals put forth by WHO, including those on engagement with other global health stakeholders. On management reforms, the United States has supported increased transparency and accountability mechanisms at WHO; however, State’s tool for monitoring progress in this area could be enhanced. In priority-setting consultations, the U.S.
delegation has advocated for WHO to maintain its focus on normative functions such as setting standards and guidelines, as well as other areas such as health security and communicable diseases. According to talking points used in preparation for governing body meetings, the U.S. delegation has stressed the need for WHO to remain focused on its core functions of setting standards and guidelines for global health. HHS officials noted that one of the main challenges facing WHO is the development of a narrower set of clear priorities and the need to focus on areas where it has a strategic advantage. According to State and HHS officials, the United States advocated that WHO maintain its focus on normative-setting functions such as setting norms and standards for international health. HHS officials stated that WHO is uniquely positioned to be the international authoritative body for establishing rules and technical standards and conducting monitoring activities. For example, WHO is the major international counterpart for CDC on outbreak control and identifying potential global health threats. Officials from State and USUN-Geneva also stated that U.S. priorities for WHO are focused on its normative functions of setting standards and guidelines. For example, State officials noted that the U.S. government wants WHO to focus on its processes to ensure safe medicines and vaccines, including WHO’s drug prequalification process and essential medicines list. These U.S. officials stated that WHO’s main mission should be to remain the international authority for global health on norms and standards. The U.S. delegation also advocated for a number of other health priorities for WHO, including improving health security and preventing communicable diseases. According to talking points used in preparation for governing body meetings, the U.S. delegation highlighted the importance of including health security and communicable diseases among WHO’s priorities.
In addition, State and USUN-Geneva officials cited health security as a key U.S. priority for WHO. State officials noted that U.S. priorities for global health involve protecting the health of Americans at home and abroad; the health security functions of WHO are thus important for achieving this goal. An official from USUN-Geneva noted that health security involves a number of components, such as enhancing pandemic preparedness, setting international health norms, and eradicating certain diseases such as smallpox, and that WHO is in a unique position to provide leadership in these areas. HHS and State officials also stated that WHO is a critical partner with the United States in fighting communicable diseases such as polio and influenza. A State budget document stated that the United States benefits from WHO-sponsored cooperation on vital aspects of global health security, including containing the HIV/AIDS pandemic, preventing the spread of avian influenza and other emerging diseases, and addressing long-term threats to health such as bioterrorism and the spread of chronic diseases. The United States has provided input on a range of WHO proposals in the governance area, according to a U.S. government document used in preparation for governing body meetings. For example, the U.S. delegation supported WHO proposals to improve engagement between WHO and outside stakeholders, such as other global health organizations. In addition, the United States commented on WHO proposals related to the frequency of governing body meetings and the linkages between regional and global policies and strategies. Specifically, the United States favors having the regions adapt global policies and strategies, rather than repeating the process of policy and strategy development at the regional level. In governance consultations, the U.S. delegation also pushed for a greater effort to define WHO’s strategic engagement in partnerships and the degree to which the partnerships meet WHO’s interests.
The United States has supported an agenda for greater transparency and accountability for WHO management reforms. According to State officials, State’s Bureau of International Organization Affairs takes the lead for the U.S. government on issues related to management reform and is responsible for pursuing management reforms throughout the UN system, including WHO. U.S. officials mentioned a number of U.S. goals in this area, including improving internal and external oversight mechanisms, budgeting and planning processes, and human resources and administrative reforms. According to State officials, cost effectiveness, efficiency, accountability, and monitoring and evaluation are key U.S. priorities for WHO reform. The U.S. delegation has taken steps to advocate for a number of reforms to improve WHO’s internal and external oversight mechanisms. According to State officials, the United States encouraged the reestablishment of an independent audit committee for WHO. The previous audit committee was disbanded in 2005 amid concerns about its effectiveness, and a revamped audit committee was established in 2009. Officials also noted that State supports WHO in establishing a dedicated ethics office, which is currently under consideration as part of the proposed reforms. For example, according to WHO officials, the U.S. delegation introduced a proposal that would require the newly formed ethics office to report directly to the Program Budget and Administration Committee, thereby enhancing the independence of the office. In addition, according to a USUN-Geneva official, the United States pushed for improved independent evaluation at WHO, and WHO agreed in November 2011 to conduct an independent evaluation as an input into the reform process. According to officials from USUN-Geneva, two additional management-related goals for the United States include improvements in the budgeting and planning process and human resources and administrative reforms.
Specifically, the United States has emphasized the need for WHO to make the necessary changes to its budgeting and planning system to ensure that WHO resources are aligned with its stated objectives. For example, according to WHO officials, the U.S. delegation offered an amendment at the May 2012 Executive Board meeting to hold a special meeting of the Program Budget and Administration Committee in late 2012 in order to discuss WHO financing as well as other reform issues. The U.S. delegation also has advocated for human resources and personnel reforms to ensure that WHO staff have the appropriate skill set for the organization’s current needs. In particular, according to talking points prepared for governing body meetings, the United States pushed for a new workforce model to distinguish long-term functions from time-limited projects and for a skills profile of staff at each level of the organization as a way to improve the organization’s effectiveness and flexibility. Officials from USUN-Geneva have met with officials from the WHO human resource office to advocate for reforms in this area. The United States also advocated harmonizing recruitment policies, increasing the speed of hiring, improving performance management processes, and enhancing staff development and learning. USUN-Geneva officials noted that WHO is taking steps to respond to the concerns and proposals raised by the United States and other member states as part of the reform agenda. State established an assessment tool to measure progress on transparency and accountability mechanisms, a tool that could assist in monitoring the progress of management reforms. State’s UNTAI tool is used to assess approximately 20 UN agencies, including WHO, to monitor progress on eight goals related to transparency and accountability, with a number of specific benchmarks in each category.
For example, the UNTAI goal “effective oversight arrangements” contains six benchmarks, including whether the external audit reports are publicly available online and whether there are term limits for the external auditor. According to State officials, the UNTAI tool is not intended to cover the full range of U.S. goals and priorities in the area of management reform. For instance, the assessment tool does not cover certain U.S. priorities, such as human resources and personnel systems, which are another key component of management reform. UNTAI is a useful tool for guiding U.S. priorities and engagement on certain management issues. According to State officials, State assigned WHO “above average” scores on UNTAI criteria relative to other UN organizations, and the assessment identified certain areas for improvement. State’s UNTAI assessment scored WHO well in areas related to whistleblower protection and conflicts-of-interest policies. However, the assessment also identified certain areas for improvement, such as maintaining an independent ethics function. According to State officials, as a result of the goals laid out in UNTAI, the U.S. delegation pushed for the establishment of an independent audit committee at WHO. A USUN-Geneva official noted that the UNTAI assessment is used to guide U.S. priorities and engagement on issues related to transparency and accountability and to sharpen the U.S. position in these areas. To conduct the UNTAI assessment, officials can use a number of strategies, according to State officials. State officials at the mission carry out the assessments, either by completing the tool themselves or by providing it to the UN organization to complete. For example, State officials at the mission can collect information to complete the assessment by interviewing officials from the UN agency, such as representatives of an ethics or management office.
In some cases, the UNTAI assessment tool is provided to UN agency representatives as a self-assessment exercise. According to State officials, the mission vets the completed assessments and sends them to Washington for review. For example, the most recent UNTAI assessment for WHO, covering fiscal year 2011, was completed by WHO representatives and verified by officials from USUN-Geneva and State in Washington, D.C. According to State officials, State has provided some general guidelines for completing UNTAI assessments to State officials at the mission in addition to providing technical and agency-specific advice on an as-needed basis. State provided information on the UNTAI goals and benchmarks through cables to officials in the field in 2008 and 2011. State officials noted that questions about the assessment tool are answered through correspondence between the missions and State in Washington. In addition, some State officials at the mission choose to provide additional information with the assessment; however, State does not require that supporting documentation accompany the assessments. State officials at the mission completing the assessments are asked to defend the assigned ratings to State officials in Washington and make an evidence-based case for the assigned scores. According to State officials, State consulted with officials in the field to develop the assessment tool and such a consultative process helped to facilitate a shared understanding among those completing and reviewing the assessments. State officials also noted that the process of reviewing the UNTAI reports in Washington helps to minimize errors, omissions, and inconsistencies, but that this process does not fully mitigate risks to data reliability. State officials mentioned that they are considering distributing a list of frequently asked questions to officers in the field to aid in completing the assessment in the upcoming fiscal year.
In addition, State officials we spoke with stated that the UNTAI tool was updated for 2011 and that they recognize that areas for improvement and clarification may still exist, as they often do with surveys and data collection instruments. An official at USUN-Geneva welcomed improved guidance, noting that this would assist officials in the field in completing the assessment tool. We found some weaknesses in State’s UNTAI assessment of WHO, including an ambiguous rationale for State’s scores on certain benchmarks. In reviewing State’s WHO UNTAI assessment, we could not find support for State’s scoring on 14 of 50 benchmarks. For example, we could not find support for State’s determination that WHO’s evaluation and management functions are autonomous. The comments submitted with the UNTAI assessment stated that most evaluation is decentralized and commissioned under individual technical areas. Therefore, the evaluation function is not functionally separate at an organizational level from those responsible for the design and implementation of the programs and operations evaluated, as specified in the UNTAI benchmarks. In addition, State’s WHO UNTAI assessment concluded that WHO consistently and objectively applied its policy on program support costs, which was approved by the member states; however, this policy does not appear to be consistently applied. The program support cost policy requires that 13 percent of all voluntary funding contributions be allocated to reimburse WHO for administering projects of voluntarily funded programs. However, according to WHO officials, many donors negotiate a program support charge averaging around 7 percent, rather than the standard rate of 13 percent, for their voluntary contributions. We also found that State’s definitions of certain benchmarks used in State’s UNTAI tool were unclear and may lead to data reliability concerns.
We analyzed State’s UNTAI tool to assess whether the tool is likely to gather accurate and consistent data. We found that 15 of 50 benchmarks in the UNTAI assessment tool required the judgment of the reviewer, due to the subject matter expertise required to complete the assessment, the lack of clarity on the benchmark definitions, or both. Certain benchmarks require an understanding of specific subject areas to accurately determine whether the benchmark has been met, and not all State officials completing the assessments have the required expertise in each area to make an accurate judgment. For example, the benchmark indicating whether or not the organization has an independent, transparent, effective, and fair bid protest process requires some knowledge related to acquisition and procurement rules to make such a determination. In addition, certain benchmarks use ambiguous or indefinite terminology, requiring the assessor to define the meaning of the terms before they can assess whether the benchmark has been met. For example, the determination of whether the organization has adequate staff and financial resources allocated to the evaluation function requires some judgment about the definition of adequate in this context. The UNTAI tool does not provide sufficient guidance to reviewers to assist in making these judgments and does not require documentation from the assessor to explain how such a judgment was made. See appendix II for further information on GAO’s analysis of the benchmarks in State’s UNTAI assessment tool. WHO has undertaken an ambitious and comprehensive agenda for reform; however, as with other organizations undergoing major transformational change such as broad reforms, WHO faces potential challenges throughout implementation. WHO’s high-level implementation and monitoring framework includes important elements for planning organizational change, such as reform objectives, 1-year and 3-year milestones, and intended results. 
In addition, WHO is currently developing a detailed implementation plan, which would help WHO achieve its goals, including the creation of performance indicators to measure progress and identification of the estimated costs for implementing its broad reform agenda. Thus, the success of WHO reform depends on WHO’s ability to sustain its efforts to establish such a comprehensive reform implementation plan, as well as on other essential elements, including consensus from member states and other internal and external stakeholders. The U.S. delegation has participated in numerous consultations on WHO reform and has been supportive of reforms to improve the efficiency and effectiveness of the organization. The United States has been particularly supportive of WHO’s focus on its core functions of setting standards and guidelines, as well as a set of reforms improving the transparency and accountability mechanisms of the organization. State’s UNTAI assessment is a useful tool for shaping U.S. engagement with WHO and monitoring WHO progress in implementing certain management reforms related to UNTAI goals and benchmarks. However, there are weaknesses in the UNTAI assessment tool that generate concerns over the reliability of the information generated in these assessments, including the ambiguous rationale for State’s scores in particular areas and the lack of clarity in the definitions of certain benchmarks. Therefore, ensuring that the performance information resulting from the UNTAI assessment is useful and accurate is crucial for State’s ability to continue advocating for improvements at WHO and to monitor WHO reform implementation in certain areas of management reform. To improve U.S.
assessment of WHO reform, we recommend that the Secretary of State enhance State’s guidance on completing its assessment tool for monitoring WHO’s progress in implementing transparency and accountability reforms by including, for example, a requirement to collect and submit supporting documentation with completed assessments. We requested comments on a draft of this report from the Departments of State and HHS, USAID, and WHO. State and WHO provided written comments that are reprinted in appendixes III and IV of this report. HHS and USAID did not provide written comments on this report. State generally endorsed the main findings and conclusions of our report and concurred that WHO has undertaken an ambitious and comprehensive agenda for reform. State also agreed that the United States has advocated for and provided input into WHO’s reform agenda, particularly in the areas of management, budgeting and planning, priority setting, governance, and financing. State agreed that its process for conducting its UNTAI assessment could be strengthened and accepted our recommendation to revise its guidance for completing these assessments. State noted that it is in the process of updating its assessment tool and plans to issue expanded guidance prior to the fiscal year 2012 ratings. State also clarified the context regarding its assessments. State noted that we overstated the need for subject matter expertise in determining whether some benchmarks on the UNTAI assessment tool have been met. We recognize that some officers in the field completing the assessment may benefit from the expertise of those in the Bureau of International Organization Affairs. However, we maintain that the UNTAI tool does not provide sufficient guidance to reviewers to assist in making these judgments and that this could lead to potential data reliability concerns.
Furthermore, according to an official at USUN-Geneva, improved guidance would be welcome and would help officials in the field complete the assessment tool. In addition, State mentioned the need to balance the requirement for supporting documentation with the need to minimize the reporting burden on missions, WHO, and other UN organizations. We recognize State’s concern about overburdening missions with reporting requirements and maintain that revised guidance would benefit both the missions and officers in Washington in preparing and reviewing these assessments. In its comments, WHO concurred with the main conclusions of our report and stated that our review provides an important framework against which WHO and its member states can evaluate the reform’s direction. WHO agreed that the reform proposals respond to the challenges identified by stakeholders, and that the consultation process has been inclusive and transparent. In addition, WHO noted that our conclusions broadly converge with those of the evaluation conducted by WHO’s External Auditor. WHO also recognized that the development of a detailed implementation plan will be critical to ensure successful institutional change. State, HHS, USAID, and WHO also provided technical comments that we have incorporated into this report, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretaries of State and HHS, the Administrator of USAID, the U.S. Permanent Representative to the UN in Geneva, the Director-General of WHO, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report examines (1) the steps the World Health Organization (WHO) has taken to develop and implement a reform agenda that aligns with the challenges identified by the organization, its member states, and other stakeholders; and (2) the input the United States has provided to WHO reforms. To assess the steps that WHO has taken to develop and implement a reform agenda that aligns with the challenges identified by the organization, its member states, and other stakeholders, we conducted interviews in Washington, D.C., and in Geneva, Switzerland, with WHO officials, representatives of member states to the WHO, and a range of WHO stakeholders. We obtained their views on the challenges WHO faces and whether these challenges align with those addressed in WHO’s reform agenda. We also solicited their views on the steps WHO has taken to consult with internal and external stakeholders in developing and implementing its reform agenda. We interviewed WHO officials based in its headquarters office, six regional offices, and five country offices, including representatives of WHO’s reform team, task force on reform, and headquarters staff association. We interviewed officials from the U.S. Departments of State (State) and Health and Human Services (HHS), including the Centers for Disease Control and Prevention (CDC); the U.S. Agency for International Development (USAID); and officials representing 15 other member states to WHO. We interviewed representatives from institutions such as the Global Fund to Fight AIDS, Tuberculosis, and Malaria and the GAVI Alliance; nongovernmental organizations, such as Doctors without Borders and the Institute of Medicine; and the Bill & Melinda Gates Foundation, one of the largest donors to the WHO.
We also met with representatives of UN agencies, such as UNAIDS and the United Nations Development Program; private sector entities, including U.S. and international pharmaceutical research associations; and two research centers that review global health issues. In addition, we reviewed WHO documents on its reform agenda and process, including its evaluation plans and its implementation and monitoring framework for reform. To examine U.S. support for WHO reforms, we met with officials from State, HHS, CDC, and USAID. We also conducted field work in Geneva, Switzerland, to meet with officials from USUN-Geneva, WHO, and other member state missions to learn about U.S. participation in WHO reform discussions and collaboration with other WHO member states. We collected and reviewed relevant U.S. government documents, including budget documents, strategies, position papers, talking points, and speeches. Based on interviews with U.S. government officials and U.S. government documents, we conducted an analysis to identify possible U.S. government priorities for WHO reform. We also collected and analyzed data from State, HHS, CDC, and USAID on U.S. funding contributions to WHO. We determined that these data were sufficiently reliable for the purposes of presenting specific agency contributions to WHO. To examine State’s United Nations Transparency and Accountability Initiative (UNTAI) tool to measure the performance and progress of UN agencies, including WHO, on transparency and accountability, we interviewed State officials at State’s Bureau of International Organization Affairs, which developed and uses the assessment tool. To examine the results of State’s assessment of WHO using the UNTAI tool, we interviewed officials at USUN-Geneva who are involved in completing the assessment of WHO. We also systematically reviewed State’s WHO UNTAI report to verify the basis for State’s determinations on each benchmark. 
Specifically, we examined State’s assigned score for each benchmark against the information WHO provided, noting benchmarks where the support for State’s determination was not clear. In addition, we reviewed the specific benchmarks used in State’s UNTAI tool to determine potential threats to the accuracy and consistency of the resulting assessments. To do so, we developed definitions of the types of judgment necessary to implement the tool, and two analysts independently applied those definitions to each benchmark. They then met to compare their coding and resolved any differences until they reached 100 percent agreement. Finally, we met with officials from State’s Bureau of International Organization Affairs about the results of our review of the benchmarks and our analysis of WHO’s assessment results. We conducted this performance audit from August 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We performed a review of State’s United Nations Transparency and Accountability Initiative (UNTAI) assessment tool to better understand its potential usefulness for supporting State’s monitoring of management reforms. The usefulness of the data collected by the tool is affected by the degree to which the resulting data are complete and accurate, which requires that the data gathered are clear and well defined enough to attain consistent results. We reviewed the specific benchmarks used in State’s UNTAI tool to determine potential risks to the accuracy and consistency of the resulting assessments.
In conducting our analysis, we developed a methodology for determining if the benchmarks in the assessment were clear and sufficiently defined to yield similar results when applied by different individuals. We found that the largest area of concern resulted from the judgment required when evaluating benchmarks. (A full description of our coding methodology and analysis can be found in app. I). We identified the following two types of judgment necessary to implement the tool for 15 UNTAI benchmarks:

1. Subject matter expertise - Benchmarks that require an understanding of a specific area of knowledge to make an accurate determination. These are benchmarks for which professional judgment is necessary to accurately determine if the benchmark has been met. For example, one benchmark related to the training and qualification of procurement officials would require subject matter expertise in procurement to understand what qualifications or training might be appropriate for procurement professionals.

2. Definitional judgment - Benchmarks that require a determination of scope, size, or meaning. These are benchmarks in which ambiguous terminology or imprecise terms are used, which the assessor must define to assess whether the benchmark has been met. For example, the procurement benchmark described above would require definitional judgment to determine if the level of qualifications and training would make an individual “qualified and trained.” Definitional judgment would also be needed to determine the proportion of the total number of professionals who must be “qualified and trained” for the agency to meet that benchmark.

We determined that 35 of the 50 benchmarks in UNTAI (70 percent) require neither subject matter expertise nor definitional judgment.
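The coding tally described in this appendix can be expressed as a simple classification count. The sketch below is illustrative only: the benchmark records are hypothetical stand-ins constructed to match the distribution reported in this analysis, and the field names are invented.

```python
# Minimal sketch of the benchmark-coding tally. Each benchmark is coded
# with two booleans: whether it requires subject matter expertise and
# whether it requires definitional judgment. The 50 records below are
# hypothetical, constructed to match the reported distribution
# (35 neither, 5 both, 4 expertise only, 6 definitional only).
benchmarks = (
    [{"expertise": False, "definitional": False}] * 35
    + [{"expertise": True, "definitional": True}] * 5
    + [{"expertise": True, "definitional": False}] * 4
    + [{"expertise": False, "definitional": True}] * 6
)

def tally(benchmarks):
    """Count benchmarks by the type of judgment they require."""
    counts = {"neither": 0, "both": 0, "expertise_only": 0, "definitional_only": 0}
    for b in benchmarks:
        if b["expertise"] and b["definitional"]:
            counts["both"] += 1
        elif b["expertise"]:
            counts["expertise_only"] += 1
        elif b["definitional"]:
            counts["definitional_only"] += 1
        else:
            counts["neither"] += 1
    total = len(benchmarks)
    shares = {k: round(100 * v / total) for k, v in counts.items()}
    return counts, shares

counts, shares = tally(benchmarks)
print(counts)  # {'neither': 35, 'both': 5, 'expertise_only': 4, 'definitional_only': 6}
print(shares)  # {'neither': 70, 'both': 10, 'expertise_only': 8, 'definitional_only': 12}
```

The computed shares reproduce the percentages cited in the analysis (70, 10, 8, and 12 percent of the 50 benchmarks).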
Of the remaining 15 benchmarks, 5 (10 percent) require both definitional judgment and subject matter expertise to be assessed, 4 (8 percent) require subject matter expertise, and 6 (12 percent) require definitional judgment, which may affect the accuracy and consistency of the results for those benchmarks. Of the nine benchmarks where subject matter expertise was required, we found knowledge would be needed in five relevant areas to complete the assessment: training and development, acquisitions and procurement, UN policies and practices, auditing and evaluation, and accounting standards. The accuracy and consistency of the individual determinations will depend, in part, on the assessors’ expertise in these five areas, and on their definitional judgment relative to other assessors. For example, one benchmark asks whether “funding arrangements facilitate effective and independent evaluations of the organization’s activities.” This benchmark requires subject matter expertise related to auditing and evaluation and definitional judgment about effectiveness to accurately assess the relevant UN agency. This judgment creates the potential that two assessors with different levels of subject matter expertise and who apply different definitional judgments could rate the same program differently. The potential variation in judgment and knowledge of the assessor could make the overall score of the UN agency vary from 2 to 5 points on UNTAI’s 5-point scale. Guidance on how to assess each of these benchmarks would serve to mitigate the need for judgment and reduce the risk of inconsistency in the assessments. 1. We recognize that some officers in the field completing the assessment may benefit from the expertise of those in the Bureau of International Organization Affairs. However, we maintain that the UNTAI tool does not provide sufficient guidance to reviewers to assist in making these judgments and that this could lead to potential data reliability concerns. 
Furthermore, according to an official at USUN-Geneva, improved guidance would be welcome and would help officials in the field complete the assessment tool. 2. We recognize State’s concern about overburdening missions with reporting requirements and maintain that revised guidance would benefit both the missions and officers in Washington in preparing and reviewing these assessments. In addition to the contact named above, Joy Labez (Assistant Director), Diana Blumenfeld, Debbie Chung, Lynn Cothern, Karen Deans, Mark Dowling, Etana Finkler, Emily Gupta, Steven Putansu, Jena Sinkfield, R.G. Steinman, Teresa Tucker, and Sarah Veale made key contributions to this report. Gifford Howland and Kara Marshall provided additional technical assistance.
WHO is the directing and coordinating authority for global health within the United Nations (UN) system. In 2012, member states approved a reform agenda addressing three areas: (1) priority-setting, to refocus its efforts and establish a process to determine priorities; (2) governance, to improve the effectiveness of its governing bodies and strengthen engagement with other stakeholders; and (3) management, including human resources, results-based planning, and accountability. The United States is a key participant in WHO's governing bodies and the largest donor, contributing about $219 million, or 22 percent, to WHO's assessed budget for 2010 and 2011, and more than $475 million, or about 16 percent, to WHO's voluntary budget.
As the largest financial contributor to the UN, the United States has advocated for comprehensive management reform throughout the UN system, including WHO. This report examines (1) the steps WHO has taken to develop and implement a reform agenda that aligns with the challenges identified by stakeholders and (2) the input the United States has provided to WHO reforms. GAO analyzed WHO and U.S. government documents and interviewed officials and stakeholders in Washington, D.C., and Geneva, Switzerland. In May 2012, 194 member states approved components of the World Health Organization's (WHO) reform agenda, encompassing three broad areas--priority-setting, governance, and management reforms--that generally address the challenges identified by stakeholders. According to WHO officials, member state representatives, and other stakeholders, some of the challenges facing WHO include its (1) lack of clear organizational priorities; (2) lack of predictable and flexible financing; and (3) highly decentralized organizational structure. In developing its reform agenda, WHO consulted with member states, employees, and other parties to gather their views and feedback. In addition, WHO has commissioned three ongoing evaluations to provide input into the reform process. The first stage of one of the planned evaluations was conducted by WHO's External Auditor, which concluded in March 2012 that WHO's reform proposals are comprehensive in addressing challenges faced by the organization. WHO continues to consult with member states on priority-setting and governance proposals, which generally require extensive deliberation and consensus from member states. In November 2011, the WHO Executive Board approved WHO's management reform proposals in several areas, and requested further development of proposals in other areas. 
In May 2012, WHO developed a high-level implementation and monitoring framework that includes reform objectives, selected activities, 1-year and 3-year milestones, and intended impacts. Certain factors could impede WHO's ability to successfully implement its reform proposals, including the availability of sufficient financial and technical resources and the level of sustained support from internal and external stakeholders. The United States has provided input into WHO's reform agenda, particularly in the areas of transparency and accountability, but the Department of State's (State) tool for assessing progress in the area of management reform could be enhanced. On priority-setting, the United States has advocated for WHO to maintain its focus on certain functions such as setting regulations and standards for international health. In consultations on governance, the U.S. delegation to WHO has commented on a range of proposals WHO has put forth, including those on engagement with other global health stakeholders. On management reforms, the United States has supported an agenda for greater transparency and accountability. The U.S. delegation has advocated for a number of reforms to improve WHO's internal and external oversight mechanisms and supported reforms in budgeting, planning, and human resources. Additionally, State has established an assessment tool to measure progress on transparency and accountability mechanisms, which is a useful tool for guiding U.S. priorities and engagement with WHO, and could be helpful for monitoring WHO's progress in implementing certain management reforms. However, we found weaknesses in State's assessment tool, including an unclear basis for State's determinations on certain elements in its assessment of WHO, as well as a lack of clarity in the definitions used in the assessment. 
State officials said that State provides guidance to officials completing these assessments but acknowledged that the process does not fully mitigate risks to data reliability. GAO recommends that the Secretary of State enhance State's guidance on completing its assessment tool for monitoring WHO's progress in implementing transparency and accountability reforms. State generally concurred with GAO's recommendation.
Federal agencies record their budget spending authority in fund accounts called Fund Balance with Treasury (FBWT), and increase or decrease these accounts as they collect or disburse funds. In the federal government, an agency’s FBWT account is the closest thing an agency has to a corporate bank account. The difference is that instead of a cash balance, FBWT represents unexpended spending authority in appropriations. In enacting appropriations, Congress authorizes agencies to spend from the various FBWT accounts to meet their missions. These fund accounts serve as a control mechanism to help ensure that agencies’ disbursements do not exceed the appropriated amounts. Reconciling FBWT activity is an important internal control in ensuring that all receipt and disbursement transactions have been recorded in the accounting records of government agencies. Reconciling agency FBWT activity records with Treasury activity records is important to establish the completeness of transactions reported and can be used to determine unexpended fund balances. Reconciliation is a necessary step in achieving funds control. A reconciliation consists of comparing two or more sets of records, researching and resolving any differences, and recording adjustments if necessary. Reconciliations are to be performed routinely so that any problems are detected and corrected promptly and differences are not allowed to age, thereby becoming increasingly difficult to research. DFAS, a component of DOD, has responsibility for providing finance and accounting services to all other DOD components, including the Air Force, Army, Navy, and Marine Corps. DFAS’ headquarters unit and five DFAS centers are responsible for accounting, disbursing, collecting, and financial reporting for DOD components. DFAS Denver, with support from its field locations, is specifically responsible for Air Force accounting functions. 
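The funds-control role of an FBWT account described above can be illustrated with a minimal sketch: the balance represents unexpended spending authority, collections increase it, and disbursements that would exceed the remaining authority are rejected. The class name and amounts are hypothetical; a real system would also track obligations and transaction-level detail.

```python
# Minimal sketch of an FBWT account as a funds-control mechanism.
# The balance starts at the appropriated amount; collections increase
# it, disbursements decrease it, and a disbursement exceeding the
# remaining spending authority is rejected. Hypothetical illustration.
class FBWTAccount:
    def __init__(self, appropriated):
        self.balance = appropriated

    def collect(self, amount):
        self.balance += amount

    def disburse(self, amount):
        if amount > self.balance:
            raise ValueError("disbursement exceeds spending authority")
        self.balance -= amount

acct = FBWTAccount(1_000.00)
acct.disburse(400.00)
acct.collect(50.00)
print(acct.balance)  # 650.0
```

The rejection check mirrors the control purpose of the fund accounts: ensuring that agencies’ disbursements do not exceed appropriated amounts.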
Air Force and other components’ personnel are responsible for funds control and purchasing the goods and services necessary to meet their missions. The Office of the Under Secretary of Defense (Comptroller) issues the DOD Financial Management Regulation containing DOD’s policies and procedures in the area of financial management. The DFAS centers and their field locations process cash, interagency, and intra-DOD transactions based on requests from military service personnel. Cash transactions primarily consist of paper checks issued, electronic funds transfers, and deposits. Interagency and intra-DOD transactions are primarily transfers of funds between federal entities and do not involve cash; however, they affect the FBWT accounts the same way cash transactions do. DFAS increases or decreases DOD’s individual FBWT account balances during the year as funds are collected or disbursed. DFAS is responsible for maintaining transaction-level details and a record of the unexpended balance for each of DOD’s appropriation accounts. Treasury also maintains accounting information on the Air Force’s and other federal agencies’ FBWT activity to prepare governmentwide financial reports. In an effort to ensure the integrity of these reports, Treasury directs agencies to reconcile their reported FBWT activity on a regular and recurring basis. Many disbursements from Air Force General Funds are made and reported to Treasury by other DOD services and federal agencies in accordance with pre-arranged agreements. These other DOD components and agencies process disbursements from Air Force General Funds for obligations that were established by Air Force personnel responsible for buying goods and services and then transmit information on their disbursements for Air Force to Treasury and separately to DFAS Denver. Federal agencies and the other DOD components disburse the funds first, and DFAS Denver field locations receive the detailed accounting transaction data from them later. 
This process is different from both normal bookkeeping operations in the private sector and keeping one’s personal checkbook. This DOD system is similar to having more than one person writing checks on the same bank account, which would create uncertainty in knowing the balance in the account. Increasing the difficulty in knowing the balance is DOD’s long-standing problem of not having integrated accounting systems, which routinely causes accounting data to be processed at different times. The following example illustrates the interagency disbursement system. Assume the Air Force authorizes the State Department to disburse Air Force funds. The State Department pays a bill for the Air Force and sends the information to Treasury. Treasury then subtracts the funds from the Air Force’s FBWT. Treasury reports the disbursement to DFAS Denver. However, DFAS Denver cannot record the related expense transaction or subtract the already disbursed funds from the FBWT account balance on its books until it receives sufficient details from the State Department. These details can come after the month-end Treasury report. When DFAS Denver receives the transaction data from the State Department, DFAS Denver sends the information to the Air Force field activity that authorized the disbursement. The Air Force field activity matches the disbursement to the original obligation and records the transaction. Each month, DFAS Denver compares the activity recorded in the Air Force FBWT account to the activity reported in the account by Treasury. Because multiple federal agencies and other DOD components can affect the Air Force’s fund balance accounts at Treasury, DFAS Denver’s recorded transaction activity routinely differs from Treasury’s, creating reconciling items at any point in time. These multiple participants in the disbursing and collecting process make the reconciliation process more complex than reconciling one’s personal checkbook. 
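The interagency example above can be restated as data: the disbursement reaches Treasury in one month, DFAS Denver posts it only after the supporting detail arrives, and in the interim a month-end comparison shows a reconciling item. A hypothetical sketch (the transaction ID and amount are invented):

```python
def reconciling_items(treasury_txns, dfas_txns):
    """Transactions recorded by one party but not (or not yet) the other."""
    return ([t for t in treasury_txns if t not in dfas_txns] +
            [t for t in dfas_txns if t not in treasury_txns])

# June: the State Department's payment for the Air Force has reached
# Treasury, but DFAS Denver has not yet received the transaction detail.
treasury_june = [("AF-1234", 50_000)]
dfas_june = []
print(reconciling_items(treasury_june, dfas_june))  # [('AF-1234', 50000)]: a timing difference

# July: the detail arrives, DFAS Denver posts it, and the item clears.
dfas_july = [("AF-1234", 50_000)]
print(reconciling_items(treasury_june, dfas_july))  # []: reconciled
```

Because many agencies and DOD components can write against the same Air Force fund accounts, items like this arise routinely and must be tracked until the related detail is received and posted.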
The reconciliation process consists of two parts. First, Treasury compares agency-reported receipts and disbursements to amounts reported by the Federal Reserve or commercial banking system. Treasury then provides agencies the details of any identified discrepancies in monthly comparison reports. Each agency is responsible for researching the differences between its and Treasury’s records. Once differences are resolved, agencies record any necessary adjustments to their FBWT accounts and report these adjustments to Treasury. To correct bank errors, agencies contact the bank or Treasury for assistance. Figure 1 summarizes this first part of the FBWT reconciliation process. For the second part of the reconciliation process, DFAS Denver compares the disbursement and collection transaction activity for each appropriation account in its records for the Air Force General Funds to another monthly report from Treasury that shows the activity reported by all agencies for each fund account. Since, as previously explained, Treasury receives some of the disbursement and collection activity directly from other entities before DFAS Denver, timing differences often occur. DFAS Denver then identifies and reconciles any timing differences or errors. Timing differences are resolved through the normal course of DFAS Denver’s staff recording transactions in Air Force records. To correct errors, including those made by other agencies in reporting Air Force fund account transactions to Treasury, DFAS Denver’s staff records adjustments to Air Force records and reports them to Treasury and the other agencies as appropriate. Figure 2 summarizes the second part of the FBWT reconciliation process. The reconciliation process at DFAS Denver and the other DOD components is complicated by the long-standing problem of a lack of integrated systems within and among the components. DFAS Denver currently depends on file extracts from multiple systems for its reconciliations. 
DFAS Denver’s staff analyzes the numerous extracts and determines the causes of the differences in the multiple systems. To be effective, this reconciliation process must be comprehensive to overcome and compensate for the lack of integration among its systems. Our objectives were to determine (1) the progress DFAS Denver has made in improving its processes for reconciling the transaction activity in the Air Force General Funds and (2) whether any of DFAS Denver’s reconciliation concepts or policies could be used in reconciling the Fund Balance with Treasury activity of the other DOD components. Our review focused on the General Funds reconciliations and did not include Air Force Working Capital Funds reconciliations. To determine the extent of progress DFAS Denver has made in improving the reconciliation of the transaction activity in the Air Force General Funds, we met with DFAS Denver officials and observed DFAS Denver procedures for monitoring the field reconciliation efforts. We obtained reports of outstanding differences for the Air Force General Funds from DFAS Denver and Treasury. We determined whether existing DFAS Denver policies, procedures, and practices reflected the need for improvements outlined in prior year audit reports issued by GAO, DOD’s Inspector General, and the Air Force Audit Agency. To determine whether the DFAS Denver reconciliation concepts, policies, and practices could be used across DOD, we met with DFAS headquarters, Denver, Cleveland, and Indianapolis officials to identify the similarities and differences among DFAS centers and the FBWT reconciliation initiatives in place at each center. To determine the progress other centers had reported in reconciling their FBWT accounts, we obtained reports of outstanding differences from DFAS headquarters and Treasury. 
The scope of our review at DFAS Denver focused solely on evaluating the processes used to reconcile the Air Force General Funds activity and did not include detailed testing of its reconciliations or of data provided by Treasury or the Air Force Audit Agency. Also, we did not determine whether DFAS Denver’s policies and processes are uniformly in place throughout all of its field locations. We did not audit the Air Force’s FBWT reconciliation and thus provide no conclusions as to whether the processes discussed in this report are being effectively performed. We performed our work from August 2000 through April 2001 in accordance with generally accepted government auditing standards. Written comments on a draft of this report were received from the Director of Accounting, DFAS, and have been reprinted in appendix I. DFAS Denver has developed a two-part process for reconciling its FBWT receipt and disbursement activity that reconciles differences in (1) cash transactions identified by Treasury and (2) Air Force transaction records compared to transaction activity reported to Treasury. Over the past few years, by increasing management attention on the reconciliation process, DFAS Denver has made improvements in both parts of the process and has reported a corresponding reduction in its unreconciled differences. However, its reconciliation processes are not yet fully refined. In prior years, auditors identified and reported weaknesses in DFAS Denver’s ability to effectively reconcile the cash activity part of its FBWT reconciliation. 
For example, in reporting on the results of its audit of the Air Force’s fiscal year 1997 financial statements, the Air Force Audit Agency noted that DFAS Denver field personnel did not promptly research and correct deposit and disbursement differences identified by Treasury. In addition, the Air Force Audit Agency identified internal control weaknesses for fiscal year 1998 related to the (1) monitoring and reconciliation of check totals, (2) timely reporting of checks, and (3) prompt resolution of check amount discrepancies. In recent years, DFAS Denver has increased the management attention given to resolving cash differences identified by Treasury, which is part one of the FBWT reconciliation process. At the heart of its efforts are several initiatives to improve its processes for identifying, researching, and resolving the differences. For example, DFAS Denver has implemented new procedures for reconciling deposit and electronic funds transfer transactions. Each month, DFAS Denver produces exception reports containing specific transactions that have been reported to Treasury (1) by the Federal Reserve but not by DFAS field personnel, (2) by DFAS field personnel but not the Federal Reserve, and (3) in different months, for different amounts, or otherwise reported differently by DFAS field personnel and the Federal Reserve. DFAS Denver provides these lists each month to field accounting personnel to aid them in resolving differences. DFAS Denver personnel monitor the timeliness of field resolution of these differences and contact field personnel regarding aged unresolved amounts. In addition to improving its reconciliation processes for deposits and electronic funds transfers, DFAS Denver also has improved its methods of monitoring differences related to paper checks. 
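The monthly deposit and electronic funds transfer exception reports described above amount to a two-way match on transaction identifiers, with mismatched amounts flagged separately. A hypothetical sketch (the transaction IDs and amounts are invented):

```python
def exception_report(fed_items, dfas_items):
    """Sort transactions into the three exception categories: reported by
    the Federal Reserve only, by DFAS field personnel only, or by both
    but differently. Inputs map transaction IDs to reported amounts."""
    fed_only = {k: v for k, v in fed_items.items() if k not in dfas_items}
    dfas_only = {k: v for k, v in dfas_items.items() if k not in fed_items}
    mismatched = {k: (fed_items[k], dfas_items[k])
                  for k in fed_items.keys() & dfas_items.keys()
                  if fed_items[k] != dfas_items[k]}
    return fed_only, dfas_only, mismatched

# Hypothetical month of deposit activity.
fed = {"D001": 10_000, "D002": 2_500, "D003": 700}
dfas = {"D002": 2_500, "D003": 770, "D004": 1_200}
fed_only, dfas_only, mismatched = exception_report(fed, dfas)
print(fed_only)    # {'D001': 10000}: Federal Reserve only
print(dfas_only)   # {'D004': 1200}: DFAS field personnel only
print(mismatched)  # {'D003': (700, 770)}: reported for different amounts
```

Transactions reported identically by both parties (D002 here) need no action; the three exception lists are what field accounting personnel would research.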
DFAS Denver receives a Treasury notification of individual paper check errors throughout the month as Treasury identifies discrepancies between the check amount reported by DFAS Denver and the amount paid by the bank. In addition, Treasury also reports these check discrepancies in a summary comparison report sent to DFAS Denver after month-end. DFAS Denver has added a procedure to monitor and correct the individual check errors prior to receiving the monthly summary comparison reports from Treasury. Other initiatives DFAS Denver has undertaken include adding a new section to the mandatory training class for new disbursing officers describing procedures for clearing differences reported by Treasury; adding a section to the DFAS Web page providing detailed instructions for DFAS Denver and field accounting personnel for resolving cash transaction differences; increasing the use of electronic funds transfers rather than paper checks (issuing electronic funds transfers is a more automated process than issuing paper checks, and, since the transaction occurs immediately, timing differences are virtually eliminated); and issuing memorandums requiring field personnel to increase the priority given to resolving FBWT differences identified by Treasury. These proactive initiatives have been a major factor in DFAS Denver’s reported reduction in cash transaction discrepancies. For example, according to Treasury reports as of September 30, 2000, the current net unresolved cash differences from 2 months to 1 year totaled less than $400,000 compared to $26 million as of September 30, 1998. DFAS Denver’s experience also provides evidence that not performing routine reconciliation can result in differences getting so old that they become difficult to reconcile. Treasury records as of September 30, 2000, show $56 million still outstanding in net unreconciled cash differences that occurred over 5 years ago before the new reconciliation procedures were in place. 
DFAS Denver has found it difficult to locate supporting documentation to determine the causes of these old differences. The records also show that DFAS Denver has only $260,000 net unreconciled differences that are from 1 to 5 years old. Every month, timing differences occur between when Treasury and Air Force receive and record transactions. These differences are caused by the lack of integrated accounting systems and routine business processing. Consequently, DFAS Denver must routinely reconcile its transaction records to those at Treasury. Prior to fiscal year 1998, DFAS Denver was not reconciling these monthly differences in the two sets of records. Over the past 3 years, DFAS Denver developed the second part of the overall reconciliation process to reconcile the difference between its records and those at Treasury as shown in figure 3. DFAS Denver’s goal for this process is to identify the transactions that make up the difference, categorize them to facilitate reconciliation, and track them until they are reconciled. The first step is to determine the difference between Treasury and Air Force records each month. DFAS Denver does this by comparing the total Air Force disbursement and receipt transactions in Treasury’s records to the comparable transactions in Air Force accounting records and calculating the difference. This is the amount that has to be reconciled, which was $1.6 billion at September 30, 2000. Step two is a monthly data analysis process to identify the specific transactions making up the difference calculated in step one. DFAS Denver refers to the calculated difference in the two sets of records as the undistributed difference. The term “undistributed” applies to those transactions that have not yet been reconciled—recorded or corrected in the accounting records. To identify the transactions, DFAS Denver uses data retrieval and analysis tools to extract the transactions in DFAS Denver’s Merged Accounting and Fund Reporting system. 
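The step-one calculation above reduces to a per-appropriation subtraction: the undistributed difference is Treasury's reported net activity less the comparable activity in Air Force records. A hypothetical sketch (the appropriation labels and amounts are illustrative, not the actual $1.6 billion figure):

```python
def undistributed_difference(treasury_activity, air_force_activity):
    """Step one: for each appropriation, net disbursement and receipt
    activity per Treasury's records minus the comparable activity
    recorded in Air Force accounting records."""
    return {appn: treasury_activity[appn] - air_force_activity.get(appn, 0)
            for appn in treasury_activity}

# Hypothetical monthly net activity by appropriation, in dollars.
treasury = {"Appn 1": 900_000_000, "Appn 2": 410_000_000}
air_force = {"Appn 1": 880_000_000, "Appn 2": 410_000_000}
print(undistributed_difference(treasury, air_force))
# {'Appn 1': 20000000, 'Appn 2': 0}: $20 million awaiting identification
```

The nonzero amounts are what step two must then explain transaction by transaction.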
The function of this system is to track transactions from the time DFAS Denver receives them from either Treasury or the originators of the transactions until Air Force personnel reconcile them. Once DFAS Denver identifies the transactions that make up the unreconciled difference, it sorts them by appropriation into various categories to help speed and simplify the reconciliation process. Sorting the transactions into categories with common elements facilitates tracking the transactions until the field-level accounting staff fully reconciles them by either recording them in Air Force accounting records, making corrections to the records, or submitting adjustments or corrections to the originators of the transactions. DFAS Denver has developed 11 categories that reflect the nature of transactions. For example, the categories into which the unreconciled transactions are sorted include the following. Army-Navy Current Month. The Army and the Navy, which make payments on behalf of the Air Force, cite Air Force appropriations when they submit the payment information to Treasury. Since Air Force field locations often have not yet received the detailed accounting transactions for these payments, these transactions are placed in this category awaiting reconciliation. Rejects. This category is used when Air Force field locations cannot verify payments made by someone else on their behalf. This can happen when they determine that they have not been provided sufficient supporting documentation to post the transaction or the payment belongs to another accounting station. The field locations “reject” the individual payment back to DFAS Denver for transmission either back to the originator of the payment or to another field location. Interfund. DOD components sometimes use the interfund system to sell materials to each other. 
If the seller and buyer do not record the transactions in the same month, which often occurs, it automatically appears as a reconciling difference between DFAS Denver’s records and Treasury’s records and would be placed in this category for reconciliation purposes. Reducing the total undistributed amount is important because fewer transactions will have to be tracked until reconciled. However, the use of nonintegrated systems and routine business processes does not permit the simultaneous processing of transactions and affects when transactions are recorded. Therefore, eliminating the undistributed amount entirely is not possible because timing differences will continue to cause a difference between DFAS Denver’s and Treasury’s records that will need to be identified and reconciled. DFAS Denver’s analysis cannot yet identify all the transactions that make up the total undistributed difference. The amount that is not identified is referred to as the “variance.” Eliminating the variance is important because the variance constitutes the amount of receipt and disbursement activity for which DFAS Denver cannot identify transactions. Without first identifying the transactions, DFAS Denver cannot reconcile the activity. As figure 4 shows, DFAS Denver has reported progress in reducing both the variance and the total undistributed amount. As of September 30, 2000, the reported variance for all appropriations totaled $35 million, or about 2 percent of the $1.6 billion in total difference in Treasury’s and DFAS Denver’s records. As of September 30, 1998, the reported variance was $386 million, or almost 10 percent of the $3.7 billion in total difference. At the time of our review, DFAS Denver had additional efforts under way to refine its methodology for identifying transactions for the remaining variance. 
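Step two, sorting identified undistributed transactions into categories and measuring the unidentified remainder as the variance, can be sketched as follows. The category names follow the report; the toy sorting rules, transactions, and amounts are invented.

```python
def categorize(txn):
    """Toy rules standing in for DFAS Denver's 11 transaction categories."""
    if txn["source"] in ("Army", "Navy"):
        return "Army-Navy Current Month"
    if txn["rejected"]:
        return "Rejects"
    if txn["interfund"]:
        return "Interfund"
    return "Other"

# Transactions identified so far for one appropriation (hypothetical).
transactions = [
    {"id": "T1", "amount": 600, "source": "Army", "rejected": False, "interfund": False},
    {"id": "T2", "amount": 250, "source": "AF", "rejected": True, "interfund": False},
    {"id": "T3", "amount": 100, "source": "AF", "rejected": False, "interfund": True},
]

by_category = {}
for txn in transactions:
    by_category.setdefault(categorize(txn), []).append(txn["id"])

total_difference = 1_000  # from step one (hypothetical)
variance = total_difference - sum(t["amount"] for t in transactions)
print(by_category)  # {'Army-Navy Current Month': ['T1'], 'Rejects': ['T2'], 'Interfund': ['T3']}
print(variance)     # 50: activity for which no transactions were identified
```

Driving the variance toward zero means the categorized transactions fully explain the difference computed in step one, leaving only identified items to track in step three.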
DFAS Denver officials told us that by April 2001 they had identified causes and potential explanations for all but $2 million of the $35 million variance as of September 30, 2000. However, they will not be able to prevent the variance from continuing until they have learned how to consistently identify the types of transactions that were causing the variance. DFAS Denver’s analysis of undistributed transactions is crucial to part two of the overall reconciliation process, and the progress DFAS Denver has made in reducing the reported variance is commendable. However, the undistributed analysis is incomplete because two types of transactions are not subject to the analysis. As discussed in the following section, adding these transactions to the analysis is one of the needed refinements to the reconciliation process. Step three is the tracking process. DFAS Denver tracks all undistributed transactions until its field-level accounting staff fully reconciles them by either recording them in Air Force accounting records or making corrections to Air Force or Treasury records. In this step, DFAS Denver transmits lists of undistributed transactions in aging categories to the field- level accounting stations each month and requires them to return the lists annotated with a proposed resolution for each transaction. To complete the loop, DFAS Denver personnel are to monitor that the annotated resolution does, in fact, take place by examining subsequent accounting cycles for evidence of the action. DFAS Denver measures the success of its tracking efforts against performance metrics for reconciling transactions established by DFAS headquarters. These time frame performance metrics range from 60 to 180 days, depending on the type of transaction. 
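Step three's tracking against the time frame performance metrics amounts to aging each open transaction and flagging those past the metric for their type. A hypothetical sketch (the 60- and 180-day bounds come from the report; the transaction types paired with them, and the transactions themselves, are invented):

```python
def overdue(transactions, as_of_day, metrics):
    """Return IDs of undistributed transactions still unreconciled
    past the performance metric (in days) for their type."""
    return [t["id"] for t in transactions
            if as_of_day - t["received_day"] > metrics[t["type"]]]

# Hypothetical metrics and open transactions, with days counted from an
# arbitrary day zero.
metrics = {"cash": 60, "interagency": 180}
open_txns = [
    {"id": "T1", "type": "cash", "received_day": 0},
    {"id": "T2", "type": "cash", "received_day": 80},
    {"id": "T3", "type": "interagency", "received_day": 0},
]
print(overdue(open_txns, as_of_day=100, metrics=metrics))  # ['T1']: 100 days old, past the 60-day metric
```

Items on the overdue list are the ones DFAS Denver would send to field accounting stations for annotation with a proposed resolution, then monitor in subsequent accounting cycles.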
DFAS Denver data indicate that about 85 percent of the total undistributed transactions are reconciled within 60 days, so DFAS Denver’s primary focus is on the other 15 percent, although it tracks all undistributed transactions until they are reconciled. DFAS Denver has reported progress in reducing the volume of transactions that fall outside the established time frame performance metrics for reconciling identified transactions. As figure 5 shows, according to DFAS Denver reports, it reduced the portion of the undistributed transactions shown in figure 4 that were not reconciled within the time frame performance metrics from $234 million at the end of fiscal year 1998 to $37 million at the end of fiscal year 2000. Although DFAS Denver has made progress in developing its reconciliation process to fully reconcile the differences between Treasury’s and its own FBWT records for the Air Force General Funds, it has not yet achieved that goal. First, DFAS Denver has not documented the overall reconciliation process with explanations of the individual steps, their objectives, and their associated comparisons and reconciliations. Such a description could provide both a road map for the entire process and a means for ensuring that the FBWT activity reconciliation is complete and thorough. Without a complete description of the process, important activities could be omitted. In addition, the loss of one or more of the few key people who understand the entire process, especially part two, would jeopardize DFAS Denver’s ability to maintain its reconciliation progress and to continue needed refinements. Second, in addition to the need to document the overall reconciliation process, we identified three refinements to part two of the process that are necessary. Identification of Transactions for Remaining Variance. 
As discussed above, DFAS Denver has not yet determined the causes of the remaining difference between its and Treasury’s receipt and disbursement activity or the transactions that make it up. Because all transactions have not been identified and because the variance fluctuates somewhat from month to month, further analysis is necessary. Until DFAS Denver can identify all of the transactions that make up the variance, it will not be able to fully reconcile the difference. Undistributed Analysis Incomplete. DFAS Denver has not included two types of unreconciled transactions in its analysis of the undistributed transactions because they are not in the Merged Accounting and Fund Reporting system. The first type consists of the transactions that field-level personnel have accepted, processed, and entered into the Air Force accounting records but not yet matched to obligations. Field-level accounting staffs recorded these transactions even though DFAS does not consider transactions ready to be recorded in Air Force’s official accounting records until they have been matched with their obligations. The second type consists of the transactions recorded temporarily in Treasury suspense accounts. DFAS Denver and Treasury use these suspense accounts to record receipt or disbursement transactions pending identification of the fund holders. DFAS Denver includes these transactions in the undistributed analysis at fiscal year-end but not routinely every month. Until both types of transactions are routinely identified by appropriate data extracts and included in the monthly analysis of undistributed transactions, DFAS Denver will not have assurance that it has identified the complete universe of transactions that must be reconciled. During our review, DFAS Denver agreed to begin including both types of transactions in the monthly undistributed analysis. Lack of Documentation for Specific Desk Procedures. 
DFAS Denver has not fully documented, with how-to desk procedures, some of the steps and activities within the second part of the reconciliation process. For example, the various categories of transactions displayed in the Undistributed Report and the Merged Accounting and Fund Reporting system files from which they are extracted are defined and documented, but the techniques and specific procedures for performing the analysis and developing the report are not. DFAS Denver began a project to document these techniques and procedures during our review. Recreating these management tools without documentation would be difficult for someone who was not familiar with the process. How-to procedures can ensure that the process can be replicated over time. Furthermore, in some instances, only one or two key individuals developed these and other procedures and know how to perform them. Without complete documentation, the loss of a few key people could put DFAS Denver at risk of losing its momentum in reconciling its FBWT activity. The FBWT reconciliation concepts, policies, and procedures developed at DFAS Denver could be used by other DFAS centers, which have not made as much progress in reconciling their FBWT activity, according to DFAS reports and officials. The other centers have not been as successful as DFAS Denver has in identifying transactions that constitute the undistributed difference between their DOD components’ accounting records and Treasury’s. For example, even though their overall General Funds operating expenditures are comparable to Air Force’s, the Army and Navy variances—the amounts for which they cannot determine specific transactions—are substantially larger. As stated previously, DFAS Denver reported a $35 million variance as of September 30, 2000, for the Air Force. In comparison, DFAS Cleveland reported a $5.8 billion variance for the Navy, and DFAS Indianapolis reported $664 million for the Army. 
DOD’s legacy accounting systems complicate the FBWT reconciliation process. These systems are not integrated, which causes timing differences in processing receipts and transactions since all DOD components pay bills for each other. In addition, the systems were not designed to facilitate the reconciliation process. Although DOD has had plans under way for years to create integrated systems, it is likely many years away from implementing fully integrated financial management systems. Nevertheless, the DFAS centers must reconcile their FBWT activity. Each DFAS center processes billions of dollars of transactions each month that must be accounted for and reconciled. Consequently, the centers must create auditable FBWT activity reconciliation processes. To facilitate its efforts, DFAS Denver has designed interim workaround measures, such as its data extracts, to identify undistributed transactions to create useful reconciliation data. DFAS Denver’s efforts have demonstrated that current DOD systems can be adapted for routine financial reconciliations if used creatively and with perseverance. Transferring DFAS Denver’s experiences to the other centers is reasonable even though each center relies on different legacy systems, which cause them to operate differently to accomplish the same tasks. Since each center’s systems are different, it is the concepts and general approach to developing processes and practices developed at DFAS Denver that can be adapted and utilized, rather than the specific steps in the processes. An example is DFAS Denver’s concept of identifying, categorizing, and tracking undistributed transactions as illustrated in figure 3. After comparing their records to Treasury’s, the other DFAS centers could first identify the individual transactions that make up the difference between their records and Treasury’s. After identifying the individual transactions, they could categorize them by type to facilitate reconciliation. 
Finally, they could track and monitor the transactions until they are reconciled at the field level. DFAS Denver has demonstrated that increased management attention can, indeed, result in positive change. Reconciliation of FBWT—a key step in DOD’s ability to establish adequate funds control and financial accountability—will only be achieved if the other centers follow DFAS Denver’s lead and provide the needed attention to this area. Increased attention, improved monitoring, and adaptation of the concepts used by DFAS Denver will help all of the DOD components to reconcile their FBWT transaction activity. In addition, a comprehensive reconciliation process can facilitate achieving a successful audit of only that year’s FBWT transaction activity. However, one year’s successful audit of the reconciliation of FBWT activity will not result in an auditable FBWT financial statement balance because the issue of verifying and auditing the beginning balances will remain. The balances in each FBWT account roll forward from year to year until the account is closed, which can be 5 years or more, depending on the type of appropriation. For example, the DOD-wide financial statement reported a FBWT balance of $178 billion as of September 30, 2000. Some portion of this can be attributed to the beginning balance of $175 billion in FBWT brought forward on October 1, 1999. Although one year’s audit of current activity will not resolve this issue, a series of successful audits can. After a number of years, if current activity is routinely reconciled and audited, the balances from prior years when reconciliations were not routinely being performed will ultimately be immaterial. One other issue that affects the reliability of the amount of DOD funds available for expenditure in each appropriated fund is DOD’s practice of making large amounts of adjustments to closed accounts. 
For example, as we discussed in our May 2001 testimony on DOD financial management, DOD reported $2.7 billion of adjustments to closed appropriation accounts in fiscal year 2000. Although closed accounts are not included in FBWT balances, we reported that DOD made frequent adjustments to move transactions from current accounts and charge them to closed accounts. Until all of DOD’s transactions are accurately recorded, the reliability of FBWT amounts will remain questionable. DFAS Denver has made progress in developing an auditable process capable of fully reconciling its FBWT activity, but it has more to do to finish the job. For example, DFAS Denver has not yet identified all of the transactions that make up the difference between its and Treasury’s receipt and disbursement activity. In addition, a significant amount of the progress has been highly dependent upon the work of a few key people, but their efforts have not been captured in detailed documented procedures. Consequently, if these people were lost, DFAS Denver would risk being unable to institutionalize these processes and losing the momentum it has gained. The reconciliation process is convoluted in that it involves extracting and comparing data from several DOD systems, which are not fully integrated. However, other DOD components do not have to wait for future system enhancements to institute good financial management practices. DFAS Denver’s experience demonstrates that an effective combination of people, policies, procedures, and controls can serve as a short-term solution to the larger and longer term problem of overhauling inadequate systems. The concepts used at DFAS Denver can be adapted by other DFAS centers. However, each center will have to develop its own procedures, data extracts, comparisons, and reconciliations based on the systems it uses. 
To further improve the reconciliation of the activity in Air Force Fund Balance with Treasury General Fund accounts and to ensure that the process is comprehensive and institutionalized with continuity of effort, we recommend that the Director, DFAS, direct the Director, DFAS Denver, to further refine the reconciliation process to identify and include all transactions that make up the differences between Air Force and Treasury records and resolve these differences within established time frames; and document the entire Fund Balance with Treasury General Funds activity reconciliation process, including specific procedures for the various reconciliations within the overall process. To improve and expedite the reconciliation processes at other DOD components, we recommend that the Director, DFAS, require the other DFAS centers to adapt DFAS Denver’s reconciliation concepts and practices to their own environments. To ensure that each center’s adaptations are consistent and in accordance with DFAS policies, we further recommend that the Director, DFAS, provide leadership and assistance in transferring knowledge from DFAS Denver to the other centers. In written comments on a draft of this report, DOD concurred with our recommendations. DOD’s response described the actions that DFAS has underway to address each recommendation and provided estimated completion dates. We are sending copies of this report to the Under Secretary of Defense (Comptroller and Chief Financial Officer); the Commissioner of the Financial Management Service, Department of the Treasury; and the directors of the four other DFAS Centers: DFAS Cleveland, DFAS Columbus, DFAS Indianapolis, and DFAS Kansas City. Copies will be made available to others upon request. Please contact Linda Garrison at (404) 679-1902 or by e-mail at [email protected] if you or your staffs have any questions about this report. 
GAO staff making key contributions to this report were Ray Bush, Francine DelVecchio, David Shoemaker, and Carolyn Voltz. Financial Audit: Issues Regarding Reconciliations of Fund Balances With Treasury Accounts (GAO/AIMD-99-271, September 17, 1999). Financial Audit: Issues Regarding Reconciliations of Fund Balances With Treasury Accounts (GAO/AIMD-99-3, October 14, 1998). Financial Audit: Reconciliation of Fund Balances With Treasury (GAO/AIMD-97-104R, June 24, 1997). Performance Measures for Disbursing Stations (Report No. D-2001-024, December 22, 2000). Disclosure of Differences in Deposits, Interagency Transfers, and Checks Issued in the FY 1999 DOD Agency-Wide Financial Statements (Report No. D-2000-123, May 18, 2000). Accounting for Selected Assets and Liabilities – Fund Balance With Treasury, Fiscal Year 1999 (99053001, August 28, 2000). Accounting for Selected Assets and Liabilities – Fund Balance With Treasury, Fiscal Year 1998 (98053001, January 6, 2000). Accounting for Selected Assets and Liabilities, Fiscal Year 1997 Air Force Consolidated Financial Statements (97053001, September 3, 1998). The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Orders by visiting: Room 1100 700 4th St., NW (corner of 4th and G Sts. NW) Washington, DC 20013 Orders by phone: (202) 512-6000 fax: (202) 512-6061 TDD (202) 512-2537 Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
The Department of Defense (DOD) has had longstanding problems in reconciling the transaction activity in its Fund Balance with Treasury accounts. These reconciliation problems hamper DOD's ability to prepare auditable financial statements and have prompted GAO to place DOD financial management on its list of government activities at high risk for waste, fraud, abuse, and mismanagement. In August 1998, DOD developed a strategic plan to improve the reconciliation process for the activity in its Fund Balance with Treasury accounts. DOD reported that the Defense Finance and Accounting Service's (DFAS) Denver Center, which provides support for the Air Force, has made the most progress in implementing this plan and that its process for reconciling the activity in the Air Force General Funds is more comprehensive than that of the other DOD components. This report reviews the Denver center's reconciliation processes to determine (1) the progress the Denver center has made in reconciling the transaction activity in the Air Force General Funds and (2) whether the Denver center's reconciliation concepts, policies, and practices could be used in reconciling the Fund Balance with Treasury activity of other DOD components. GAO found that the Denver center has made progress in developing a comprehensive reconciliation process for the Air Force General Funds' transaction activity in the Fund Balance with Treasury accounts, primarily by increasing management attention. GAO also found that the concepts and policies developed by the Denver center to identify and resolve transaction differences could improve the reconciliation processes of the other DFAS centers that have not made as much progress.
Federal crop insurance protects participating farmers against crop losses caused by perils such as droughts, floods, hurricanes, and other natural disasters. Since 1981—the first year in which the government enlisted private insurance companies to sell and service crop insurance—federally subsidized multiple-peril crop insurance has been a principal means of managing the risk associated with crop losses. Federal crop insurance offers producers two primary levels of insurance coverage, catastrophic and buyup, which are available for major crops. Catastrophic insurance, created by the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994, was designed to provide producers with protection against extreme crop losses for a small processing fee. Buyup insurance protects against more typical and smaller crop losses in exchange for a producer-paid premium. Table 1 shows the levels of coverage available through federal crop insurance. USDA’s Risk Management Agency establishes the premiums, terms, and conditions for federal crop insurance and manages the program. When producers obtain insurance coverage, the government subsidizes the total premium for catastrophic insurance and a portion of the premium for more expensive buyup insurance. Specifically, for every dollar of buyup premium, the government subsidizes an average of 40 cents and the producer pays roughly 60 cents. Under the terms of a negotiated agreement, 17 insurance companies sell crop insurance and process claims. USDA pays these companies an administrative fee for these services. For example, the government reimburses the participating insurance companies 24.5 cents for every dollar of buyup insurance premium and 11 cents for catastrophic insurance. Furthermore, the companies share underwriting profits (the difference between premiums and claims) as well as a limited portion of any underwriting losses with the government. 
However, the government absorbs the vast majority of losses. Nonspecialty crops have experienced higher losses than specialty crops. Beginning in October 1998, USDA is required to achieve actuarial soundness, defined as a loss ratio of 1.075; that is, for every dollar in premiums, including the portion paid by the government, the claims paid would be expected to average no more than $1.075. For 1981 through 1998, the claims paid averaged $0.99 per $1.00 of premium for specialty crops, compared with $1.12 per $1.00 of premium for nonspecialty crops. Appendix I provides information on crop insurance for 1998 and the loss ratio experienced by each crop since 1981. The cost of the federal crop insurance program—including premium subsidies, company reimbursements, and underwriting losses—has averaged about $1.4 billion annually since 1995 and is estimated to be $1.6 billion for 1999. In 1998, specialty crops, such as grapes, oranges, almonds, and tomatoes, represented about 13 percent of the government's costs. Many specialty crops, however, are not covered by federal crop insurance but are instead covered by the Noninsured Crop Disaster Assistance Program, which was created by the 1994 reform act. This assistance program protects an individual producer only when an entire area, such as a county, suffers a loss. Thus, unlike federal crop insurance, this program is tied to an area's losses rather than to an individual producer's losses. The Agricultural Research, Extension, and Education Reform Act of 1998 temporarily raised the effective cost of catastrophic insurance from $50 per policy to the higher of $60 or $10 plus 10 percent of the calculated premium. The higher fee was enacted as a budget offset to provide permanent funding to pay the commissions of agents selling federal crop insurance policies.
However, the appropriations act for fiscal year 1999 replaced this provision, requiring that all purchasers of catastrophic insurance pay no more than $60 per policy. Although the Congress has made a number of changes to the crop insurance program to encourage participation, the program has had a relatively low level of participation in terms of acres planted and insured. As shown in table 2, only about 51 percent and 64 percent of specialty crop and nonspecialty crop acres, respectively, were insured in 1997, the latest year for which complete data were available. This level of participation represents a decline from 1995, particularly for nonspecialty crops. (For a more detailed discussion of participation, see app. II.) USDA insures 52 specialty crops—14 of which have been added since 1994—and plans to begin testing coverage for another 9 specialty crops by 2001. While these 61 crops represent a majority of the value of all specialty crops, insurance coverage will still not be available for about 300 crops, such as taro and parsley. Programs for specialty crop insurance have not expanded more rapidly because USDA follows a deliberate multistep process to ensure that the programs it develops are actuarially sound. The process includes collecting and analyzing data, setting appropriate premiums, and testing and evaluating the program. This process can be lengthy, typically requiring about 5 years, because, among other things, the data on production history needed to develop a specialty crop program are often not readily available. According to USDA, while the development process is necessary to ensure actuarial soundness, additional resources would allow it to evaluate more crops concurrently. Between 1981 and 1994, USDA developed insurance programs for 38 specialty crops.
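The actuarial-soundness standard discussed above reduces to a simple ratio: claims paid divided by total premium, including the government-paid portion. A short calculation, written for illustration only (the function name is ours; the per-dollar figures come from this report), compares the two crop groups with the 1.075 target:

```python
# Loss ratio = claims paid / total premium (producer-paid plus government-paid).
# The statutory actuarial-soundness target is a loss ratio of no more than 1.075.
TARGET = 1.075

def loss_ratio(claims_paid, total_premium):
    return claims_paid / total_premium

# Per-dollar experience reported for 1981 through 1998: specialty crops paid
# $0.99 in claims per $1.00 of premium; nonspecialty crops paid $1.12.
specialty = loss_ratio(0.99, 1.00)
nonspecialty = loss_ratio(1.12, 1.00)

meets_target = {
    "specialty": specialty <= TARGET,      # 0.99 meets the 1.075 standard
    "nonspecialty": nonspecialty <= TARGET # 1.12 exceeds it
}
```

On these figures, specialty crops met the standard over the period while nonspecialty crops did not, which is the contrast the report draws.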
Since the implementation of the 1994 reform act, which encouraged USDA to develop additional plans for specialty crops, the Department has developed 14 specialty crop programs, as shown in table 3. Including the 14 additions, the total number of specialty crops currently covered by the federal crop insurance program is 52. USDA expects to offer insurance for many other specialty crops over the next several years. By 2001, USDA plans to add nine new specialty crops, including, for example, cucumbers, mint, and strawberries. These 61 crops represent about 85 percent of the market value of all specialty crops. Along with adding new crops to the program, USDA expanded insurance coverage for specialty crops in other ways, including allowing producers to insure by crop variety and making the insurance of existing crops available in additional areas. For example, in 1995, USDA broadened crop insurance for grapes by offering catastrophic coverage for individual grape varieties, such as zinfandel, merlot, and cabernet sauvignon. According to USDA officials, participation—measured in terms of acres insured—increased in 1996 and 1997 after this change was instituted. In 1996, USDA expanded crop insurance for citrus trees from three counties in Texas, where it had been offered since 1983, to an additional five counties in Florida. Moreover, in 1999, USDA began pilot testing a new plan—known as adjusted gross revenue—in selected counties in Florida, Maine, Massachusetts, Michigan, and New Hampshire. This new insurance plan will provide a producer with a guaranteed level of income, which will be determined by the producer’s reported farm income for the past 5 years. It will also provide coverage for all specialty and nonspecialty crops as well as some livestock. Despite this progress, many crops remain uninsured, and many covered crops are not insured in all the areas where they are grown. 
USDA does not offer insurance for about 300 commercially grown specialty crops, which represent about 15 percent of the economic value of specialty crops grown in the United States. Many of the crops for which insurance is not available are small crops, such as taro, guava, and parsley, that are grown in limited areas. In addition, although crop insurance may exist for a particular specialty crop, the coverage may not be available in all locations where the crop is grown. For example, crop insurance for grapes is available in selected counties in Arkansas, California, Michigan, Missouri, New York, Ohio, Oregon, Pennsylvania, and Washington but not in other growing areas—specifically, selected counties in Arizona, Georgia, North Carolina, and South Carolina. According to USDA, crop insurance for grapes is not available in these states because producers have shown limited interest. Furthermore, USDA’s authority to offer revenue insurance plans for specialty and nonspecialty crops is legislatively limited by the Federal Crop Insurance Act, as amended. The act only allows USDA to offer revenue insurance on a pilot basis through 2000. According to USDA, legislative changes would be necessary to offer revenue insurance on a permanent basis. USDA’s process for developing specialty crop insurance for a particular crop is deliberate and often time-consuming, typically requiring about 5 years to complete. Specifically, collecting and analyzing data to determine whether a new insurance program is feasible can require 2 years or more, and pilot testing can add another 3 years. According to USDA, while the development process is necessarily thorough to ensure actuarial soundness, additional resources would allow it to evaluate more crops concurrently. Table 4 presents USDA’s multistep development process. 
In steps 1, 2, and 3—beginning the development process—USDA considers several criteria when selecting a new crop to insure, including legislative mandates, its own initiatives, and requests by producers and commodity groups. Appendix III discusses these criteria and their application to the 14 crops added to the program since 1995. Because data for specialty crops are often not readily available, the program development team collects data about the crop from various sources, including producer organizations and land grant universities. These data concern historical production, growing practices, and the risks associated with producing the crop. Appendix IV discusses the unique risk characteristics of specialty crops. In step 4—specifying the provisions for the new program—the development team develops appropriate premium rates by developing a statistical model using the collected data or by applying premium rates from similar crops. In addition, the team analyzes the collected data to establish insured crop prices and determine loss adjustment standards. Appendix V describes in detail the insurance plans and the rating methods USDA uses to set premiums for specialty crops. In steps 5, 6, and 7—the testing and evaluation phase—USDA introduces the new program on a pilot basis and uses the experience of this pilot to develop empirical data and refine program operations. USDA also ensures that adequate producer participation can be achieved. Adequate participation is generally considered key to achieving the program’s legislative objective of actuarial soundness. Without sufficient participation among producers, opportunities for diversification across various growing conditions and farming practices will be limited, and this limitation will jeopardize the actuarial soundness of the insurance program. 
For example, USDA developed a pilot revenue insurance policy for almonds in two California counties in 1998, but because premiums for the coverage would have been higher than premiums for already available yield insurance, almond producers indicated they would be unwilling to purchase the revenue coverage. Consequently, USDA did not initiate the program, citing concerns about the program’s actuarial soundness because of expected low participation. In recent years, new marketing strategies for crop insurance have been introduced that use endorsements by producer associations to sell insurance or that pass through administrative savings to producers. These strategies could increase producers’ participation and ultimately reduce the government’s administrative reimbursements to insurance companies, and one of these strategies could also reduce producers’ premiums. At the same time, however, according to USDA, these strategies have some potential disadvantages. For example, USDA is concerned that the strategies could prevent smaller insurance companies from competing if they cannot provide the economic incentives that larger companies provide. USDA is developing draft regulations to govern the use of the new marketing strategies. In recent years, insurance companies have used alternatives to the traditional structure of having independent agents market federal crop insurance to producers. The most common of these alternatives has an insurance company paying a fee to a producer association—such as a cooperative or processor—in exchange for the association’s endorsement and the right to use the association’s name and logo on direct mailings to the association’s members to market federal crop insurance. Since 1995, this new strategy, frequently referred to as an “endorsement agreement,” has principally occurred in California for specialty crops. 
According to USDA’s Risk Management Agency, three of the companies selling federal crop insurance engaged in an endorsement agreement with at least one producer association in 1998. These endorsements are used mostly for selling catastrophic insurance. Endorsements can contribute to increasing participation in specialty crop insurance programs. For example, according to a large association of California wine grape producers that has an endorsement agreement with one of the insurance companies, participation among the association’s members increased from roughly 20 percent in 1994, prior to entering into the agreement, to about 40 percent in 1998. Similarly, according to a key California citrus cooperative that also has an endorsement agreement, crop insurance premiums for the cooperative’s members increased from about $2.5 million in 1995 to $4 million in 1998, or roughly 60 percent. Producer associations told us that endorsements have been successful because specialty crop producers generally rely on their associations for key information about production practices and risk management. Endorsements may provide other advantages as well. They can lower insurance companies’ delivery costs by enabling the companies to reach their intended audience through targeted marketing to association members. Over the long term, therefore, USDA may be able to share in these savings by reducing the administrative reimbursements it pays to companies. Furthermore, according to USDA, endorsements may allow companies to penetrate market niches not currently reached by independent agents and to promote “one-stop shopping” because many associations and cooperatives provide multiple producer services. Another new marketing strategy, authorized by the 1994 reform act for buyup insurance, could also increase participation. 
Under this strategy, an insurance company could reduce the premiums charged to a producer if the company can deliver the program for less than its administrative reimbursement from USDA. For example, if the expenses of selling and servicing crop insurance policies are less than the administrative reimbursement, the administrative savings could be passed through to the producer in an effort to increase the company’s share of crop insurance sales. Ultimately, increased sales by a number of companies could raise participation in the crop insurance program and reduce the administrative fees the government pays insurance companies. As of February 1999, USDA had received four proposals to implement this new strategy. Although new marketing strategies may provide certain benefits to the crop insurance program, they may also undermine the program in several ways. First, USDA is concerned that the strategies could harm smaller insurance companies. For example, the strategies could prevent these smaller companies from competing if they cannot provide the economic incentives to producer associations that larger companies provide. Second, with the use of endorsements, USDA has a concern about rebating. Rebating is the offering of any benefit or valuable consideration as an inducement to purchase insurance. Rebating can occur when insurance companies pay producer organizations large endorsement fees to market crop insurance. These organizations could use the fees to provide benefits or services to those producers purchasing the insurance, such as lowering these members’ dues or providing services that are not available to those producers who did not purchase crop insurance. For example, in 1995, one cooperative with an endorsement agreement paid for catastrophic insurance for those members who agreed to sign up for the insurance. According to USDA, the cooperative was funding the cost of the catastrophic insurance from the endorsement fee it received from the insurance company. 
USDA considered this action to be a form of rebating—a direct inducement to producers to buy the coverage. Consequently, starting in 1996, USDA implemented restrictions against using endorsement fees to pay for catastrophic insurance for producers. Third, according to USDA, these strategies could reduce a company’s ability to diversify its risk over a large geographic area if marketing becomes highly concentrated. Finally, USDA believes that new marketing strategies may jeopardize its use of producer associations to independently verify data for rating, coverage, and claim calculations. This could occur because associations would be involved in selling crop insurance to their members while at the same time maintaining the production records USDA uses to settle claims. To address these potential problems, USDA is developing new regulations to govern the use of alternative marketing strategies. The draft regulations require that insurance companies selling federal crop insurance submit all marketing agreements and endorsements to USDA for approval prior to implementing them. This step is designed to ensure that these agreements and endorsements are in compliance with regulations and that the program is safeguarded. In addition, in 1999, USDA’s Risk Management Agency expects to initiate a review of new marketing strategies that will evaluate potential advantages and disadvantages in further detail. Under the now-rescinded provision of the Agricultural Research, Extension, and Education Reform Act of 1998 (P.L. 105-185, June 23, 1998), the processing fee for catastrophic insurance for many specialty crop producers would have been significantly higher than $50—as much as $3,821—and participation would have declined. While we were unable to estimate the magnitude of the decline, available studies for crop insurance show that, in general, for each 10-percent increase in insurance costs to producers, there is a 2- to 9-percent decrease in participation. 
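The rescinded fee schedule can be written as a one-line formula: the processing fee is the greater of $60 or $10 plus 10 percent of the calculated premium. In the sketch below, the premium amounts are back-solved only to reproduce the fee levels cited in this report, and the linear participation rule is our simplified reading of the cited studies, not a model from the report:

```python
# Rescinded 1998 fee schedule: the greater of $60 or $10 plus 10% of premium.
def catastrophic_fee(calculated_premium):
    return max(60.0, 10.0 + 0.10 * calculated_premium)

# Premiums back-solved to reproduce fee levels cited in the text
# ($189 average, $487 for 15% of policies, $3,821 for the top 2 percent).
fees = {premium: catastrophic_fee(premium)
        for premium in (0.0, 1_790.0, 4_770.0, 38_110.0)}

# Studies of buyup insurance: each 10-percent increase in producer cost is
# associated with roughly a 2- to 9-percent decrease in participation.
def participation_change(old_cost, new_cost, pct_drop_per_10pct):
    """Hypothetical linear reading of the elasticity range from the studies."""
    pct_cost_increase = (new_cost - old_cost) / old_cost * 100.0
    return -(pct_cost_increase / 10.0) * pct_drop_per_10pct

# A fee rising from $50 to $60 is a 20-percent cost increase, implying a
# participation decline of roughly 4 to 18 percent under this linear reading.
low_estimate = participation_change(50.0, 60.0, 2.0)
high_estimate = participation_change(50.0, 60.0, 9.0)
```

As the report cautions, the studies cover specific crops, regions, and periods, so extrapolating this linear rule to the much larger fee increases faced by high-premium specialty crop policies would overstate what is actually known.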
According to our analysis of 1997 sales for catastrophic crop insurance, the average fee of all specialty crop policies would have increased from $50 to $189 had the 1998 provision gone into effect. Table 5 shows the average fees that would have resulted from proposed fee increases and the percentage of policies affected in different premium ranges. The average fees shown reflect the amount producers would have paid if the processing fees had been increased to the greater of $60 or 10 percent of the calculated premium plus $10. For 15 percent of the policies, the average fee would have risen from $50 to $487, and for the top 2 percent of the policies, the average fee would have risen from $50 to $3,821. As the table shows, if the higher fee schedule had been implemented, the average fee would have been greater for specialty crop producers than for nonspecialty crop producers. This is because specialty crops have a higher value than nonspecialty crops—a key determinant in calculating premiums—making insurance for specialty crops generally more costly per acre. For example, the average value of six major nonspecialty crops ranges from about $120 to $720 per acre. In comparison, the value of a single specialty crop can be as high as about $8,800 per acre. According to available studies on nonspecialty crops and experts we spoke with, fee increases would lead to lower participation. However, the magnitude of the effect on participation is unclear. The studies indicate that a 10-percent increase in cost to the producer would result in a 2- to 9-percent decrease in participation. In addition, if the cost increase were larger, the decline in participation would be correspondingly larger. The data from these studies deal with specific crops, regions, and time periods. 
Furthermore, these studies generally looked at nonspecialty crops, such as corn and wheat, as well as at buyup crop insurance prior to the introduction of catastrophic insurance, and are therefore most relevant to buyup insurance. For these reasons, it is not possible to project directly from these studies to determine how much lower participation in specialty crop insurance would be as a result of an increase in fees. While premiums can affect producers’ participation, other factors, such as the availability of federal payments for crop losses, can influence a producer’s decision to purchase crop insurance. If producers believe that disaster relief will be forthcoming when growing or market conditions are poor, they could view federal payments for crop losses as a free substitute for crop insurance. Under these conditions, federal payments could have the unintended effect of reducing participation. We provided USDA with a draft of this report for review and comment. USDA made a number of technical comments and suggestions, which we incorporated, as appropriate. USDA’s comments and our responses are presented in detail in appendix VI. To determine the progress USDA has made in expanding federal insurance coverage for specialty crops, we reviewed agency documentation and discussed with USDA officials their efforts to expand the number of locations for existing specialty crop programs and to develop new programs. We described the methods used to develop premiums for specialty crop insurance programs by summarizing the basic specialty crop plans and rating methods used by USDA. We also interviewed selected agency officials and academicians familiar with the specialty crop insurance area. 
To review the new marketing practices insurance companies have introduced for specialty crops and to identify potential advantages and disadvantages of the practices, including their effect on producers’ participation, we reviewed pertinent documents from USDA and producer associations. Our analysis included discussions with USDA as well as with selected producer associations and insurance companies in key specialty crop states, including California and Florida. To examine the potential effect of increased insurance costs on specialty crop producers’ participation in the crop insurance program, we analyzed USDA’s crop insurance databases to determine what the impact would have been for different policy sizes if the increases had been applied to catastrophic insurance in 1997. We also reviewed studies performed by economists and academic experts on producers’ responses to changes in the price for crop insurance. We conducted our review from June 1998 through March 1999 in accordance with generally accepted government auditing standards. Although we did not independently assess the accuracy and reliability of USDA’s computerized databases, we used the same files USDA uses to manage the crop insurance program, which are the only data available. We are sending copies of this report to Senator Richard Lugar, Chairman, and Senator Tom Harkin, Ranking Minority Member, Senate Committee on Agriculture, Nutrition, and Forestry; Representative Larry Combest, Chairman, and Representative Charles Stenholm, Ranking Minority Member, House Committee on Agriculture. We are also sending copies of this report to: The Honorable Dan Glickman, Secretary of Agriculture; The Honorable Kenneth Ackerman, Administrator of the Risk Management Agency; and The Honorable Jacob Lew, Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions about the report, please contact me on (202) 512-5138. 
Major contributors to this report are listed in appendix VII.

The tables in this appendix show information on crop insurance for 1998 and the loss ratio experienced by each crop since 1981. Table I.1 shows these data for specialty crops, while table I.2 shows these data for nonspecialty crops. [Tables I.1 and I.2 not reproduced.] Citrus includes grapefruit, lemons, mandarins, murcotts, oranges, tangelos, and tangerines. Stonefruit includes apricots, nectarines, and peaches grown in California. The U.S. Department of Agriculture (USDA) did not report 1998 data for macadamia nuts because the policy was extended in order to accommodate modifications made during 1998. The revised policy is in place for 1999. Nursery, tree, and raisin crops use a measurement other than acres.

The tables in this appendix show the percentage of participation, in terms of acres planted and insured, for selected specialty crops for 1997, the latest year for which complete data were available. Table II.1 shows nationwide participation by specialty crop category; table II.2 shows nationwide participation for a cross-section of specialty crops and the major nonspecialty crops; and table II.3 shows the major specialty crop states and selected specialty crops they produce. [Tables II.1 through II.3 not reproduced; table II.3 lists crops such as fresh peppers, fresh and processed tomatoes, apricots, nectarines, peaches, plums, and fresh and processed sweet corn.]

The U.S. Department of Agriculture (USDA) considers several criteria when selecting crops to review for insurance, with requests for insurance for specific crops being a major factor.
These requests may come from producers; producer associations; reinsured companies; individual Members of Congress; USDA’s regional service offices; or other USDA agencies, such as the Farm Service Agency. According to USDA, several factors are considered in setting priorities for these requests. First, USDA gives priority consideration to developing new crop insurance programs for crops that, for the most recent year, meet at least one of four criteria for economic significance: (1) within the agricultural statistics district that is to be covered, the value of the crop exceeds $3 million; (2) within the state that is covered, the value of the crop exceeds $9 million; (3) within the area served by the USDA regional service office responsible for administering the insurance program for that crop, the value of the crop exceeds $15 million; or (4) at the national level, the value of the crop exceeds $30 million. Second, USDA considers producer interest, as measured in a number of ways. Specifically, high levels of payments for disaster assistance and the Noninsured Crop Disaster Assistance Program for a crop may signal a potentially high interest among producers of that crop for an insurance program. In addition, USDA relies on the recommendations resulting from the detailed feasibility studies of each crop performed by its Economic Research Service and on recommendations from its regional service offices regarding producer and private company interest. Because USDA considers a number of factors in addition to interest when selecting a crop to review, it may not ultimately develop an insurance program for each of the crops on the list. For example, adequate producer participation is required for a crop insurance program to be actuarially sound. Before implementing a new insurance program, USDA requires documentation showing that a minimum of 10 percent of the crop’s producers would be expected to participate in the insurance program. 
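The four economic-significance thresholds and the participation requirement described above amount to a screening rule that can be sketched directly. The crop values and function names below are hypothetical illustrations, not USDA data or procedures:

```python
# Screening rule from the selection criteria: a crop gets priority review if
# its most recent year's value exceeds at least one threshold, and a new
# program also requires documentation that at least 10 percent of the crop's
# producers would be expected to participate.
THRESHOLDS = {  # crop value thresholds, in dollars
    "district": 3_000_000,         # agricultural statistics district
    "state": 9_000_000,
    "regional_office": 15_000_000, # USDA regional service office area
    "national": 30_000_000,
}

def economically_significant(values_by_level):
    """True if the crop's value exceeds the threshold at any one level."""
    return any(values_by_level.get(level, 0) > limit
               for level, limit in THRESHOLDS.items())

def meets_participation_test(expected_participation_rate):
    return expected_participation_rate >= 0.10

# Hypothetical crop: $4 million in one district clears that threshold alone,
# even though its national value falls short of the $30 million bar.
priority = economically_significant({"district": 4_000_000,
                                     "national": 12_000_000})
eligible = priority and meets_participation_test(0.15)
```

Passing the screen is necessary but not sufficient: as the report notes, a program may still be suspended if proper rating makes it too expensive to attract adequate participation, or if data or statutory constraints intervene.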
However, some new programs, once analyzed and properly rated to account for the risks involved, may be too expensive to obtain adequate producer participation. In such cases, USDA may suspend development activity. Furthermore, if a private sector insurance program is generally available, the Federal Crop Insurance Act of 1980 (P.L. 96-365, Sept. 26, 1980), as amended, prohibits USDA from implementing a competing insurance program. In addition, sufficient data must be available to develop an insurance program, including production history, pricing information, an analysis of perils, an analysis of marketing channels, and other pertinent information. Generally, USDA obtains this information from producers, but it often draws on other sources as well, including producer associations and land grant universities. If this information is not available or cannot be created, the development of an actuarially sound insurance program may not be feasible. Once crops are selected, the order in which new programs are ready for initial pilot testing can change because of the varying lengths of development cycles. For example, the development of a large and diverse national insurance program for aquaculture, the commercial production of fish, began in 1994, and the development of an insurance program for wild rice began in early 1998. However, because of the complexity of developing the aquaculture program, it will not be ready for implementation until 2000, while the wild rice program, a relatively simple program, was approved for pilot testing for 1999. The eight new crop insurance programs USDA is offering in 1999 meet various priority selection criteria. For example, three of the programs—cabbage, cherries, and watermelons—each exceed $30 million in total U.S. economic value.
Two other crop programs—crambe and mustard—are being offered because the crops can be included in a crop rotation cycle with wheat to lessen the impact of the scab disease occurring in North Dakota and surrounding areas. The remaining three crop programs meet other criteria, including legislative mandates and readily available data. In addition, many of the crops scheduled for pilot testing in 2000 or later years have a U.S. economic value exceeding $30 million, such as aquaculture, cucumbers, and strawberries. Table III.1 shows the 31 crops USDA is considering for pilot testing as of March 1999. Furthermore, USDA has received requests for 10 additional crops for which development has not yet begun. These 10 crops are amaranth, chicory, kenaf, lupins, onion seed, ramie, bahia, spelt, turnip roots, and various herbs. While the diverse nature of specialty crops makes describing their insurance risks difficult, they often tend to have several key characteristics in common that differentiate them from the insurance risks presented by nonspecialty crops. These key characteristics are (1) greater market price risk, (2) unique production risks, (3) a strong relationship between crop prices and farm-level yields, and (4) the manner in which risk has traditionally been managed. These characteristics often derive from the high perishability of many specialty crops. For many specialty crops, market price risk is a more important factor than production risk, which is not the case for most nonspecialty crops. Unlike nonspecialty crops, specialty crops are generally highly perishable, often do not store well, and frequently experience greater price volatility. Because of specialty crops’ perishability, it is difficult for producers to adjust to short-run shifts in supply and demand other than by raising or lowering the price. 
Consequently, many specialty crops, such as fresh market fruits and vegetables, experience a greater degree of price volatility than nonspecialty crops during the growing season. Conversely, because producers can store nonspecialty crops, they can often sell their crop at the most opportune time. Furthermore, unlike most nonspecialty crops, most specialty crops are not traded on commodity exchanges, which precludes producers from using these markets to hedge price risk. While many specialty crops experience greater market price risk because they are more perishable than nonspecialty crops, other specialty crops have fewer production risks, decreasing the need for federal crop insurance. For instance, because many specialty crops are irrigated, they are not subject to drought, which is one of the most significant perils for nonspecialty crops. Certain crops, such as strawberries and tomatoes, can produce fruit for several weeks, reducing the risk that the producer may not be able to harvest because of excess moisture or other perils. Similarly, vegetable producers often grow more than one kind of vegetable during the year or have multiple plantings of the same crop during the growing season. Furthermore, many specialty crops are perennials, such as tree and vine crops, which produce fruit or nuts year after year without replanting. Because a loss normally affects only the fruit or nuts and not the tree or vine, the producer need only insure the value of the crop, not the value of the trees or vines. Specialty crops also have higher total production costs per acre than nonspecialty crops. Therefore, for specialty crops that have high production costs as well as high harvest costs, such as strawberries, insurance liability can be limited by insuring only preharvest costs.
As a result, if a loss occurs prior to harvest for a specialty crop, the producer has not yet incurred much of the production cost, reducing the need to be insured for the total value of the crop. As we discussed in 1998, the relationship between crop prices and farm-level yields is an important component of risk assessment because an increase in price caused by a decline in aggregate crop yields can compensate for the effects of decreased production. This tends to be the case when production areas are geographically concentrated. Although negative price-yield relationships are observed for both specialty and nonspecialty crops, for some specialty crops this negative price-yield relationship is much stronger. For example, for some specialty crops, 80 percent of production may be grown in one county in the United States. Therefore, if production in this county decreases, prices can rise dramatically and total revenues at the farm level may stay the same or even increase. That is, while the producer may face greater price variability for growing certain specialty crops, the producer may also experience a positive revenue effect because of the stronger price-yield relationship. At the same time, other specialty crops, such as apples, do not have this strong negative relationship between prices and yields. For instance, for apples, because of the diversity in the location of production, a shortage in one part of the country can be offset by greater production in another part, mitigating the strength of the price-yield relationship for this crop. The need for federal crop insurance for specialty crops is reduced because of another characteristic prevalent in their markets—the use of vertical arrangements such as “producer-processor” contracting to manage both price and production risk.
In general, vertical arrangements are the result of market incentives, including risk reduction and the avoidance of processors’ market power, that encourage producers to integrate their operations to include the processing and marketing of their own production. These “producer-processor” relationships can include producers owning marketing and shipping facilities, but they mainly consist of various types of contractual arrangements. For instance, the processing industry for tomatoes in California transacts nearly its entire production through producer-processor contracts. This arrangement reduces risk to the producer and the processor by predetermining a specific price, for a certain variety of tomato, at a specific delivery date. Such coordination of production and marketing is especially advantageous in terms of managing the flow of product in periods of oversupply and low prices, which are common in these industries. Moreover, because many specialty crop producers may not be able to integrate unilaterally, many integrate collectively by forming marketing cooperatives that are active in such functions as storage and processing. Examples of such marketing cooperatives include Sunkist (citrus), Sunsweet (prunes), Calavo (avocados), Sunmaid (raisins), Blue Diamond (almonds), and Diamond Walnut. In California, these marketing cooperatives control half or more of the market volume of these crops. Although many variations exist, the three major categories of specialty crop insurance are (1) yield (production) plans, (2) revenue insurance plans, and (3) percent-of-damage plans. In addition, USDA is piloting a new type of plan in 1999 known as the adjusted gross revenue plan. USDA also uses three types of rating methods to calculate premiums for specialty crops. The methods are comparative rating, statistical modeling, and experience rating. For each of these plans, as well as the rating methods, USDA has to customize the insurance for a given crop.
For example, a yield plan for one specialty crop would have a different premium structure than the plan for another crop. This is generally not the case for nonspecialty crops covered by federal crop insurance. This section discusses the types of crop insurance plans currently offered or being piloted by USDA. The plans are yield, revenue, and percent-of-damage. For specialty crops, USDA offers three types of yield plans—the actual production history, grower yield certification, and dollar plans. Together, these plans account for a majority of all specialty crop insurance offered by USDA. These three plans guarantee payments on the basis of lost yield. The actual production history plan is the most widely used insurance for specialty crops. Generally, premiums under this plan are calculated similarly for both specialty and nonspecialty crops. The plan guarantees payments that are based on a percentage of the individual producer’s historical yield multiplied by a percentage of a preestablished market price. As with actual production history plans for nonspecialty crops, the specialty crop producer’s premium is generally calculated on the basis of one of nine categories for yield amounts (known as yield spans). The premium rate charged to the producer is based on the yield span in which the producer’s actual production history yield falls and the chosen coverage level—the percent of production that is to be protected. Like the actual production history plan, the grower yield certification plan—sometimes classified as a subset of the actual production history plan—is based on a certain yield per acre. However, in a grower yield certification plan, USDA has set up mapping areas—counties or larger areas—in which the yield guarantee is based on the average historic yield in the producer’s geographic area, instead of a producer’s individual average historic yield. Therefore, all insured producers in a county or designated mapping area receive the same premium rate. 
Unlike an actual production history plan, a grower yield certification plan has no yield spans for determining premium rates. Under this plan, a claim is paid if a producer’s yield falls short of the expected yield times the selected coverage level. For some crops, however, USDA found that there is enough variability in yields to establish a limited number of yield spans under this plan. Accordingly, these crops are being converted from grower yield certification plans to actual production history plans, as appropriate. The dollar plan insures certain specialty crops that have fairly consistent costs of production for expenses that are incurred prior to harvest. Therefore, in the event of a crop failure, all participating producers in a county would be compensated for these preharvest expenses. For each type of crop in a county, the insured guarantee is a fixed dollar amount per acre, reflecting the USDA-calculated preharvest costs of production. USDA bases this fixed dollar amount on the cost of production, expected market prices, and yield information, and often obtains these data from university extension programs. Producers can insure their crop for between 50 and 75 percent of this fixed dollar amount. Because the price of some specialty crops fluctuates considerably, crop revenues are also taken into account to prevent insuring for more than the expected crop return. When insurance claims are settled under the dollar plan, the fixed-dollar guarantee is compared with the dollar value of production, that is, the crop yield times the higher of a USDA price or a market price. If the dollar value of production is less than the fixed-dollar guarantee, the producer receives an insurance payment. To receive a payment under this plan, however, the producer must have had a crop loss. When claims are paid for losses, they are adjusted to reflect reduced protection if the crops are destroyed at a stage earlier than harvest.
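The dollar plan settlement just described reduces to a simple comparison between the insured share of the fixed-dollar guarantee and the dollar value of production. The sketch below is purely illustrative; the function name and all per-acre figures are hypothetical and are not drawn from USDA's actual rating tables:

```python
def dollar_plan_indemnity(fixed_guarantee, coverage_level, crop_yield,
                          usda_price, market_price):
    """Illustrative dollar-plan settlement (all figures hypothetical).

    The insured guarantee is the producer-selected share (50-75 percent)
    of USDA's fixed per-acre dollar amount.  The dollar value of
    production is the yield times the higher of the USDA price or the
    market price; an indemnity is paid only on the shortfall.
    """
    guarantee = fixed_guarantee * coverage_level
    value_of_production = crop_yield * max(usda_price, market_price)
    return max(0.0, guarantee - value_of_production)

# Hypothetical example: a $4,000-per-acre preharvest guarantee insured at
# 75 percent, with a short crop of 200 cartons valued at the higher of
# two prices.
print(dollar_plan_indemnity(4000, 0.75, 200, 9.50, 10.00))  # 1000.0
```

Note that the shortfall test mirrors the report's rule: no payment is owed when the dollar value of production meets or exceeds the guarantee.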
Examples of crops covered under the dollar plan are fresh market tomatoes, peppers, and sweet corn. While most specialty crops are insured under one of these three plans, certain crops can be insured under more than one, depending upon such factors as the availability of data in the area and the risks perceived by local USDA representatives. Unlike traditional yield coverage, the revenue insurance plan protects producers from declines in revenue caused by low prices, low yields, or both. In a revenue insurance plan, the guarantee is a producer-chosen percentage (coverage level) of the expected revenue for that particular crop in the market. To establish the preseason revenue guarantee, USDA collects information on the producer’s individual production history and the county average price for the specialty crop. While the revenue insurance plans for nonspecialty crops are applicable to a broader range of crops, the plans for specialty crops have to be customized for the unique characteristics of each crop. For example, USDA has developed pilot revenue insurance plans of limited scope and duration for avocados and pecans. In 1999, USDA began pilot testing a new type of revenue insurance policy, called adjusted gross revenue, in selected counties in Florida, Maine, Massachusetts, Michigan, and New Hampshire. This new insurance plan will provide a producer with a guaranteed level of income as determined by the producer’s reported farm income for the past 5 years. It will also provide coverage for both specialty and nonspecialty crops as well as some livestock. USDA insures certain fruit crops, trees, nursery crops, and other perennial crops with a percent-of-damage plan. There are two different versions of this plan—the “lost quantity” and the “lost value” plans. In both, payments are made when a measured amount of damage exceeds some predetermined deductible.
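The deductible trigger common to both versions of the percent-of-damage plan can be sketched as follows. The report specifies only that payment is owed when measured damage exceeds the deductible; paying on the excess damage times the dollar amount of protection is a simplifying assumption for illustration, and all figures are hypothetical:

```python
def percent_of_damage_indemnity(protection, percent_damage, deductible):
    """Illustrative percent-of-damage trigger (hypothetical settlement).

    The report states only the trigger: an indemnity is owed when the
    measured percent of damage exceeds the deductible.  Paying the
    excess damage times the dollar protection is an assumed,
    simplified settlement rule, not USDA's actual formula.
    """
    if percent_damage <= deductible:
        return 0.0
    return protection * (percent_damage - deductible)

# Hypothetical: $100,000 of tree protection, 40 percent measured damage,
# and a 10 percent deductible.
print(round(percent_of_damage_indemnity(100_000, 0.40, 0.10), 2))  # 30000.0
```

Damage at or below the deductible produces no payment, which matches the trigger the report describes for both the "lost quantity" and "lost value" variants.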
The guarantee for the “lost quantity” plan is based on a percent of damage to the crop, such as damage to a whole tree or to limbs on a tree. USDA must pay an indemnity when the percent of damage, as evaluated by the quantity of totally or partially destroyed property (fruit crop or trees), exceeds the deductible. For the “lost value” plan, the guarantee is based on a dollar amount of protection times a coverage level. USDA pays indemnities when the percent of dollar damage exceeds a deductible. Examples of crops covered under variations of this plan include Florida citrus fruit, Florida and Texas citrus trees, macadamia trees, and nursery plants. Premium rate-setting methods used in the insurance plans for specialty crops include the comparative rating, experience rating, and statistical modeling methods. In general, rating methods for specialty crops tend to be customized for each crop and location. Comparative rating, also called judgmental rate setting, is used whenever the available data are limited. Generally, some amount of data can be found for a crop in an area, but the scope of the data is not adequate to measure the probable losses under a variety of weather conditions. In such cases, the available data are compared with the insurance experience for crops that have been insured in the area. A judgment as to the relative riskiness is needed: that is, is the crop in question relatively more or less risky than the crop with more adequate data? A premium rate is then established by using the existing premium rates for the reference crop or crops as a benchmark. For the experience rating method, USDA considers only the actual insurance experience of a crop and uses only those data to compute the required premium rate. One example of this method is the calculation of loss-cost ratios to develop premium rates.
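At its core, the loss-cost calculation mentioned above divides cumulative claims payments by cumulative liabilities over the historical record. The sketch below uses hypothetical figures; an actual USDA rate would typically include further loadings on top of this base rate:

```python
def loss_cost_rate(annual_claims, annual_liabilities):
    """Experience-rating sketch: the loss-cost ratio is total claims
    payments divided by total liabilities over the historical record.
    All figures are hypothetical; this is a base rate only and omits
    any additional loadings a real premium rate would carry."""
    return sum(annual_claims) / sum(annual_liabilities)

# Ten hypothetical crop years of claims against a constant $2 million
# liability, reflecting the report's point that many years of loss data
# are needed for the ratio to be credible.
claims = [120_000, 40_000, 310_000, 0, 95_000,
          60_000, 0, 180_000, 25_000, 70_000]
liabilities = [2_000_000] * 10
print(round(loss_cost_rate(claims, liabilities), 4))  # 0.045
```

In this hypothetical record, $900,000 of claims against $20 million of liability yields a base rate of 4.5 cents per dollar of liability.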
Briefly, USDA uses average coverage and production data, among other things, to calculate a loss-cost ratio—claims payments divided by liabilities. Many years of historical loss data are typically needed to reflect future losses adequately. Statistical modeling uses empirical or assumed probability distributions of key variables and draws thousands of observations from those distributions. At the end of the analysis, the events that resulted in a loss are totaled and divided by the total liability at risk. The result is an estimated premium rate. For example, USDA used statistical modeling to determine rates for the pilot revenue insurance plans for avocados and pecans. Simply put, the premium rates offered in these plans are developed through statistical models that construct a revenue distribution—a depiction of expected farm revenues—on the basis of actual price and yield data. In addition, USDA used statistical modeling to set rates for fruit trees in Florida, a program that provides insurance coverage for physical damage to the trees.

1. We agree. The final report was revised to reflect USDA’s comment, as appropriate.

2. We agree and have revised our report to state that the 5 years includes the testing phase of the development process. Also, while we recognize that USDA developed the adjusted gross revenue insurance plan for pilot testing in 14 months, other plans may require longer than 2 years to reach pilot testing, as we discuss in appendix III of our report. For example, the aquaculture plan is in its fifth year of development and has yet to begin testing.

3. We agree and acknowledge in our report that the adjusted gross revenue plan, if successful, will provide coverage for all specialty and nonspecialty crops.

4. We do not believe table 2 of our report is misleading.
While crop insurance participation was required in 1995 as a condition of eligibility for certain federal farm programs, participation in recent years has declined, as table 2 shows. In October 1998, the Congress passed major ad hoc disaster assistance legislation because of losses in the Plains States but also because of insufficient participation in the crop insurance program.

5. We agree that since 1994, in addition to developing insurance programs for specialty crops, USDA’s resources have also been used to develop insurance programs for nonspecialty crops. Thus, we have added this information to our report.

6. We agree that the timing of premium increases may influence their acceptance. However, this is one of many factors, such as the level of debt for the farm, held constant in our analysis.

7. We agree and have revised our report to reflect the fact that we are focusing on several key characteristics of specialty crops that differentiate them from the insurance risks presented by nonspecialty crops. These characteristics often derive from the perishable nature of most specialty crops.

8. We agree it is more appropriate to use the terms comparative rating, experience rating, and statistical modeling and have revised our report accordingly.

9. We agree that USDA’s authority to offer revenue insurance plans is limited by the Federal Crop Insurance Act to a pilot program basis. Thus, we revised our report to reflect this limitation.

Robert C. Summers, Assistant Director
Thomas M. Cook, Evaluator-in-Charge
Charles W. Bausell, Jr.
Carol E. Bray
Ruth Anne Decker
Barbara J. El Osta
Carol Herrnstadt Shulman
Pursuant to a congressional request, GAO reviewed the availability of the Department of Agriculture’s (USDA) federal crop insurance for specialty crops, focusing on: (1) USDA’s recent progress in expanding coverage to specialty crops; (2) the new marketing practices insurance companies have introduced for specialty crops and the potential advantages and disadvantages of the practices, including their effect on producers’ participation; and (3) the potential effect on participation by producers in the catastrophic crop insurance program if they were charged higher fees.
GAO noted that: (1) USDA insures 52 specialty crops and plans to begin testing coverage for another 9 specialty crops by 2001; (2) these 61 crops represent a majority of the value of all specialty crops, but insurance coverage will not be available for about 300 crops; (3) while programs for specialty crop insurance have expanded in recent years, more rapid expansion has not occurred because USDA follows a deliberate multistep process involving the assessment of risk and setting of premiums to ensure that the programs it develops are actuarially sound; (4) this process, including testing, is lengthy, typically requiring about 5 years, because, among other things, the production history data needed to develop a specialty crop program are often not readily available; (5) according to USDA, while the development process cannot be accelerated because of the need to ensure actuarial soundness, additional resources would allow USDA to evaluate more crops concurrently; (6) in recent years, insurance companies have used alternatives to the traditional strategy of having independent agents market federal crop insurance to producers; (7) one alternative strategy uses endorsements--an insurance company pays a fee to a producer association to promote the sale of its insurance product; (8) a proposed strategy would allow an insurance company to pass through administrative savings to producers in the form of reduced premiums; (9) these strategies could increase producers' participation and, ultimately, if USDA chooses to share in these administrative cost savings, reduce the administrative fees the government pays insurance companies; (10) however, these strategies have some potential disadvantages; (11) under the rescinded provision of the Agricultural Research, Extension, and Education Reform Act of 1998, the increase in the processing fee for many specialty crop farmers would have been large and participation would have declined; and (12) while GAO was unable to estimate the 
magnitude of the decline, available studies on traditional crop insurance show that, in general, for each 10-percent increase in fees, there is a 2- to 9-percent decrease in participation.
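The participation sensitivity cited above can be applied mechanically to any proposed fee increase. The linear scaling below is our own simplification for illustration, not a model drawn from the studies GAO reviewed:

```python
def participation_decline_range(fee_increase_pct):
    """Scale the 2- to 9-percent participation decline observed per
    10-percent fee increase.  This linear extrapolation is an assumed
    simplification; the underlying studies need not be linear."""
    low = 2.0 * fee_increase_pct / 10.0
    high = 9.0 * fee_increase_pct / 10.0
    return low, high

# A hypothetical 50-percent fee increase would imply a projected
# participation decline of roughly 10 to 45 percent under this scaling.
print(participation_decline_range(50))  # (10.0, 45.0)
```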
The Army faces enormous equipping challenges while conducting operations in Iraq and Afghanistan and restructuring to a modular force. The Army has four key initiatives underway that affect efforts to equip the force: the establishment of modular units, expansion of the force, equipment reset, and reconstitution of prepositioned equipment. The Army’s modular restructuring initiative, which began in 2004, is considered the most extensive reorganization of its force since World War II. This transformation was initiated, in part, to support current operations in Iraq and Afghanistan by increasing the number of combat brigades available for deployment overseas. The foundation of modular restructuring is the creation of new, standardized, modular units that change the Army’s legacy division-based force structure to smaller, more numerous brigade formations embedded with significant support elements. A key goal of the modularity initiative is for modular brigades to have at least the same combat capability as a brigade under the division-based force. The new modular brigades are expected to be as capable as the Army’s existing brigades partly because they will have different equipment, including key enablers such as advanced communications and surveillance equipment. Moreover, in contrast to the Army’s previous division-based force, modular National Guard and Army Reserve units will have the same design, organizational structure, and equipment as their active component counterparts. In addition, the Secretary of Defense announced in January 2007 an initiative to expand the Army by adding more than 74,200 soldiers and thereby creating six active brigade combat teams and additional modular support units. This planned expansion is intended to allow the Army to revitalize and balance the force, reduce deployment periods, increase time soldiers spend at home station between deployments, increase capability, and strengthen the systems that support the forces.
The Army relies on equipment reset and prepositioned equipment to improve equipment availability. Reset is the repair, replacement, and modernization of equipment that has been damaged or lost as a result of combat operations. The Army prepositioned equipment program is an important part of DOD’s overall strategic mobility framework. The Army prepositions equipment at diverse strategic locations around the world in order to field combat-ready forces in days rather than the weeks it would take if equipment had to be moved from the United States to the location of the conflict. The total cost to restructure and rebuild the Army is uncertain, but this effort will likely require many billions of dollars and take at least several more years to complete. Our analysis of Army cost estimates and cost data indicates that it is likely to cost at least $190 billion to equip modular units, expand the force, reset equipment, and replace prepositioned equipment from fiscal years 2004 through 2013. However, these estimates have limitations and could change. For example, the Army is likely to continue to have shortfalls of some key equipment beyond that period and believes it will require additional funding to equip modular units through fiscal year 2017. Although the Army has not identified a total aggregate cost for its key equipping initiatives, it has previously reported some cost estimates and cost data for equipping modular units, expanding the Army, resetting equipment, and restoring prepositioned stocks. However, these estimates have some limitations because they are based on incomplete information, have not been updated, or may change as a result of the evolving nature and unknown duration of ongoing operations in Iraq and Afghanistan. As a result, the full costs of these equipping efforts are unclear but will be substantial.
Based on our analysis of various sources of Army cost data, it appears that the cost of these initiatives will exceed $190 billion between fiscal years 2004 and 2013 (see table 1). These figures do not include data on the Army’s longer term transformation efforts such as the Future Combat System. The John Warner National Defense Authorization Act for Fiscal Year 2007 required the Army to report annually on its progress toward fulfilling requirements for equipment reset, equipping of units transforming to modularity, and reconstitution of equipment in prepositioned stocks. In its February 2008 report, the Army stated that there is no longer a distinguishable difference between equipment purchased for modular restructuring and other modernized fielding. The report does not address future costs in detail, nor does it provide significant detail about progress achieved to date with funds that have already been appropriated. As a result, it is becoming increasingly difficult to track overall progress and costs. The following sections further describe the cost and status of the Army’s key initiatives, including modular restructuring, expanding the force, resetting equipment, and restoring prepositioned stocks. These initiatives will drive much of the costs of equipping the Army for the next several years. The Army has made progress establishing modular units, but this initiative will likely cost billions more than the Army originally estimated because the Army’s estimate was based on some assumptions that no longer appear valid and was developed before some modular unit designs had been finalized. As a result, the Army now believes it will require additional funding through fiscal year 2017 to equip its modular units. However, it has not revised its 2005 cost estimate to reflect this. Moreover, because it will take time to procure equipment once funds are appropriated, units may not receive all scheduled equipment until 2019.
In early 2005, the Army estimated that converting the Army to a modular design would cost approximately $52.5 billion from fiscal years 2005-2011, which was incorporated in a funding plan approved by the Office of the Secretary of Defense. The funding plan included costs for equipment, sustainment and training, and construction/facilities. As shown in table 2, most of these funds—$43.6 billion—were designated for equipment purchases. The Army made the decision to create modular units knowing that it would take several years after units were established to equip and staff them at authorized levels. At the end of fiscal year 2007, the Army had converted about two-thirds of its force to modular units. By the end of fiscal year 2008, the Army projects it will have converted 277 of 327 modular units (about 85 percent). The Army currently projects that the unit restructuring will be completed by fiscal year 2013. However, our ongoing work shows that the Army will continue to have significant shortfalls of key equipment that are critical to achieving the planned benefits of the modular force after the Army receives planned funding for fiscal years 2005-2011. For example, the Army projects that it will still need hundreds of thousands of modern equipment items including intelligence equipment, advanced radios, and trucks. In place of more modern equipment, many Army units will continue to have some older equipment that does not necessarily provide the same capability as the more modern counterparts. The Army has stated that it plans to request funds through 2017 to help fill modular unit equipment shortfalls. However, it has not revised its initial $43.6 billion estimate, even though it was based upon several assumptions that no longer appear valid. 
Specifically, we have reported that the Army believes it will need additional funding to equip modular units because its 2005-2011 funding plan was developed before some modular unit designs had been finalized, assumed that Army National Guard and reserve units would retain some older models of equipment, and assumed that significant quantities of equipment would be returned from Iraq in good enough condition to help equip modular units. Additional explanation of each of these factors follows. At the time the Army’s cost estimate was developed, the Army’s modular designs were incomplete, so budget analysts were uncertain about the exact number of personnel and how many and what type of equipment items would be needed for modular units. For example, on the basis of lessons learned, the Army has reconfigured some of the modular unit designs and has added capabilities for force protection and route clearance to counter specific threats faced by deployed units. Further, because the number and composition of National Guard units had not been developed, budget analysts made certain assumptions about how much funding would be required by National Guard units to convert to the new modular designs. When the Army began to implement its modular restructuring initiative, it planned for the National Guard to establish 34 Brigade Combat Teams plus an additional number of support brigades. The 2006 Quadrennial Defense Review, however, recommended that the Army establish only 28 Brigade Combat Teams and convert the remaining units to support brigades. In addition, the Army’s original plan for equipping modular units did not fully consider the equipping implications associated with the Army National Guard’s changing role in supporting military operations. Since 2001, the Army National Guard’s role has changed from a strategic reserve force to an operational force that is used to support a wide range of military operations at home and abroad.
Prior to 2001, Army National Guard units were generally equipped with older equipment and at lower levels than comparable active duty units because it was assumed that they would have considerable warning and training time before deploying overseas. However, senior Army officials have determined that the National Guard’s modular units should be structured like those in the active component and receive similar equipment since the Guard has become an operational force that deploys along with active units. As a result, senior Army officials have stated that the Army plans to request additional funds for Army National Guard equipment. In addition, the Army National Guard also has significant domestic missions, and equipment needs for those missions are uncertain. In January 2007, we issued a report on actions needed to address National Guard domestic equipment requirements and readiness. We found that DOD had not worked with the Department of Homeland Security to define National Guard requirements for responding to the 15 catastrophic scenarios developed by the Homeland Security Council. As a result, the equipment requirements and the funding needed to provide equipment for such missions are unknown. Last, when developing its cost estimate for equipping modular units, the Army assumed that significant quantities of equipment would come back from Iraq and be available, after some reset and repair work, to be distributed to new modular units. Given the heavy use of equipment in Iraq and Afghanistan, this assumption may no longer be valid. The increased demands for equipment used in Iraq operations have had a dramatic effect on equipment availability. This demand reduces expected service life, creates significant repair expenses, and raises uncertainty as to whether it is economically feasible to repair and reset these vehicles. Further, some vehicles currently being operated in theater may be replaced altogether by newer vehicles offering better protection.
DOD’s plan to expand the size of the Army by over 74,000 personnel will also add to the Army’s equipment needs. This planned expansion includes building six additional active modular infantry brigade combat teams and some additional modular support units. In January 2007, the Army estimated that this expansion would cost approximately $70.2 billion including personnel, equipment, facilities, and other costs. The equipment portion of this estimate was $17.9 billion. However, in January 2008, we reported that the Army’s overall estimate was not transparent or comprehensive. We also found that certain aspects of the estimate, such as health care costs, may be understated and that some factors that could potentially affect the Army’s funding plan are still evolving. With regard to equipment costs, we could not determine how the Army calculated its procurement estimate because Army budget documents do not identify key assumptions or the steps used to develop the estimate. According to best practices, high-quality cost estimates use repeatable methods that will result in estimates that are comprehensive and can also be easily and clearly traced, replicated, and updated. Given the magnitude of the Army’s funding plan and potential changes to the plan, we recommended that the Secretary of Defense direct the Secretary of the Army to provide Congress with a revised funding plan for expanding the force and adhere to a high quality cost estimating methodology. In February 2008, the Army revised its overall cost estimate for expanding the force to $72.5 billion. According to Army documents, the Army now assumes that $18.5 billion will be needed to procure equipment for combat brigades and support units being created under the Army’s expansion efforts. 
Finally, in October 2007, the Army also announced a plan to accelerate the expansion implementation timelines for the active Army and Army National Guard from fiscal year 2013 to fiscal year 2010, which will likely further exacerbate equipment shortfalls. The Army has not yet developed a revised funding plan for implementing this acceleration but plans to do so as part of its effort to prepare its fiscal years 2010-2015 budget plan later this year. As a result, it is not clear how the decision to accelerate the expansion effort will affect equipment costs. To improve near-term readiness of nondeployed units, the Army has received substantial funds in recent years to rebuild the force by resetting damaged and worn equipment and reconstituting its prepositioned equipment sets. However, the Army has not identified the overall requirements for these efforts, and the total cost of these initiatives is uncertain. In addition to procuring new equipment, the Army is working to rebuild the force by resetting its existing equipment to support the ongoing conflicts as well as to equip nondeployed units. Originally, the Army estimated that equipment reset would cost $12 billion to $13 billion per year. Reset costs have grown significantly from about $3.3 billion in fiscal year 2004 to more than $17 billion in fiscal year 2007. Our analysis of Army data shows that the Army is likely to require at least $118.5 billion from fiscal years 2004-2013 (see table 1). The Army has reported that future reset costs will depend on the amount of forces committed, the activity level of those forces, and the amount of destroyed, damaged, or excessively worn equipment. As a result, these costs are uncertain. The Army has stated that it will require reset funding for the duration of operations and estimates that it will request reset funding for an additional 2-3 years after operations cease.
As operations continue in Iraq and Afghanistan and the Army’s equipment reset requirements increase, the potential for reset costs to significantly increase in future DOD supplemental budget requests also increases. We have also reported that Congress may not have the visibility it needs to exercise effective oversight and to determine if the amount of funding appropriated for equipment reset has been most appropriately used for the purposes intended because the Army was not required to report the obligation and expenditure of funds appropriated for reset in the procurement accounts at a level of detail similar to the level of detail reported in the operation and maintenance accounts. Given the substantial amount of equipment deployed overseas, the uncertain length of operations in Iraq and Afghanistan, and the lack of transparency and accountability, it is unclear how much funding the Army will need to reset its equipment. While Army officials recently told us that they have begun to report procurement obligations and expenditures at a level of detail similar to the level of detail reported for operation and maintenance accounts, officials in the Office of the Secretary of Defense believe that all of the Army’s equipping initiatives, including reset, are part of a larger Army equipping effort and they do not believe that the department needs to track these initiatives separately. We continue to believe that tracking the cost of reset is key to identifying the total cost of the Army equipment plan. In December 2006, the Army decided to remove equipment from its prepositioned sets stored on ships in order to accelerate the creation of two additional brigade combat teams to provide support for ongoing operations. This equipment supplemented equipment prepositioned in Southwest Asia, equipment which has been depleted and reconstituted several times over the course of these operations. 
It is still unclear when these critical reserve stocks will be reconstituted or how much this will cost; however, the Army has estimated it will require at least $10.6 billion to complete this reconstitution effort through 2013 (see table 1). Army officials stated that prepositioned equipment sets worldwide would be reconstituted in synchronization with the Army’s overall equipping priorities, when properly funded, and in accordance with the Army’s prepositioning strategy, known as the Army Prepositioned Strategy 2015. We recommended in our September 2005 and February 2007 reports that DOD develop a coordinated, department-wide plan and joint doctrine for the department’s prepositioning programs. Synchronizing a DOD-wide strategy with the Army’s prepositioning strategy would ensure that future investments made for the Army’s prepositioning program would properly align with the anticipated DOD-wide strategy. Without a DOD-wide strategy, DOD risks inconsistencies between the Army’s and the other services’ prepositioning strategies, which may result in duplication of efforts and resources. In addition, we could not determine the extent to which the reconstitution of the Army’s prepositioned stocks is reflected in DOD funding requests nor identify the cost estimates for restoring these prepositioned equipment sets. For example, Army officials could not provide a breakdown of the $3.3 billion cost estimate in the fiscal year 2007 supplemental budget request to reconstitute the prepositioned stocks removed from ships. Army officials stated that the estimated cost to fully implement the prepositioning strategy would total somewhere between $10.6 billion and $12.8 billion between fiscal years 2008 and 2013. However, DOD’s funding requests for reconstitution are difficult to evaluate because they may also include funding for other equipment-related funding requests, such as Army modularity, equipment modernization, equipment reset, or requests to fill equipment shortages. 
Army officials stated that separating prepositioning requirements from other requirements in their funding requests is complicated, and they do not plan to separately track funds set aside for the reconstitution of their prepositioned equipment sets. A common theme in our work has been the need for DOD and the Army to take a more strategic approach to decision making that promotes transparency and ensures that programs and investments are based on sound plans with measurable, realistic goals and time frames, prioritized resource needs, and performance measures to gauge progress. Our prior work has found that a lack of clear linkages between overall Army equipment requirements and funding needs is an impediment to effective oversight of the Army’s equipping plans. Further, transparency of the funds requested for Army equipment is hindered because Army funding needs are scattered across multiple funding requests. Finally, we have suggested a number of actions to enhance transparency and reduce the risks associated with Army equipping initiatives. However, many of these recommendations have not been adopted and, as a result, the Army faces uncertainties going forward. The Army has not clearly linked its overall equipment requirements with funding requests. Our work has shown that major transformation initiatives have a greater chance of success when their funding plans are transparent, analytically based, executable, and link to the initiative’s implementation plans. A lack of linkage between overall Army equipment requirements and funding plans impedes oversight by DOD and congressional decision-makers because it does not provide a means to measure the Army’s progress toward meeting long-term Army equipment goals or to inform decisions that must be made today. 
Our work on modular restructuring has shown that the Army has substantially revised its timeline for fully equipping units from an original date of 2011 to 2019 but has not provided evidence of its overall equipment requirements or specific plans, milestones, or resources required to fully equip the modular force. Meanwhile, the Army is working to expand its force beyond its original modular restructuring goals, which will lead to billions of additional dollars in requirements to equip new modular units. The Army also does not know if its existing prepositioned equipment requirements reflect actual needs because DOD has not formulated a DOD-wide prepositioning strategy to guide the Army’s prepositioning strategy. Army officials stated that its worldwide prepositioned equipment sets would be reconstituted in synchronization with the Army’s overall equipping priorities and in accordance with its Army Prepositioned Strategy 2015. However, the Army had not established those priorities as of December 2007. Additionally, the Army Prepositioned Strategy 2015 is not correlated with a DOD-wide prepositioning strategy, because, according to DOD officials, a DOD-wide prepositioning strategy does not exist. DOD officials explained that the services are responsible for equipping strategies and that the Joint Staff conducts assessments of the services’ prepositioning programs to determine their relationship within the DOD-wide strategic context. We continue to believe, however, that a DOD-wide strategy is needed in addition to an Army strategy. Finally, the Army’s reset implementation strategy is based on resetting equipment that it expects will be returning in a given fiscal year, and not on targeting shortages of equipment for units preparing for deployment to Iraq and Afghanistan. 
According to the Army’s Army Force Generation model implementation strategy and reset implementation guidance, the primary goal of reset is to prepare units for deployment and to improve next-to-deploy units’ equipment-on-hand levels. Until the Army’s reset implementation strategy targets shortages of equipment needed to equip units preparing for deployment, the Army will be unable to minimize operational risk by ensuring that the needs of deploying units can be met. Oversight of the Army’s key equipment initiatives has been complicated by multiple funding requests. DOD requested operation and maintenance funds for Army prepositioned equipment in both the fiscal year 2008 annual budget request (about $156 million) and the fiscal year 2008 request related to the Global War on Terror (about $300 million). Army officials stated that there could be some overlap between funds requested for reconstitution of prepositioned equipment in the annual budget request and the reset of prepositioned equipment in the supplemental request. Without integrating the full costs for Army equipment needs in a single budget, decision makers may have difficulty seeing the complete picture of the Army’s funding needs and the potential for trade-offs among competing defense priorities. We have recommended a number of actions intended to improve management controls and enhance transparency of funding requests associated with modular restructuring, force expansion, equipment reset, and prepositioning of equipment stock. However, many of these recommendations have not been adopted because the Army has not developed concrete plans to address the recommendations and in some cases, disagreed with our recommendations. As a result, senior DOD leaders and Congress may not have sufficient information to assess progress and fully evaluate the Army’s funding requests. 
Our prior reports on the Army’s modular restructuring initiative recommended that the Army improve the transparency of its equipment requirements and funding plans as well as its plan to assess the modular unit designs. In recent years, we recommended the Army develop a comprehensive strategy and funding plan that details the Army’s equipping strategy, compares equipment plans with modular unit designs, identifies total funding needs, and includes a mechanism for measuring progress in staffing and equipping its modular units. We have also recommended that the Army develop a comprehensive assessment plan that includes steps to evaluate modular units in full-spectrum combat. In January 2008, we recommended that DOD provide Congress with additional information on the Army’s expansion initiative, including an updated funding plan, and that the Army maintain a transparent audit trail including documentation of the steps used to develop its expansion funding plan. We have also made recommendations intended to address short- and long-term operational risks associated with Army equipment reset and prepositioning strategies. Regarding the Army’s equipment reset plans, we recommended in September 2007 that the Army ensure that its priorities address equipment shortages in the near term to minimize operational risk and ensure that the needs of units preparing for deployment can be met. Finally, with regard to prepositioned equipment, we recommended the establishment of a DOD-wide prepositioning strategy to ensure that future Army prepositioning investments are aligned with DOD’s prepositioning goals. We continue to believe that our recommendations have merit, though many of these recommendations have not been adopted and, as a result, the Army faces uncertainties going forward. Restoring equipment readiness across the Army will require billions of dollars in maintenance and procurement funding, but the full cost—and how long it will take—are still unclear.
The uncertainty about the magnitude and duration of our military commitments further complicates and deepens the equipping challenges facing the Army. Moreover, growing fiscal problems facing the nation may lead to growing pressure on defense budgets. Such uncertainty about the future underscores the need for sound management approaches like setting goals, establishing clear measures to track progress, and identifying full costs. Until these steps are taken, decision makers will lack key information needed to gauge interim progress and make informed choices aimed at balancing the need to restore near-term readiness while positioning the Army for the future. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to respond to any question you or other Members of the Committee or Subcommittee may have. For questions regarding this testimony, please call Janet A. St. Laurent at (202) 512-4402 or [email protected]. Key contributors to this testimony were John Pendleton, Director; Wendy Jaffe, Assistant Director; Kelly Baumgartner; Grace Coleman; Barbara Gannon; David Hubbell; Kevin O’Neill; Steve Rabinowitz; Terry Richardson; Donna Rogers; Kathryn Smith; Karen Thornton; and Suzanne Wren. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. The Nation’s Long-Term Fiscal Outlook: January 2008 Update. GAO-08-591R. Washington, D.C.: March 21, 2008. Military Readiness: Impact of Current Operations and Actions Needed to Rebuild Readiness of U.S. Ground Forces. GAO-08-497T. Washington, D.C.: February 14, 2008. Defense Logistics: Army Has Not Fully Planned or Budgeted for the Reconstitution of Its Afloat Prepositioned Stocks. GAO-08-257R. Washington, D.C.: February 8, 2008. Force Structure: Need for Greater Transparency for the Army’s Grow the Force Initiative Funding Plan. GAO-08-354R. Washington, D.C.: January 18, 2008.
Force Structure: Better Management Controls Are Needed to Oversee the Army’s Modular Force and Expansion Initiatives and Improve Accountability for Results. GAO-08-145. Washington, D.C.: December 14, 2007. Defense Logistics: Army and Marine Corps Cannot Be Assured That Equipment Reset Strategies Will Sustain Equipment Availability While Meeting Ongoing Operational Requirements. GAO-07-814. Washington, D.C.: September 19, 2007. Military Training: Actions Needed to More Fully Develop the Army’s Strategy for Training Modular Brigades and Address Implementation Challenges. GAO-07-936. Washington, D.C.: August 6, 2007. Defense Logistics: Improved Oversight and Increased Coordination Needed to Ensure Viability of the Army’s Prepositioning Strategy. GAO-07-144. Washington, D.C.: February 15, 2007. Defense Logistics: Preliminary Observations on the Army’s Implementation of Its Equipment Reset Strategies. GAO-07-439T. Washington, D.C.: January 31, 2007. Reserve Forces: Actions Needed to Identify National Guard Domestic Equipment Requirements and Readiness. GAO-07-60. Washington, D.C.: January 26, 2007. Force Structure: Army Needs to Provide DOD and Congress More Visibility Regarding Modular Force Capabilities and Implementation Plans. GAO-06-745. Washington, D.C.: September 6, 2006. Force Structure: Capabilities and Cost of Army Modular Force Remain Uncertain. GAO-06-548T. Washington, D.C.: April 4, 2006. Defense Logistics: Preliminary Observations on Equipment Reset Challenges and Issues for the Army and Marine Corps. GAO-06-604T. Washington, D.C.: March 30, 2006. Reserve Forces: Plans Needed to Improve Army National Guard Equipment Readiness and Better Integrate Guard into Army Force Transformation Initiatives. GAO-06-111. Washington, D.C.: October 4, 2005. Force Structure: Actions Needed to Improve Estimates and Oversight of Costs for Transforming Army to a Modular Force. GAO-05-926. Washington, D.C.: September 29, 2005.
Defense Logistics: Better Management and Oversight of Prepositioning Programs Needed to Reduce Risk and Improve Future Programs. GAO-05-427. Washington, D.C.: September 6, 2005. Defense Management: Processes to Estimate and Track Equipment Reconstitution Costs Can Be Improved. GAO-05-293. Washington, D.C.: May 5, 2005. Force Structure: Preliminary Observations on Army Plans to Implement and Fund Modular Forces. GAO-05-443T. Washington, D.C.: March 16, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The high pace of overseas operations is taking a heavy toll on Army equipment. Harsh combat and environmental conditions over sustained periods of time have exacerbated equipment repair, replacement, and recapitalization problems. The Army has also taken steps to restructure its forces before implementing its longer term transformation to the Future Combat System. To support ongoing operations and prepare for the future, the Army has embarked on four key initiatives: (1) restructuring from a division-based force to a modular brigade-based force, (2) expanding the Army by adding about 74,000 people and creating new units, (3) repairing, replacing, and recapitalizing new equipment through its reset program, and (4) replacing equipment borrowed from its pre-positioned equipment sets around the world. Since 2004, Congress has provided billions of dollars to support the Army's equipping needs. GAO has issued many reports on the Army's efforts to equip modular units, expand the Army, reset equipment, and manage and replace prepositioned equipment.
This statement, which draws largely on these reports, will address (1) the equipment-related cost of these initiatives, and (2) the management challenges facing the Army and the actions needed to improve its implementation of these initiatives. GAO is issuing a separate statement today on the Future Combat System (GAO-08-638T). Restructuring and rebuilding the Army will require billions of dollars for equipment and take years to complete; however, the total cost is uncertain. Based on GAO's analysis of Army cost estimates and cost data, it appears that the Army's plans to equip modular units, expand the force, reset equipment, and replace prepositioned equipment are likely to cost at least $190 billion through fiscal year 2013. However, these estimates have some limitations and could change. Further, the Army has stated it plans to request additional funds to address equipment shortfalls in modular units through fiscal year 2017. Several factors are contributing to the uncertainties about future costs. First, the Army's $43.6 billion funding plan for equipping modular units was based on preliminary modular unit designs and did not fully consider the needs of National Guard units. Second, the Army expects to need $18.5 billion for equipment to expand the force but has not clearly documented this estimate. Third, costs to reset equipment may total at least $118 billion from fiscal years 2004-2013 but may change because they are dependent on how much equipment is lost, damaged, or worn beyond repair during continuing operations in Iraq and Afghanistan and how long these operations continue.
Fourth, the Army believes it will need at least $10.6 billion to replace pre-positioned equipment that was taken out of storage to support ongoing operations, but this amount is an estimate and DOD's overall strategy for prepositioned equipment has not yet been issued. Given the magnitude of these initiatives and potential for costs to change, DOD will need to carefully monitor the projected costs of these initiatives so that it can consider tradeoffs and allocate funding to balance the Army's equipping needs for the next decade and longer term transformation goals. A common theme in GAO's work has been the need for DOD and the Army to take a more strategic approach to decision making that promotes transparency and ensures that programs and investments are based on sound plans with measurable, realistic goals and time frames, prioritized resource needs, and performance measures to gauge progress. GAO's work on modular restructuring has shown a lack of linkage between the Army's funding requests and equipment requirements. This lack of linkage impedes oversight by DOD and Congress because it does not provide a means to measure the Army's progress in meeting modular force equipment requirements or inform budget decisions. Oversight of Army initiatives has also been complicated by multiple funding requests that make it difficult for decision makers to understand the Army's full funding needs. GAO has recommended a number of actions to improve management controls and enhance transparency of the Army's plans for equipping modular units, expanding the force, resetting equipment, and replacing prepositioned equipment. However, many of these recommendations have not been fully implemented or adopted.
For example, until the Army provides a comprehensive plan for its modular restructuring and expansion initiatives that identifies progress and total costs, decision makers may not have sufficient information to assess progress and allocate defense resources among competing priorities.
Agencies are responsible for managing their vehicle fleets in a manner that allows them to fulfill their missions and meet various federal requirements. For example, agencies must determine the number and type of vehicles they need and how to acquire them, including whether to own or lease them. Various statutes, executive orders, and policy initiatives direct federal agencies to, among other things, collect and analyze data on costs, reduce fuel consumption, and eliminate non-essential vehicles. In addition, GSA has issued federal fleet-management regulations that include requirements regarding agencies’ fleet-management information systems, vehicle fuel efficiency, and vehicle utilization, among other things. GSA has also issued guidance to help agencies manage their fleets effectively and meet federal requirements, including guidance on assessing vehicle needs, using alternative fuel vehicles, and potential cost-saving techniques. Federal agencies may approach GSA to lease some or all of the vehicles they determine necessary to meet their mission and program needs. Supported by a network of regional Fleet Management Centers, GSA manages the federal government’s vehicle-leasing program (called GSA Fleet), which leases vehicles to over 75 federal agencies. The size of the federal leased fleet ranged from about 195,000 vehicles in fiscal year 2008 to about 199,000 vehicles in fiscal year 2011, but declined to about 190,000 vehicles in fiscal year 2012. GSA’s leasing rates, terms, and services help agencies keep fleet costs down in a variety of ways. For example, GSA procures the vehicles it leases at a discount and passes those savings on to its customers, provides agencies with data analyses that can be used to eliminate unnecessary vehicles, and identifies fraud, waste, and abuse related to leased vehicles.
However, we identified two areas where GSA’s rates and terms have not encouraged agency efforts to reduce fleet costs. First, GSA’s monthly mileage rate, which covers agency fuel costs, does not provide incentives for agencies to reduce some fuel costs, such as costs associated with idling. Second, lack of clear GSA guidance on what constitutes excessive wear and tear of leased vehicles can limit the ability of agencies to determine whether it is less expensive to lease or own vehicles. GSA is currently taking steps to develop such guidance. GSA’s leasing rates, terms, and services help agencies minimize fleet costs in various ways, as discussed below. GSA officials and our panels of civilian and military federal fleet managers told us that GSA’s vehicle lease rates are lower, for the most part, than those in the commercial sector and provide a more economical choice for federal agencies. Although some agencies may choose to lease from commercial vendors, only about 3 percent of federally leased vehicles, many of which are not offered by GSA—such as utility trucks with cranes and luxury executive vehicles—are leased through the commercial sector. According to GSA officials, the agency keeps leasing rates low by minimizing vehicle acquisition costs, maximizing resale values, and not having to make a profit. They pointed out that the agency has the ability to buy vehicles at a discount, at prices that average 17 percent below invoice, because it buys in volume from manufacturers, about 50,000 vehicles annually. GSA is then able to pass these savings on to its customers and eventually resells the vehicles at a point when their resale value is still high. According to GSA officials, the agency’s vehicle maintenance program also contributes to its low lease rates by ensuring that vehicles are maintained in good condition, decreasing the need for costly maintenance and repair. GSA’s vehicle lease terms can help keep down the cost of leasing to agencies.
According to GSA officials, its vehicle lease terms, which include coverage of routine maintenance and repair, help ensure that vehicles receive proper maintenance and repair and encourage agencies to take care of their leased vehicles. According to GSA officials, ensuring vehicles received proper maintenance would be more difficult if GSA left it up to the leasing agencies. GSA officials believe these terms maximize the resale value of the vehicles for GSA, which, as noted, can help to keep overall leasing costs down. In addition, GSA offers short-term vehicle rentals for up to 120 days. Such leasing arrangements allow agencies to meet short-term vehicle needs rather than lease vehicles for longer periods when not needed or rent them from commercial vendors, which can be more expensive. Short-term rentals are commonly used for special events, such as conventions, or seasonal needs. Federal fleet managers told us that they found GSA’s analysis of data on their leased fleet, made possible through GSA’s fleet card, to be helpful in identifying underutilized leased vehicles within their fleet that can be disposed of or shared. For example, GSA officials told us that if the agency identified two underused vehicles in the same location travelling 5,000 miles annually, when the performance measure for full vehicle usage for each vehicle was 10,000 miles, it would suggest that the agency consider eliminating one of these vehicles. Some federal fleet managers also noted that GSA fleet data analysis helps agencies identify when inefficient driving practices may be occurring, particularly related to fuel purchases, within their fleets. According to GSA officials, GSA’s fleet service representatives analyze fuel use data to identify when vehicles record low miles per gallon, which may indicate that a vehicle idles too much or that the vehicle has an engine problem, and work with the agency to resolve any issues found.
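The utilization screen described above can be sketched in a few lines. The vehicle records, vehicle categories, full-use mileage measures, and the two-vehicle consolidation rule below are all illustrative assumptions, not GSA's actual data model or thresholds.

```python
# Hypothetical sketch: flag leased vehicles driven less than the
# full-use measure for their category, grouped by location, so that
# co-located low-mileage vehicles can be considered for elimination
# or sharing. All data and thresholds are invented for illustration.
from collections import defaultdict

FULL_USE_MILES = {"sedan": 10_000, "light_truck": 12_000}  # assumed measures

vehicles = [
    {"id": "V1", "category": "sedan", "location": "Denver", "annual_miles": 5_000},
    {"id": "V2", "category": "sedan", "location": "Denver", "annual_miles": 5_200},
    {"id": "V3", "category": "light_truck", "location": "Boise", "annual_miles": 14_000},
]

def underutilized(fleet, measures):
    """Return IDs of vehicles below their full-use measure, keyed by location."""
    by_location = defaultdict(list)
    for v in fleet:
        if v["annual_miles"] < measures[v["category"]]:
            by_location[v["location"]].append(v["id"])
    # Locations with two or more low-mileage vehicles are consolidation candidates.
    return {loc: ids for loc, ids in by_location.items() if len(ids) >= 2}

print(underutilized(vehicles, FULL_USE_MILES))  # {'Denver': ['V1', 'V2']}
```

Running the sketch flags the two Denver sedans, the same pattern as GSA's example of two co-located vehicles each driven 5,000 miles against a 10,000-mile measure.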
According to one fleet manager on our military panel, GSA identified excessive idling in the agency’s fleet and worked collaboratively to curb it. GSA’s Loss Prevention Team is a group within GSA Fleet whose mission is to prevent misuse and abuse within GSA’s vehicle leasing program. The Loss Prevention Team has a memorandum of understanding with the GSA Office of Inspector General (OIG) that specifies services the OIG is to provide for fleet charge card cases, such as coordinating the initiation of investigations. GSA’s OIG is an independent unit that is responsible for promoting economy, efficiency, and effectiveness and detecting and preventing fraud, waste, and mismanagement in GSA’s programs and operations. The amounts involved in fleet charge card fraud cases from fiscal years 2009 to 2012 ranged widely from $66 to $299,000. The larger amount involved a case in which an individual was found to have stolen and used multiple GSA fleet cards to purchase and then resell gasoline. In partnership with the Department of Justice, GSA seeks the prosecution of individuals believed to have committed fraud and seeks to recoup the money. In addition to the identification of fraud, GSA’s leasing services can help with reducing the costs of accident management, according to some federal fleet managers. For example, one federal fleet manager noted that GSA’s management of the fault resolution process when government vehicles are involved in accidents with private vehicles helps reduce costs that agencies incur from accidents. GSA seeks to ensure that, when government drivers are not at fault, the party responsible for the accident reimburses the federal government. GSA’s vehicle maintenance program also helps reduce agency fleet costs, according to federal fleet manager panelists. GSA has national agreements with major maintenance and tire companies to provide discounted maintenance services and vehicle parts.
A GSA automotive technician, who is responsible for ensuring that the repairs are necessary and appropriately priced, must validate all repairs over $100. According to GSA officials, its overall management of the leasing program—including its approaches for acquiring, maintaining, and replacing vehicles and the various services it offers to its customer agencies—provides economies of scale and a “unified way of conducting business” that ultimately reduces costs. For example, according to these officials, their centralized management of the leased fleet provides an enhanced ability to detect waste, fraud, and abuse related to leased vehicles and helps prevent duplicative fleet management operations in federal agencies that can be more costly. Under GSA’s leasing rate structure, the monthly mileage fee charged to agencies covers fuel costs, as well as other variable costs, such as those for vehicle maintenance. A customer agency’s mileage fee, which is determined by the miles its leased vehicles travel and GSA’s mileage rate per category of vehicle leased, may not fully reflect some fuel costs not associated with miles traveled. These include costs associated with some driver behaviors such as idling, speeding, and fast stops and starts. GSA bases its mileage rate partly on the average cost of fuel per mile across all agencies for each category of vehicle available for leasing. According to GSA officials, the rate is designed to cover the leasing program’s overall variable costs, which GSA pays for, and is a good approximation of these costs. The fee each agency pays does not necessarily reflect the fuel it actually uses, however, as the rate is not designed to capture individual agencies’ fuel costs. 
Specifically, drivers of vehicles leased by some agencies may engage in behaviors that increase fuel use, such as idling, speeding, and fast stops and starts, to a greater extent than drivers in other agencies, but all agencies would pay the same rate per mile for each category of vehicle leased. For example, according to Air Force officials, GSA identified excessive idling in leased vehicles at Dover Air Force Base and worked with the Air Force to curb it. In addition, vehicles used by DHS's Customs and Border Protection (CBP) in the desert on the southern border of the United States may need to idle often to keep the occupants of the vehicle cool during hot days. Yet with GSA's monthly mileage rate, CBP generally pays based on the number of miles traveled, not the actual amount of fuel consumed by idling. According to GSA officials, the agency occasionally adds a surcharge to agency monthly mileage rates for excessive idling, which GSA evaluates on a month-by-month basis. GSA officials have acknowledged that its mileage rate does not capture some fuel costs at the customer agency level, such as those associated with each agency's idling or speeding. The fuel costs of GSA's leasing program are significant; they totaled about $431 million in fiscal year 2012. GSA has identified reducing the use of resources and environmental impacts as an agency goal, but its monthly mileage rate structure does not provide agencies with incentives to reduce the types of fuel use, cited above, that are not reflected in distance traveled. According to economic principles, when the price paid for a good does not reflect the full costs of that good, consumers will tend to use more of the good than is optimal from a societal standpoint. Therefore, under the current leasing rate structure, in which some agencies may not bear the full cost of their fuel consumption, agencies may consume fuel at levels that are economically inefficient.
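The cross-subsidy described above can be illustrated with a small calculation. All figures below (the per-mile rate, fuel prices, and fuel economies) are hypothetical numbers chosen only to show the mechanics of a flat mileage rate; they are not GSA's actual rates:

```python
# Hypothetical illustration of a flat per-mile leasing rate.
# All numbers are invented for the example; GSA's actual rates differ.
MILEAGE_RATE = 0.20  # dollars per mile for one vehicle category

def mileage_fee(miles, rate=MILEAGE_RATE):
    """Fee an agency pays: miles traveled times the category rate."""
    return miles * rate

# Two agencies drive the same distance, but one idles heavily,
# lowering its effective fuel economy. Both pay the same fee.
miles = 10_000
fuel_cost_efficient = miles / 25 * 3.50  # 25 mpg at $3.50/gal
fuel_cost_idler = miles / 18 * 3.50      # idling cuts effective mpg to 18

fee = mileage_fee(miles)
print(fee)                        # both agencies pay 2000.0
print(fuel_cost_efficient)        # 1400.0
print(round(fuel_cost_idler, 2))  # 1944.44 -- extra fuel cost the fee ignores
```

Under these assumed numbers, the heavy-idling agency imposes roughly $544 more in fuel costs than the efficient one yet pays an identical fee, which is the incentive gap the report describes.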
Also, as discussed later in this report, agencies may lack incentives to adopt telematics, which could lead to savings in fuel and other costs under certain circumstances, since their monthly leasing fees may not fully reflect any cost savings they achieve. Principles for designing government fees suggest that having each agency pay for the fuel it actually uses could foster greater efficiency by increasing (1) awareness of the costs of fuel and (2) incentives to reduce fuel costs not reflected in miles traveled. Some federal fleet managers on our panels acknowledged that paying for their own fuel might provide more of an incentive to reduce fuel use in leased vehicles. However, GSA officials and some panelists stated that they preferred the current structure. According to GSA officials and these panelists, including fuel costs as part of the mileage rate aids the customer agency's budgeting because GSA assumes the risk of fuel price increases, allowing agencies to reduce uncertainties in managing fleet costs. GSA sets its mileage rate at the beginning of the year based on what it estimates fuel will cost over the course of the year. GSA, not the agencies, generally bears the cost burden of increases in fuel prices. According to GSA officials, the agency imposes a surcharge on agencies if fuel prices rise to such an extent that GSA believes it cannot absorb the unanticipated level of costs and also issues a rate reduction when fuel prices significantly decline in a given fiscal year. In addition, GSA officials noted that its coverage of fuel costs reduces the fleet-management administrative burden on agencies and prevents duplication of management effort on the part of GSA and agencies. GSA officials also cited reasons that, in their view, changes in the rate structure may not be needed or may not lead to reduced fuel costs.
According to these officials, improving agency incentives for reducing fuel use is not needed because agencies are already legally required to reduce fuel consumption. Executive Order 13514, issued in 2009, directs federal agencies operating a fleet of at least 20 motor vehicles to reduce petroleum consumption by a minimum of 2 percent annually through the end of 2020, from a 2005 baseline. In addition, the Energy Independence and Security Act of 2007 requires federal agencies to achieve at least a 20 percent reduction in annual petroleum consumption by 2015 based on a 2005 baseline. Pub. L. No. 110-140, § 142; Exec. Order No. 13514, 72 Fed. Reg. 3919. These officials also said that GSA uses data on agency fuel purchases to identify when fuel use is well above expected levels and then takes appropriate actions, including adding a fuel surcharge or determining if excessive fuel use is due to fraud, waste, or abuse. These officials noted, however, that the addition of a fuel surcharge has been an infrequent occurrence. In addition, telematics, under certain conditions, could help agencies identify and reduce driver behaviors that cause excessive fuel use, but agencies do face some challenges in adopting these technologies. We have not fully evaluated the pros and cons of changing GSA's rates so that agencies pay for the fuel they actually consume, and according to GSA officials, no studies have been performed on its leasing rate structure. In a May 2008 report, we found that there are trade-offs to consider in designing government fees and that every design will have pluses and minuses. In addition to efficiency, which we have discussed, we found that considerations in designing fees include equity (meaning that everyone pays a fair share), the extent to which collections cover costs, and the administrative burden on agencies.
While GSA has flexibility in administering its rate structure, GSA's current leasing rate structure may not be fully equitable, as agencies that use fuel more efficiently are to some extent subsidizing agencies that are less efficient, because all agencies are charged the same mileage rate per category of vehicle. While GSA is required to collect adequate fees to cover the costs of its leasing program, the extent of the administrative burden for GSA and its customers of the current rate structure versus one in which agencies pay for their actual fuel costs is unclear and would depend on how any changes were implemented. Nevertheless, under the current rate structure, some excessive fuel use due to driver behaviors such as idling and speeding may be occurring, resulting in higher costs to taxpayers than would be the case if agencies paid for actual fuel consumed and therefore had increased incentives to minimize fuel use. Federal fleet managers on both our civilian and military panels told us that they sometimes receive large unexpected charges from GSA, amounting to as much as thousands of dollars, during vehicle lease terms for damage done to vehicles beyond normal wear and tear. In fiscal year 2012, GSA issued damage charges for excessive wear and tear to 40,802 federal vehicles, or about 21 percent of its leased fleet, totaling about $18.5 million. For vehicles charged for excessive damage, the average bill was about $453 per vehicle. The highest bill was $10,400, according to GSA. Fleet managers told us that these charges could considerably increase the cost of managing a leased fleet. According to some of our panelists, the lack of a clear GSA policy or guidance defining excessive wear and tear limits an agency's ability to decide whether it is more economical to lease or own vehicles.
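The per-vehicle average cited above follows directly from the two reported totals, as a quick consistency check shows:

```python
# Consistency check of the fiscal year 2012 damage-charge figures
# cited in the text: about $18.5 million across 40,802 vehicles.
total_charges = 18_500_000   # total excessive wear-and-tear charges, dollars
vehicles_billed = 40_802     # leased vehicles that received a damage bill

average_bill = total_charges / vehicles_billed
print(round(average_bill))   # about 453 dollars, matching the reported average
```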
Without this information, agencies may be hindered in keeping overall fleet costs down because it is more difficult to estimate life-cycle costs for leased vehicles, and such estimates serve as a basis for agencies' decisions about whether to lease or own a vehicle. Fleet managers told us that had they known that certain wear and tear would result in post-lease charges, they would have chosen to own the vehicle rather than lease it through GSA because ownership would have had lower life-cycle costs. Our past work has found, and GSA's fleet management guidance states, that life-cycle cost analysis is an important practice to help manage fleet costs and determine whether to purchase or lease vehicles. Furthermore, federal fleet managers on both panels told us that what constitutes excessive wear and tear is often interpreted differently by GSA's local fleet service representatives, who are responsible for making these determinations when agencies turn in vehicles. Federal fleet managers on the military panel proposed that GSA develop a policy that would clarify and standardize the definition of excessive wear and tear, making it less subject to interpretation by regional fleet service representatives. Appropriate policies are a useful internal control for agencies to help ensure that decisions and practices are applied consistently. During most of our review, GSA policies and guidance for its vehicle-leasing program did not include a definition of excessive wear and tear. GSA officials told us that the concept of normal wear and tear is discussed internally during ongoing regional and headquarters meetings, such as fleet service representative and Federal Fleet Policy Council meetings, to aid in delivering consistent practices.
In addition, GSA explained that GSA publications such as the Guide to Your GSA Fleet Vehicle serve as reference material for agencies as well as fleet service representatives, but a definition of excessive wear and tear is not provided in this guide. In March 2014, GSA completed development of guidance for fleet service representatives containing details on what constitutes normal and excessive wear and tear to leased vehicles. According to GSA officials, the agency will provide internal training as well as guidance to its customers on this issue through the spring and summer of 2014. The experts we consulted agreed that in some cases, telematics could facilitate cost savings by providing fleet managers with information needed to reduce fleet size, fuel use, misuse of vehicles, and unnecessary maintenance. Federal fleet managers on our two panels agreed that telematics can produce cost savings under certain circumstances and that GSA should do more to support telematics use, including lowering costs of telematic devices and providing information on agencies’ experiences in using telematics in their fleets. GSA is taking steps to reduce telematics’ costs, but does not currently collect and share information about agencies’ experiences with telematics. According to all of the experts we consulted, telematics have the potential, under certain circumstances, to provide cost savings to vehicle fleets. The experts identified various areas in which fleet managers can achieve cost savings, including fleet utilization, fuel use, misuse of vehicles, and maintenance (see fig. 1). Fleet managers can achieve savings by analyzing the data provided by telematics devices and taking actions to reduce costs based on those data. For example, managers can reduce fleet size by eliminating vehicles with insufficient use and provide feedback to drivers to reduce wasteful, abusive, or dangerous behaviors such as speeding or unauthorized personal use. 
Fleet managers can also tailor vehicle maintenance based on improved knowledge of the vehicle’s actual condition and avoid unnecessary preventative maintenance. One expert, who is a fleet manager, reported that telematics helped him reduce his fuel costs by 8 to 15 percent among sections of his fleet with almost universal telematics installation, though he cautioned that these vehicles received telematics because they were the most likely to achieve savings. Another expert reported that telematics helped him reduce his fleet size by 7 percent over 60 months among the vehicles with telematics installed. (See sidebars for additional information on the experiences with telematics of selected experts who manage fleets). Experts cautioned that it is not always possible to calculate a comprehensive return on investment. Experts told us that it can be challenging to quantify cost savings when a comparative baseline is not available, telematics are part of a larger improvement effort, or the type of savings are difficult to quantify financially (such as savings associated with safety improvements). For example, one expert noted that he was unable to calculate fuel savings because the devices had been installed on new plug-in hybrid vehicles; he was unable to differentiate between the fuel savings achieved by using plug-in hybrids and the fuel savings from actions taken in response to telematics data. Furthermore, experts also noted that the potential return on investment from the adoption of any telematic technology will vary and that telematics will not achieve cost savings for every fleet. For example, two experts explained that telematics would not provide a return on investment for their own fleets because of how their vehicles are used. One noted that the vehicles are not used on a daily basis, so the benefits would not justify the costs. 
In addition, two experts explained that employees at their respective companies are authorized to use the vehicles for personal use, so after-market tracking devices would likely face opposition because of privacy concerns. Experts also noted that telematics can be a legal liability if information is gathered but not acted upon. For example, if telematics data show that a driver regularly speeds but no corrective action is taken to stop this behavior, the employer may have a greater liability risk in the event that the driver is involved in an accident, according to one expert. The experts we interviewed highlighted four key factors that influence telematics' potential to facilitate cost savings in vehicle fleets: Cost of the technology selected: The experts emphasized that the cost of any telematics program must be accounted for when considering overall cost savings. As a result, costs must be carefully evaluated in comparison to projected savings to avoid a net loss. This comparison can be challenging because the term "telematics" encompasses a broad array of technologies, which results in a wide range of associated costs. For example, telematics can include original equipment installed by the manufacturer, after-market add-on systems, or mobile device applications and programs. Further, data can be transmitted via satellite or cellular connections on a regular basis or when a vehicle passes a fixed-data download station. Fixed download stations pose mostly upfront, fixed costs, whereas the cost for a satellite connection is typically levied in ongoing monthly data charges. In addition, fleets may rent telematic devices for a short period of time to obtain a snapshot of usage data, or may select a long-term contract for ongoing monitoring. Various combinations of device, data access, and contract type will have different costs, which in turn influence the potential return on investment.
Fleet characteristics: Experts reported that the characteristics of the fleet also affect the return on investment. For example, fleets that idle frequently will have more opportunity for fuel savings than fleets with carefully controlled fuel consumption. In addition, the number of miles driven may influence how much fuel can be saved. Experts also emphasized that the technology must be aligned with the fleet’s characteristics, or the likelihood for savings will be reduced or eliminated. For example, some kinds of telematic devices depend on satellite signals that can be impaired by tall buildings in urban areas. Other devices depend on wireless connectivity that may be limited in rural locations. Still others rely on all vehicles in a fleet returning to or passing by a central location on a regular basis. If data are received sporadically, fleet managers will have less detailed information on which to act, which reduces the potential for cost savings. More information on the fleet characteristics that experts noted could influence the cost savings potential of telematics can be found in appendix IV. Management and organizational support: The experts we consulted reported that upper management support, fleet managers’ buy-in, and organizational culture will influence the degree to which telematics can facilitate cost savings, since these factors can either support or hinder the cost-savings actions taken in response to telematics data. The experts said that upper management support is necessary to secure funding, change policies in response to problems identified through telematics data, and ensure that corrective actions are taken in a timely manner. Moreover, a fleet manager will need to have the time, ability, and desire to conduct analyses of the data to understand what changes are needed, unless the telematic device includes analytical support. In addition, some organizations may have cultures and structures that either embrace or reject monitoring efforts. 
For example, one expert noted that some unions support monitoring because of the safety benefits and liability protection, while other unions resist monitoring to prevent disciplinary actions against their members. Information technology systems: Experts also highlighted the importance of information technology systems that can efficiently collect and distribute the data provided by telematics devices. They noted that cost-saving changes can be more effectively implemented when the data gathered by telematics are readily accessible and integrated with all relevant information systems. For example, if a fleet uses multiple telematic-service providers to address different aspects of the fleet, then the overall visibility will be compromised without an integrated platform. The federal fleet managers on our two panels agreed that the use of telematics has the potential to reduce costs in the federally leased fleet. While GSA currently provides leasing customers with various types of information, such as information on fuel use and potential fraud, based on data collected through its fleet payment card, fleet managers told us that telematics can provide information that is more detailed. In addition, telematics may also be able to reduce administrative costs, such as the cost of personnel to perform manual vehicle data collection. The majority of panelists' fleets had at least some experience with telematics, and a few recently initiated or completed studies on or estimates of the outcomes of telematics use. For example, according to federal fleet managers with whom we spoke: The Air Force has installed a telematic device, designed to reduce unaccounted-for fuel loss, on approximately 30,000 vehicles at 171 installations. While the Air Force predicts full system activation is one year out, an initial cost savings analysis will be conducted using three test sites in the summer of 2014.
In addition to improving fuel accountability, telematics may also reduce the manpower required to conduct periodic vehicle and equipment inventories. The Department of Veterans Affairs regularly uses telematics in some vehicles and has realized some cost savings but found the return on investment to be better on some types of vehicles than others. The agency plans to equip most vehicles with telematics by the end of 2016. The Department of Energy has used telematics in some of its vehicles for approximately 5 years, and this use has led to savings in all of the cost categories previously discussed. For example, a fleet manager at Idaho National Laboratory reported that telematics data have helped inform decisions to eliminate 65 vehicles since fiscal year 2011, with an estimated average annual savings of approximately $390,000 (including the cost of telematics on the remaining vehicles). Some Marine Corps bases and recruiting districts regularly use telematics. In a separate interview from the panel discussions, a Marine Corps fleet manager stated that he believed telematics' use at seven installations in the southwestern United States improved safety and helped defend Marines against fraudulent or erroneous accident claims. He stated that he believes telematics has been the single most effective tool for reducing vehicle-operating, maintenance, and abuse costs, but that no formal analysis has been conducted on the cost savings. Federal fleet managers also agreed that the previously discussed factors (telematics' costs, fleet characteristics, management and organizational support, and information technology systems) influence telematics' cost-saving potential for the federal fleet. They noted that cost, in particular, impedes further federal adoption of telematic devices. For some fleet managers the initial cost was the greatest financial concern, and for others, it was the rate of ongoing, monthly charges.
A few fleet managers observed that because fuel and some other costs are included in GSA's vehicle-leasing rate, it is more challenging for an agency to recoup the costs of using telematics in leased vehicles under the current rate structure. As discussed previously, GSA's monthly mileage rate covers fuel costs as well as other variable costs of an agency's leased fleet, based on the average cost of fuel and maintenance in each vehicle category. Therefore, agencies' individual fees would not necessarily reflect all of the cost savings they achieve from telematics. Fleet managers from two agencies also noted that this reduces agencies' incentives to use telematics. In addition, federal fleet managers on our panels told us that lack of upper management support sometimes poses challenges for federal agencies in adopting telematics. They cited several reasons for this lack of support: the potential savings from implementing telematics can be minor in comparison with agency budgets and are not seen as a priority for agency leadership; funds are limited, and investments in other areas may be viewed as providing a better return; and upper management officials are wary about investing in telematics if benefits may be challenging to quantify in financial terms or if there is no known precedent in other agencies. Given the potential of telematics to facilitate cost-saving decisions and the concerns about cost and management support, the panels of federal fleet managers proposed some changes, discussed below, that GSA could make to enhance agencies' abilities to use telematics in their federally leased vehicles. Lower the cost of telematic devices: Both panels of federal fleet managers proposed that GSA lower the cost of telematic devices to improve the likelihood of achieving a cost-effective solution and to help allay management concerns about cost and return on investment.
GSA Fleet offers agencies the option of having selected vendors install telematics devices in their leased vehicles. GSA negotiates discounts for these devices through an agreement it has established with vendors. However, federal fleet managers told us that the prices are still a barrier to increasing telematics' use in federal fleets. The pricing of telematics devices and service plans varies depending on many factors, including the desired capabilities, the quantity ordered, and the length of time the technology is required. For example, according to a GSA informational brochure, monthly costs for basic GPS tracking typically range from $22.50 to $32.50 per vehicle, or $810 to $1,170 for a 3-year contract, though agencies can negotiate to obtain lower prices. Because agencies' leased fleets can include thousands of vehicles, the total costs of using telematics can be significant. One of GSA's priorities, as stated in its 2012 Annual Performance Report and reiterated in a 2013 memo from the Administrator to all GSA staff, is to use the purchasing power of the federal government to drive down prices, deliver better value, and reduce costs to customer agencies. Agency officials explained that GSA has not pursued additional discounts for telematics because federal agencies have only recently begun to pursue the technology in significant quantity. GSA is currently engaged in efforts to secure new contracts for telematics devices for customers and hopes to have these available by the end of fiscal year 2014. As part of this effort, GSA is seeking to provide these devices at a lower cost to customers. Provide information on federal agencies' experiences with telematics: Both panels noted that it would be helpful if GSA were to collect information on federal agencies' experiences in using telematics in their fleets and share this information.
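The contract totals cited from GSA's brochure follow directly from the monthly rates, and scaling them up shows why cost is a barrier for large fleets. The 5,000-vehicle fleet size below is an illustrative assumption, not a figure from the report:

```python
# Arithmetic behind the telematics pricing cited from GSA's brochure:
# $22.50 to $32.50 per vehicle per month for basic GPS tracking.
MONTHS = 36  # 3-year contract

low_monthly, high_monthly = 22.50, 32.50

low_contract = low_monthly * MONTHS    # 810.0 dollars per vehicle
high_contract = high_monthly * MONTHS  # 1170.0 dollars per vehicle

# Illustrative fleet-level total; 5,000 vehicles is an assumed fleet
# size chosen only to show the scale of the cost.
fleet_size = 5_000
print(low_contract, high_contract)
print(low_contract * fleet_size, high_contract * fleet_size)
```

At these assumed quantities, a 3-year commitment would run roughly $4.05 million to $5.85 million for the fleet, before any negotiated discounts.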
Based on information provided by GSA officials, 12 of the 15 executive branch departments as well as some independent agencies, such as the Environmental Protection Agency and the National Aeronautics and Space Administration, acquired telematics for some portion of their fleets (owned or leased) between 2008 and 2012. GSA’s Office of Government-wide Policy published an article relating the Marine Corps’ experiences with telematics in its fleet in 2006. However, GSA has not compiled information on agencies’ recent telematic efforts and therefore is unable to provide such information to agencies. Although GSA has not provided such information, it does regularly communicate with agencies regarding fleet management topics through various means, including providing information on its website and presenting webinars, among other approaches. Federal fleet managers agreed that knowing more about the experiences of other fleets would help them better understand how telematics might be applied in their own fleets and could be used to bolster support from upper management, program managers, and drivers. Such information could include descriptions of federal agencies’ efforts as well as studies or other information on results of these efforts, including estimates of cost savings achieved or nonfinancial benefits, such as enhanced program performance or improved safety and liability protection for employees. It could also include lessons learned from the experiences. One fleet manager on our military panel noted that the military services’ fleet managers already share their experiences with each other, and that such information sharing, facilitated by GSA, would also be beneficial for civilian agencies. Another panelist suggested that the studies performed by other federal fleets would be more credible than the studies provided by telematics vendors. 
GSA officials explained that while they do not currently collect this information, GSA's Office of Government-wide Policy would be able to request information from agencies and share information that agencies voluntarily provided. GAO has found that a key factor in helping agencies better achieve their missions and program results is the use of appropriate internal controls, which include relevant, reliable, and timely communications. In addition to internal communications, this includes communicating with, and obtaining information from, external stakeholders that may have an impact on the agency achieving its objectives. GSA is pursuing several strategic objectives that would be better supported by obtaining and sharing additional information about telematics with other federal agencies. One such objective is for GSA to enhance relationships with its customers, in part by improving customer knowledge and sharing information that drives improved decision-making and performance in the fleet policy area. Another such objective is to help provide savings to federal agencies, including by providing them with information that can be useful in reducing fuel use. GSA has noted in strategic planning documents and during interviews that it strives to ensure that customers receive assistance that meets client needs and strives for a culture of continuous improvement. Without information, facilitated by GSA, about other agencies' experiences with telematics, agencies may expend additional time and resources to find such information and identify devices that would best meet their needs and may encounter problems that could have been avoided. In addition, they may not be able to gather the internal support needed to start or increase the use of telematics in their leased fleets.
Given the amount that federal agencies pay GSA to lease vehicles— over $1.1 billion in fiscal year 2012—and concerns by Congress and the Administration about costs associated with federal agencies’ fleets, it is important for GSA to ensure that it is operating its leasing program in a manner that encourages agencies to minimize costs associated with their leased vehicles. While various aspects of GSA’s leasing rates, structures, and services support agency efforts to keep costs down, its current leasing rate structure does not provide incentives for agencies to take actions to reduce some types of fuel costs associated with poor driving behavior. Without an examination of the trade-offs of changing this rate structure so that agencies pay for the fuel they actually consume, GSA may be missing an opportunity to encourage agencies to minimize fuel costs and save taxpayer dollars or ensure that its leasing rate structure is the most appropriate one. While telematic devices are not cost-effective for every vehicle fleet, under certain circumstances they could produce cost savings in fleets leased from GSA. GSA, through its existing resources and expertise, is well positioned to facilitate agencies’ adoption of telematics by offering these technologies to agencies at a reduced cost and by asking agencies to voluntarily provide information about their experiences with telematics that it can share. GSA is currently seeking to reduce prices for telematics. By providing information on its website or through other methods on federal agencies’ experiences in using telematics in their fleets, such as information on agencies’ telematic efforts or studies or estimates of pilot or program results, GSA could help agencies better identify the circumstances under which devices or approaches might or might not achieve cost savings. Such information could also help agencies obtain support from upper management for telematics adoption or improve their existing telematics programs. 
To help reduce costs associated with vehicles leased from GSA, we recommend that the Administrator of GSA take the following two actions: 1. examine and document the trade-offs of changing GSA's vehicle leasing rate structure so that each agency pays for the fuel that it actually uses, and 2. request information from agencies on their experiences with telematics in their fleets, such as studies or estimates of cost savings achieved, and share this information with agencies through GSA's website or other methods. We provided a draft of this report to GSA for review and comment. GSA agreed with our findings and recommendations and said that it will take appropriate action to implement them. GSA reiterated its view that its centralized fleet management operations provide standardization, economies of scale, and the tools necessary for the effective and efficient management of the federal fleet. GSA's comments are reprinted in appendix V. GSA also provided technical comments for our consideration. We incorporated these as appropriate. We are sending copies of this report to interested congressional committees and the Administrator of GSA. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The General Services Administration (GSA) manages the vehicle leasing program (called GSA Fleet) and offers federal agencies a variety of vehicles for lease, including sedans, light and heavy-duty trucks, and specialty vehicles such as ambulances.
In addition, GSA provides various services to its leasing customers, including:
- support of fleet service representatives, located in regional offices, throughout the vehicle leasing process, including the selection of vehicles, maintenance, and disposal;
- provision and management of fleet cards to purchase fuel and maintenance and repair services; tracking of fuel, maintenance, and repair expenses; and identification of fraud, waste, and abuse;
- access to GSA’s Fleet Drive-thru system, which contains automated information on agencies’ fleets, including mileage, inventory, fuel consumption, and agency-incurred expenses (such as bills for damage to leased vehicles);
- analyses of fleet data that, among other things, may identify underutilized vehicles to be eliminated or shared through examination of mileage and usage data;
- management of accident-related needs and maintenance, including authorizing and tracking repairs, working with third parties and insurance company officials to collect payments (for accidents in which a third party is at fault), and an automated vehicle recall program with major manufacturers; and
- access to GSA Fleet Solutions, which provides additional services such as a short-term rental program and telematics.
GSA Fleet provides support to its leasing customers through the following offices:
- Leasing Operations Division: responsible for monitoring vehicle expenses—including expenses associated with GSA’s accident and maintenance management services—and reviewing regional operations to identify opportunities to reduce costs and increase efficiency.
- Leasing Acquisition & Vehicle Remarketing Division: responsible for coordinating the leasing arrangement and delivery and subsequent resale of leased vehicles.
- Regional Offices: provide day-to-day support to local customers.
- Motor Vehicle Management Team: provides support to GSA’s motor vehicle department as a whole, including both GSA Fleet and GSA Automotive.
For more information on GSA Automotive, see appendix II.
- Systems Support Division: provides general information-systems support for the Office of Motor Vehicle Management, including information systems used to process and report fleet management data for the leased fleet.
- Business Management Division: provides general analytical support for the Office of Motor Vehicle Management.
Employees in these divisions—27 full-time equivalents (FTE)—are assigned to GSA’s central office located in Washington, D.C., while 82 call center employees reporting to the central office are responsible for handling accident, maintenance, and repair needs for GSA’s leased fleet from offices in four GSA regions. There are 384 regional FTEs, of which 334 are GSA’s fleet service representatives who serve as the primary point of contact for agency customers in GSA’s 11 regional offices. Additionally, 37 FTEs provide support to both GSA Fleet and GSA Automotive as part of the Motor Vehicle Management Team. See figure 2 for information on the location and staffing levels of GSA’s central and regional offices. GSA works with third-party vehicle dealers, maintenance facilities, and auction houses to deliver, store, maintain, and sell leased vehicles. As such, GSA does not maintain any parking or maintenance facilities for its leased fleet. GSA is required by law to recover all costs incurred in providing vehicles and services to federal customers. Since leasing activities operate through the use of a revolving fund that is reconciled each year, GSA Fleet does not receive appropriations through the annual budget cycle. GSA purchases vehicles through the GSA Automotive program (discussed in app. II); these vehicles are ultimately used for GSA’s centralized leasing program. The funding used to purchase these vehicles comes from GSA’s revolving fund.
GSA leases to federal customers and recovers these costs, as well as vehicle maintenance and administrative costs, through lease fees and the resale of vehicles at the end of their life cycle. See table 2 for more detailed information on the revenues and expenses of GSA Fleet’s leasing program. From fiscal years 2008 through 2012, the difference between the program’s revenues and expenses was highest at about $70 million in fiscal year 2009 and lowest at about negative $47 million (a net loss) in fiscal year 2008. In fiscal year 2012, the highest expenses were those associated with vehicle depreciation, which accounted for about 42 percent of total expenses. The next largest expense in fiscal year 2012 was fuel for leased vehicles, which accounted for about 38 percent of total expenses. Overhead expenses accounted for about 4.9 percent of total expenses. GSA Automotive manages the vehicle purchasing program and offers an array of non-tactical vehicle products at a savings from the manufacturer’s invoice price, including alternative fuel vehicles, sedans, light trucks, ambulances, buses, and heavy trucks. In fiscal year 2012, federal agencies, excluding the U.S. Postal Service, owned about 245,000 vehicles, including 118,000 passenger vehicles and 122,000 trucks. The number of owned vehicles increased from about 224,000 in fiscal year 2008 to about 245,000 in fiscal year 2012.
GSA provides various services to customers who purchase vehicles, including:
- access to GSA’s online ordering tool, AutoChoice, which provides information and pricing on available vehicles;
- access to GSA’s Automotive Express Desk, which handles vehicle requirements on an “unusual and compelling urgency” basis;
- engineering and technical assistance for ordering non-standard customized vehicles, including design, construction, and project management through delivery of the custom vehicle; and
- use of the Federal Fleet Management System, a web-based fleet management information system that identifies, collects, and analyzes vehicle data (including data on costs incurred for the operation, maintenance, acquisition, and disposal of agency-owned vehicles), offered at no additional cost.
GSA Automotive provides support to its purchasing customers through the following offices:
- Vehicle Purchasing Division: provides professional engineering, contracting, technical, and vehicle design services.
- Motor Vehicle Management Team: provides support to GSA’s motor vehicle department as a whole, including both GSA Fleet and GSA Automotive. For more information on the Motor Vehicle Management Team, see appendix I.
Employees in these divisions work out of GSA’s central office located in Washington, D.C., and GSA Automotive currently employs approximately 18 FTEs. Purchased vehicles are delivered directly to a marshalling location by the manufacturer, where the customer picks up the vehicle. As such, GSA does not maintain parking or other facilities for vehicle storage at any point in the process. GSA is required by law to recover all costs incurred in providing vehicles and services to federal customers. Since GSA procurement activities operate through the use of a revolving fund that is reconciled each year, GSA Automotive does not receive appropriations through the annual budget cycle.
GSA Automotive awards contracts for vehicles, provides information to agencies on pricing for evaluation, and places orders against the awarded contracts using its revolving fund. Using the previous year’s total purchases as a baseline, GSA contracts with auto manufacturers and other suppliers to procure vehicles for federal customers through “indefinite quantity, indefinite delivery” contracts. The costs associated with this acquisition process are recovered through a surcharge added to the vehicle price (which averaged about 1 percent of the price in fiscal year 2012). See table 3 for more detailed information on GSA’s purchasing program revenue and expenses. From fiscal years 2008 through 2012, the difference between the program’s revenue and expenses was highest at about $10.5 million in fiscal year 2009 and lowest at about $4.1 million in fiscal year 2011. The highest expenses were those associated with the cost of vehicles sold to federal agencies and GSA’s leasing program, which accounted for about 99 percent of total expenses in fiscal year 2012. Overhead expenses accounted for about 0.4 percent of total expenses in fiscal year 2012. The objectives of this report were to determine (1) whether and how GSA’s leasing rates, terms, and services support or encourage agency efforts to reduce fleet costs and (2) the views of selected experts regarding the cost savings potential of telematics for fleets and the possible implications for GSA’s leasing program. In addition, information on the services, structure, and costs associated with GSA’s vehicle leasing and purchasing programs is provided in appendixes I and II, respectively. To determine whether and how GSA’s vehicle-leasing rates, terms, and services support or encourage agency efforts to reduce fleet costs, we reviewed applicable federal laws; federal management regulations; GSA’s fleet guidance, policy, and strategic goals; and other pertinent GSA documentation; and interviewed GSA officials.
We also convened two panels of federal fleet managers who managed federal fleets with over 7,000 vehicles leased from GSA in fiscal year 2011, the most recent data available at the time of our fleet manager selection, to obtain their views on this question. One panel consisted of one or more managers from five civilian agencies (the Departments of Agriculture, Energy, Homeland Security, the Interior, and Veterans Affairs) and the other consisted of one or more managers from five military agencies (the U.S. Marine Corps, the Army Corps of Engineers, and the Departments of the Air Force, Army, and Navy). While their views should not be used to generalize about the views of all federal fleet managers, they do provide the perspective of managers of most of the federal leased fleet, as they manage over 80 percent of the vehicles leased from GSA in fiscal year 2011. To answer this research objective, we asked each panel an identical set of questions about the ways in which GSA’s rates, terms, and services encourage or support the reduction of fleet costs. We analyzed panel responses to our questions, and in reporting the responses, we focused on those related to reducing fleet costs to the government as a whole rather than to a specific agency. Views attributed to the panels reflect key messages or themes derived from these discussions, but we did not attempt to quantify exactly how many federal fleet managers agreed with each statement or issue under discussion because this was not the goal of the panel and we did not poll or survey individuals. We then followed up with GSA to get its views on perspectives and suggestions provided by agency fleet managers. In assessing GSA’s efforts to reduce agency fleet costs through its rates, terms, and services, we reviewed GSA’s fleet policy, guidance, and strategic goals to determine the extent to which agency suggestions for improvement might be part of the current GSA vehicle-leasing framework.
Additionally, we conducted interviews with officials from GSA’s vehicle-leasing program and Chief Financial Office and requested documentation of their expenditures, program policies, leasing rate structure, and agency-incurred leasing expenses, such as charges to agencies for damaged leased vehicles, in order to better understand how GSA’s vehicle-leasing program operates. We also analyzed GSA’s leasing rate structure in relation to the principle of economic efficiency, which has often been used to assess the design of government fees. In a May 2008 report, we noted that efficiency exists when the fee ensures that the government is providing the amount of the service that is economically desirable and that efficient fees increase awareness of the costs of government services, creating incentives to reduce costs where appropriate. To obtain information regarding telematics’ cost savings potential, we spoke with 19 experts—including consultants, representatives of associations, and fleet managers from corporations, government entities, and universities. Experts were selected based on their knowledge about fleet management or telematics. First, we reviewed the publications and conference history of associations and consultants that had recently participated in GAO work on fleet management or transportation technology to determine if they possessed expert knowledge of telematics. These entities included Accenture, Mercury Associates, the Intelligent Transportation Society of America, the National Association of Fleet Administrators, and the Automotive Fleet and Leasing Association. We determined that all five possessed expertise in this area. We then solicited nominations from these entities for individuals with expertise in fleet management and knowledge of telematics, and compared these nominations against publications and relevant literature. 
We did not select experts in cases where we, in consultation with our methodologist, believed the nomination may have been biased by a conflict of interest, such as a contract between the nominating party and the nominee that was likely to involve telematics. This process eliminated one nomination. We also reconsidered nominations when a company or organization, rather than a specific individual expert, was identified. We eliminated three nominations where we were uncertain that an individual expert could be reliably identified. After these eliminations, a total of 19 experts remained, whose views we obtained during group and individual interviews. Because of the interactive nature of the group interviews, we collected common themes rather than tabulating individual responses. The views represented are not generalizable to those of all experts on fleet management or telematics; however, we were able to secure the participation of a diverse, highly qualified group of experts and believe their views provide a balanced and informed perspective on the topics discussed. We reviewed literature on the cost savings associated with telematics. We searched journals, research papers, and fleet management publications from 2007 through 2013. Of the 19 experts we consulted, 15 were current fleet managers. While not all of the fleet managers used telematics, they had knowledge of the topic. We sent a questionnaire to these 15 fleet managers regarding the specific savings, if any, they had achieved through telematics, as well as information such as the size of their fleets and the percentage of their fleet that uses telematics. We received 10 responses. Two corporate fleet managers indicated that this information was proprietary; however, this information was not material to our findings, conclusions, or recommendations.
To understand the possible implications for GSA’s leasing program of the cost-savings potential of telematics, we also obtained the views of the federal fleet managers who participated in the two panels previously described. We inquired about their views on the cost-savings potential of telematics’ use by federal agencies and GSA’s efforts related to encouraging telematics use in leased vehicles. We also interviewed GSA officials, requested and reviewed the information that GSA provides to federal customers on telematics, reviewed GSA’s publicly available information on telematics’ offerings, examined GSA’s policies and guidance regarding telematics, and assessed GSA telematics efforts in relation to GAO’s internal control standards, which include relevant, reliable, and timely communications, and GSA’s 2014-2018 Strategic Plan. To identify the services provided by, and the structure and expenses associated with, GSA’s vehicle leasing and purchasing programs, we interviewed GSA officials and reviewed data and documents regarding GSA services, expenses, and revenue. Because this information was not material to the findings of this report, we have not assessed the reliability of GSA’s cost and revenue data. We conducted this performance audit from July 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix IV: Fleet Characteristics That Experts Reported Could Influence Telematics’ Cost-Saving Potential
The most problematic fleets will have the greatest potential for cost savings, because they can substantially improve.
For example, fleets that frequently idle may have more opportunities to save on fuel costs.
- Some telematic solutions involve fixed costs, such as the cost of a data download station. Such technology may not be cost effective for some smaller fleets.
- If a vehicle has low usage, ongoing telematics use may not produce a good return on investment. However, telematics may serve to determine whether the vehicle can be eliminated, which may produce cost savings.
- Telematics may have a higher return on investment in certain vehicle types, such as vehicles with poor fuel efficiency or specialized vehicles with higher operational costs.
- Vehicles that are turned over quickly may not be able to recover upfront capital expenses or ongoing costs during the time the vehicle is in service.
- Some telematics require cellular service or a satellite connection to acquire and transmit data. Rural areas may not have such services, and urban areas can sometimes suffer from “urban canyons,” in which tall buildings impair signals. Lack of reliable data can affect the soundness of cost-saving decisions.
- Some technology, such as a data download station, may not be viable for fleets without a central location to which all vehicles covered by the telematic program report. In such cases, the available technology choices will be more limited, a factor that may affect cost.
In addition to the contact above, Judy Guilliams-Tapia (Assistant Director), Russell Burnett, Colin Fallon, Katherine Hamer, Kieran McCarthy, Josh Ormond, Alison Snyder, Jack Wang, and Crystal Wesco made key contributions to this report.
Agencies (excluding the U.S. Postal Service) spent about $1.1 billion in fiscal year 2012 to lease about 190,000 vehicles from GSA. Recent legislative proposals have called for reductions in the cost and size of federal agencies' fleets. Agencies may choose to have telematic devices installed in leased vehicles; the data these devices provide can be used to manage fleets.
GAO was asked to review GSA's vehicle-leasing program. This report addresses (1) whether and how GSA's leasing rates, terms, and services support agency efforts to reduce fleet costs and (2) the views of selected experts regarding the cost-savings potential of telematics for fleets and the possible implications for GSA's leasing program. GAO reviewed program policies; interviewed GSA officials; held two panel discussions with fleet managers from 10 agencies representing 80 percent of the leased fleet in fiscal year 2011; and interviewed 19 experts with knowledge about telematics or fleet management, as demonstrated by recommendations from fleet management associations, among other considerations. Responses from the panelists and experts are not generalizable. Some aspects of the General Services Administration's (GSA) leasing rates, terms, and services support agency efforts to reduce fleet costs, while others do not. For example, GSA procures the vehicles it leases at a discount and passes those savings on to its customers, provides agencies with data analyses that can be used to eliminate unnecessary vehicles, and identifies fraud, waste, and abuse related to leased vehicles. However, GAO identified two areas where GSA's rates and terms have not encouraged agency efforts to reduce fleet costs. First, under GSA's leasing-rate structure, fuel costs are covered by a monthly fee based on miles traveled, among other things, but not on actual fuel used. This rate structure does not provide incentives for agencies to reduce some fuel costs that may not be fully reflected by miles travelled, such as costs associated with idling or speeding. Principles for designing government fees suggest that having each agency pay for the fuel it actually uses could increase incentives to reduce fuel costs. GAO has previously found that government fee decisions also involve considering trade-offs and that other considerations, such as administrative burden, are important.
Without examining the trade-offs of changing GSA's rate structure so that agencies pay for the fuel they actually consume, GSA may be missing an opportunity to encourage agencies to minimize fuel costs and save taxpayer dollars. Second, lack of clear GSA guidance on what constitutes excessive wear and tear of leased vehicles can limit the ability of agencies to determine whether it is less expensive to lease or own vehicles. GSA recently developed this guidance and is taking steps to implement it. The experts and federal fleet managers GAO consulted agreed that the use of telematics can facilitate cost savings for some fleets by providing fleet managers with information—such as data on vehicle location, speed, or condition—that they can use to reduce fleet size, fuel use, misuse of vehicles, and unnecessary maintenance. For example, a fleet manager at the Department of Energy's Idaho National Laboratory reported that since fiscal year 2011, telematics data have helped officials at that facility decide to eliminate 65 leased vehicles for an estimated annual savings of approximately $390,000. However, various factors—such as telematics' cost, characteristics of the fleet, and the level of management support—influence the potential of telematics to facilitate cost savings for a given fleet. The federal fleet managers on GAO's panels suggested that GSA lower the costs of telematic devices to improve the likelihood of achieving cost savings and to help allay management's concerns about return on investment. They also suggested that GSA provide information on agencies' experiences with telematics, such as studies or estimates of cost savings, to further support telematics' adoption in the federal fleet.
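The return-on-investment reasoning behind these judgments can be reduced to a minimal break-even sketch. The device and service prices below are hypothetical assumptions; the per-vehicle savings figure (about $6,000 per year) is derived from the Idaho National Laboratory example ($390,000 across 65 vehicles) and is used purely as an illustrative input, not as a general expectation.

```python
# Hypothetical break-even sketch for telematics adoption in a fleet.
# DEVICE_COST and MONTHLY_SERVICE_FEE are invented for illustration.

DEVICE_COST = 150.0         # one-time hardware cost per vehicle (hypothetical)
MONTHLY_SERVICE_FEE = 25.0  # data/service fee per vehicle (hypothetical)

def annual_net_savings(annual_savings_per_vehicle, years_in_service):
    """Average yearly net benefit per vehicle over its service life."""
    annual_cost = MONTHLY_SERVICE_FEE * 12 + DEVICE_COST / years_in_service
    return annual_savings_per_vehicle - annual_cost

# A vehicle kept 3 years with Idaho-like savings comes out well ahead...
high = annual_net_savings(390000 / 65, years_in_service=3)   # 5650.0
# ...while a vehicle yielding only $300/year in savings does not cover its
# costs, consistent with experts' point that low-usage vehicles or
# short-turnover fleets may not recover upfront and ongoing expenses.
low = annual_net_savings(300.0, years_in_service=3)          # -50.0
```

The sketch also shows why lower device prices matter to fleet managers: reducing the fixed and recurring cost terms moves the break-even point down, making telematics viable for more of the fleet.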
GSA officials noted that they are currently engaged in efforts to obtain lower prices on telematic devices, and while officials do not currently collect information on agencies' experiences with telematics, they would be able to request it and share any information agencies voluntarily provide. One of GSA's strategic objectives is to enhance relationships with its customers, in part by sharing information that drives improved decision-making. By not collecting and sharing information on federal agencies' experiences with telematics, GSA may be missing an opportunity to help agencies determine whether to adopt telematics in their fleets and identify which devices or approaches have the greatest potential to facilitate cost savings. GAO recommends that GSA (1) examine the trade-offs of changing GSA's lease-rate structure so that agencies pay for their actual fuel use and (2) request information on agencies' experiences with telematics in their fleets and share this information with agencies. GSA agreed with GAO's findings and recommendations. |
USPS’s financial condition and outlook continue to deteriorate with a worsening outlook for mail volume and revenue. USPS currently projects a mail volume decline of 13.7 percent for fiscal year 2009, triple the 4.5 percent decline in fiscal year 2008 and the largest percentage decline since the Great Depression. As a result, USPS is projecting the following for fiscal year 2009: a net loss of about $7 billion, even if it achieves record cost savings of about $6 billion; an increase in outstanding debt by the annual statutory limit of $3 billion; and, despite this borrowing, an unprecedented $1 billion cash shortfall. USPS has reported that it does not expect to generate sufficient cash from operations to fully make its mandated payment of $5.4 billion for future retiree health benefits that is due by September 30, 2009. Further, USPS recently reported to Congress that—due to the need to maintain sufficient cash to cover costs—it will not fully make this payment, even if it receives $2 billion in relief from fiscal year 2009 retiree health benefits payments that would be provided by H.R. 22, which has been reported out of the House Committee on Oversight and Government Reform. USPS also expects continued financial problems in fiscal year 2010, with a similar deficit even if it achieves larger cost savings, and an even larger cash shortfall. Under this scenario, USPS would increase its outstanding debt by an additional $3 billion, which would bring its total debt to $13.2 billion at the end of fiscal year 2010—only $1.8 billion less than its $15 billion statutory limit. USPS’s projected cost cutting of about $6 billion for this fiscal year is much larger than its previous annual cost-cutting targets that have ranged from nearly $900 million to $2 billion since 2001. 
However, USPS projects cash shortfalls because cost cutting and rate increases will not fully offset the impact of mail volume declines and other factors that increase costs—notably semiannual cost-of-living allowances (COLA) for employees covered by union contracts. Compensation and benefits constitute close to 80 percent of its costs—a percentage that has remained similar over the years despite major advances in technology and the automation of postal operations. Also, USPS continues to pay a higher share of employee health benefit premiums than other federal agencies. Further, it has high overhead (institutional) costs that are hard to change in the short term, such as providing universal service that includes 6-day delivery and maintaining a network of 37,000 post offices and retail facilities, as well as a delivery network of more than 149 million addresses. Two days ago, we added USPS’s financial condition to the list of high-risk areas needing attention by Congress and the executive branch to achieve broad-based transformation. We reported that USPS urgently needs to restructure to address its current and long-term financial viability. USPS’s cost structure has not been cut fast enough to offset the accelerated decline in mail volume and revenue. In this regard, USPS has high personnel costs, including those to provide 6-day delivery and retail services. To achieve financial viability, USPS must align its costs with revenues, generate sufficient earnings to finance capital investment, and manage its debt. We noted that mail use has been changing over the past decade as businesses and consumers have moved to electronic communication and payment alternatives. Further innovations in, and use of, e-commerce and broadband are expected. The percentage of households paying bills by mail is declining while the percentage of electronic payments is increasing (see fig. 1).
Mail volume peaked in 2006, and its decline has accelerated with the economic recession, particularly among major mail users in the advertising, financial, and housing sectors. Mail volume has typically returned after recessions, but USPS’s 5-year forecast suggests that much of the lost volume will not return. For these reasons, we concluded that action is needed in multiple areas, including possible action and support by Congress, as no single change will be sufficient to address USPS’s challenges. The short-term challenge for USPS is to cut costs quickly enough to offset the unprecedented volume and revenue declines, so that it can cover its operating expenses. The long-term challenge is to restructure USPS operations, networks, and workforce to reflect changes in mail volume, use of the mail, and revenue. Accordingly, we have called for USPS to develop and implement a broad restructuring plan—with input from the Postal Regulatory Commission (PRC) and other stakeholders, and approval by Congress and the administration—that includes key milestones and time frames for actions, addresses key issues, and identifies what steps Congress and other stakeholders may need to take. We stated that USPS’s restructuring plan should address how it plans to realign postal services, such as delivery frequency, delivery standards, and access to retail services, with changes in the use of mail by consumers and businesses; better align costs and revenues, including compensation and benefit costs; optimize its operations, networks, and workforce; increase mail volumes and revenues; and retain earnings, so that it can finance needed capital investments and repay its growing debt. USPS needs to optimize its retail, mail processing, and delivery networks to eliminate growing excess capacity and maintenance backlogs, reduce costs, and improve efficiency. We recently reported that USPS needs to rightsize its retail and mail processing networks and reduce the size of its workforce. 
USPS has a window of opportunity to further reduce the cost and size of its workforce through attrition and the large number of upcoming retirements to minimize the need for layoffs. As the Postmaster General testified this March, about 160,000 USPS employees are eligible for regular retirement this fiscal year, and this number will grow within the next 4 years to nearly 300,000. USPS has begun efforts to realign and consolidate some of its mail processing, retail, and delivery operations, but much more restructuring is urgently needed. We recognize that USPS would face formidable resistance to restructuring with many facility closures and consolidations because of concerns that these actions would impact service, employees, and local communities. USPS senior management will need to provide leadership and work with stakeholders to overcome resistance for its actions to be successfully implemented. USPS must use an open and transparent process that is fairly and consistently applied; engage with its unions, management associations, the mailing industry, and political leaders; and demonstrate results of actions. In turn, these stakeholders and Congress need to recognize that major changes are urgently needed for USPS to be financially viable. To its credit, USPS recently began a national initiative to consolidate some of its 3,200 postal retail stations and branches in urban and suburban areas. It has nearly completed an initial review to identify which facilities will be studied for consolidation, and expects the studies to take about 4 months, with final decisions made starting this October. USPS has processes for notifying its unions and management associations, soliciting community input, and notifying affected employees as it winnows the list of stations and branches it is considering for consolidation (see fig. 2). 
On July 2, 2009, USPS requested that PRC provide an advisory opinion on USPS’s retail consolidation initiative, which has led to a public process that will provide stakeholders with opportunities for input. In its request, USPS stated it would identify opportunities to consolidate retail operations and improve efficiency, but only after concluding that such changes will continue to provide ready access to essential postal services. USPS noted that the branches and stations considered for consolidation are often in close proximity to each other. USPS stated that it could not estimate the savings because it had not made decisions on how many or which facilities would be closed. Going forward, issues may include whether stations and branches will be considered subject to statutory requirements for maintaining and closing post offices, and the similar question of whether any branches and stations are covered by the long-standing appropriations provision that restricts post office closures. USPS is required, among other things, to provide adequate, prompt, reliable, and efficient services to all communities, including a maximum degree of effective and regular services in rural areas, communities, and small towns where post offices are not self-sustaining. USPS is specifically prohibited from closing small post offices solely for operating at a deficit. Consistent with reasonable economies, USPS is authorized to establish and maintain facilities as are necessary to provide ready access to essential services to customers throughout the nation. Before closing a post office, USPS must, among other things, provide customers with at least 60 days of notice before the proposed closure date, and any person served by the post office may appeal its closure to the PRC. However, USPS plans state that customers will have 20 days to comment on a proposed closure of a station or branch and that no appeals will be permitted.
USPS explained that stations and branches are different from post offices. A recent Congressional Research Service report discussed this matter and other issues related to the closure of these retail facilities. To put USPS’s retail consolidation initiative into context, we recently testified before this subcommittee that USPS can streamline its network of 37,000 post offices, branches, and stations—a network that has remained largely static despite expanding use of retail alternatives and shifts in population. We have previously reported that the number of postal retail facilities has varied widely among comparable counties in urban areas, and a number of facilities we visited appeared to merit consideration for closure based on leading federal practices for rightsizing facility networks. Our report also noted that USPS has a maintenance backlog for its retail facilities, and USPS officials stated that USPS has historically underfunded its maintenance needs. USPS has limited its capital expenditures to help conserve cash, which may affect its maintenance backlog. Fewer retail facilities would reduce maintenance needs. USPS has begun efforts to consolidate some mail processing operations, but much more needs to be done to restructure this network, particularly since USPS has closed only 1 of its approximately 400 major mail processing facilities. In the Postal Accountability and Enhancement Act of 2006, Congress encouraged USPS to expeditiously move forward in its streamlining efforts, recognizing that the 400 processing facilities are more than USPS needs and streamlining this network can help eliminate excess costs. USPS has substantial excess capacity in its processing network that is growing with declining mail volume. According to USPS, it has 50 percent excess capacity for processing First-Class Mail. USPS is using the Area Mail Processing process to propose consolidating some mail processing operations (see app. I and http://www.usps.com/all/amp.htm). 
USPS is also consolidating processing and transportation operations from Bulk Mail Centers and Surface Transfer Centers into what it refers to as Network Distribution Centers, which USPS officials expect to be completed this November (see http://www.usps.com/all/ndc.htm). In the past decade, USPS has closed some smaller facilities, such as 68 Airport Mail Centers and 50 Remote Encoding Centers. In 2005, we recommended that USPS enhance transparency and strengthen accountability of its realignment efforts to assure stakeholders that such efforts would be implemented fairly and achieve the desired results. We have since testified that USPS took steps to address these recommendations and should be positioned for action. USPS has ongoing efforts to increase the efficiency of mail delivery, which is USPS's largest cost segment and includes more than 350,000 carriers that account for approximately 45 percent of salary and benefit expenses. Two key efforts are (1) realigning city delivery routes and (2) installing new Flats Sequencing Systems to automate the sorting of flat-sized mail—such as catalogs and magazines—into delivery order, so that time-consuming and costly manual sorting by carriers is no longer needed. First, USPS is realigning city carrier routes to remove excess capacity and improve efficiency, which is expected to generate nearly $1 billion in annual savings. USPS also expects this effort to result in reduced facility space needs, increased employee satisfaction, and more consistent delivery service. Route realignment has been made possible by collaboration between USPS and the National Association of Letter Carriers. The parties agreed on the original realignment process, which resulted in eliminating 2,500 routes. A modified process, which will cover all city delivery routes, has resulted in the elimination of an additional 1,800 routes through June 2009 (see fig. 3), and additional routes may be eliminated. 
Thus, route realignment should result in further savings next fiscal year. USPS has established policies and procedures to notify customers if they will be affected by route realignment and taken actions to keep affected stakeholders informed. For example, USPS has made updated route information available on the Internet, which the mailing industry needs to prepare and organize the mail so USPS can efficiently handle it. Second, USPS has begun to install 100 automated sorting machines for its $1.5 billion Flats Sequencing System to sort flat-sized mail into delivery order, which is scheduled to be completed in October 2010. USPS expects this to improve delivery accuracy, consistency, and timeliness. USPS has worked with the mailing industry to facilitate implementation, since the industry plays a major role in preparing, transporting, and addressing flat-sized mail for efficient USPS handling. Mailer representatives have praised USPS communications and coordination with them—a process that is continuing to address implementation issues. USPS and the two carrier unions (the National Association of Letter Carriers and the National Rural Letter Carriers' Association) reached agreement on revised work rules and procedures to realign routes and capture work hour savings. Because of mail volume declines, to maximize program savings, USPS is reconsidering where to deploy the machines and the number of delivery routes covered by the program. On routes covered by the machines, city carriers, on average, will be manually sorting nearly 500 fewer flat-sized mail pieces each day. Finally, USPS has proposed moving to 5-day delivery to help address its financial problems. USPS is studying how 5-day delivery could be implemented, potential savings, and impacts on its employees. The study, which USPS expects to complete this fall, will incorporate input from postal unions and management associations, the mailing industry, and consumer and market research. 
Cutting delivery frequency would affect universal postal service and could further accelerate the decline in mail volume and revenues. Considering the potential impact on cost, volume, revenues, employees, and customers, it will be important for USPS to make its study publicly available so that Congress and stakeholders can better understand USPS's proposal and consider the trade-offs involved. As USPS has recognized, implementing 5-day delivery would require congressional action because a long-standing appropriations provision mandates 6-day delivery. PRC officials have stated that USPS would be required to seek an advisory opinion from PRC on such a change, which would lead to a public hearing with stakeholder input. According to USPS officials, USPS would need about 6 months to prepare for and implement 5-day delivery, including moving employees to other locations, reprogramming payroll systems, and realigning operations. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have. For further information regarding this statement, please contact Phillip Herr at (202) 512-2834 or [email protected]. Individuals who made key contributions to this statement include Shirley Abel, Teresa Anderson, Gerald P. Barnes, Josh Bartzen, Paul Hobart, Kenneth E. John, David Hooper, Hannah Laufe, Emily Larson, Josh Ormond, Susan Ragland, Amy Rosewarne, Travis Thomson, and Crystal Wesco. Appendix I (excerpt), Area Mail Processing (AMP) studies initiated: 34. Western Nassau, NY, to Mid-Island, NY; 35. Wilkes Barre, PA, to Scranton, PA, and Lehigh Valley, PA. USPS announced on June 6, 2009, that it had halted the Industry, California, study because it determined there were no significant opportunities to improve efficiency or service at that time. USPS announced on May 5, 2009, that it had halted the Plattsburgh, New York, study because of unresolved service issues. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The U.S. Postal Service's (USPS) financial condition has worsened this year, with the recession and changing mail use causing declines in mail volume and revenues despite postal rate increases. GAO testified in May to this subcommittee that USPS expects these declines to lead to a record net loss and an unprecedented cash shortfall even if ambitious cost cutting is achieved. GAO reported that maintaining USPS's financial viability as the provider of affordable, high-quality universal postal service will require actions in a number of areas, such as (1) rightsizing its retail and mail processing networks by consolidating operations and closing unnecessary facilities and (2) reducing the cost and size of its workforce, which generates about 80 percent of its costs. Today GAO is releasing its report on USPS efforts to improve the efficiency of delivery. Delivery accounts for nearly half of USPS salary and benefit costs. This testimony (1) updates USPS's financial condition and outlook and explains GAO's decision to place USPS's financial condition on the High-Risk List and (2) discusses the need for USPS to restructure its mail processing, retail, and delivery networks and its efforts to improve their efficiency. It is based on GAO's past and ongoing work and updated USPS information. USPS's financial condition and outlook continue to deteriorate with a worsening outlook for mail volume and revenue. USPS now projects mail volume to decline by about 28 billion pieces to about 175 billion pieces in fiscal year 2009, a decline of 13.7 percent. 
As a result, USPS projects (1) a net loss of about $7 billion even with record savings of about $6 billion; (2) an increase in outstanding debt by the annual $3 billion limit; and (3) despite this borrowing, an unprecedented $1 billion cash shortfall. Thus, USPS recently reported to Congress that, due to the need to maintain sufficient cash to cover costs, it will not fully make its mandated payment of $5.4 billion for future retiree health benefits due by September 30, 2009, even if it receives $2 billion in relief under pending House legislation. GAO added USPS's financial condition to the High-Risk List this week. GAO reported that USPS urgently needs to restructure to address its current and long-term financial viability. Accordingly, GAO calls for USPS to develop and implement a broad restructuring plan--with input from the Postal Regulatory Commission and other stakeholders, and approval by Congress and the administration--that includes key milestones and time frames for actions, addresses key issues, and identifies what steps Congress and other stakeholders may need to take. USPS needs to optimize its retail, mail processing, and delivery networks to eliminate growing excess capacity and maintenance backlogs, reduce costs, and improve efficiency. USPS has a window of opportunity to reduce the cost and size of its workforce through attrition and the large number of upcoming retirements to minimize the need for layoffs. Although USPS has begun efforts to realign and consolidate some mail processing, retail, and delivery operations, much more is urgently needed. GAO recognizes that USPS would face formidable resistance to restructuring with many facility closures and consolidations because of concerns that these actions would affect service, employees, and communities. USPS management will need to provide leadership and work with stakeholders to overcome resistance for its actions to be successfully implemented. 
USPS must use an open, transparent, fair, and consistent process; engage with its unions, management associations, the mailing industry, and political leaders; and demonstrate results. In turn, these stakeholders and Congress need to recognize that major restructuring is urgently needed for USPS to be financially viable. To its credit, USPS recently began a national initiative to consolidate some of its 3,200 postal retail stations and branches in urban and suburban areas. USPS has begun efforts to consolidate some mail processing operations but has closed only 1 of 400 major mail processing facilities. USPS is realigning city carrier routes to remove excess capacity and improve efficiency, which is expected to save nearly $1 billion annually; has begun to install automated equipment to reduce costly manual sorting of flat-sized mail; and is studying how it could shift to 5-day delivery and the potential savings.
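As a rough consistency check of the volume projection cited above (ours, not USPS's or GAO's), the rounded figures of "about 28 billion" fewer pieces and "about 175 billion" remaining pieces imply a fiscal year 2008 base of roughly 203 billion pieces and a decline of roughly 13.8 percent, in line with the reported 13.7 percent once rounding is taken into account:

```python
# Arithmetic check of the projected fiscal year 2009 mail volume decline.
# The volume figures come from the testimony; the small gap between the
# computed ~13.8 percent and the reported 13.7 percent reflects rounding
# of the "about 28 billion" and "about 175 billion" figures.
projected_decline = 28e9          # pieces, "about 28 billion"
projected_fy2009_volume = 175e9   # pieces, "about 175 billion"

implied_fy2008_base = projected_fy2009_volume + projected_decline
pct_decline = projected_decline / implied_fy2008_base * 100

print(f"Implied FY2008 base: {implied_fy2008_base / 1e9:.0f} billion pieces")
print(f"Implied decline: {pct_decline:.1f} percent")
```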
Each year, millions of visitors, foreign students, and immigrants come to the United States. A visitor may enter on a legal temporary basis—that is, with an authorized period of admission that expires on a specific date—either with a temporary visa (generally for tourism, business, or work) that the Department of State issues or, in some cases, as a tourist or business visitor who is allowed to enter without a visa. The latter category includes Canadians and qualified visitors from 27 countries who enter under the Visa Waiver Program. A large majority of these visitors depart on time, but others overstay. The term "overstay" is defined as follows: An overstay is an illegal alien who was legally admitted to the United States for a specific authorized period but remained here after that period expired, without obtaining an extension or a change of status or meeting other specific conditions. Overstays who settle here are part of the illegal immigrant population. Although overstays are sometimes referred to as visa overstays, we do not use that term in this report for two reasons. First, many visitors are allowed to enter the United States without visas and to remain for specific periods of time, which they may overstay. Second, a visitor can overstay an authorized period of admission set by a U.S. Department of Homeland Security (DHS) inspector at the border—even though that authorized period may be shorter than the period of the visitor's visa. (For example, a visitor with a 6-month multiple-entry visa from the Department of State might be issued a 6-week period of admission by the DHS inspector and remain here for 7 weeks, thus overstaying.) Viewed in terms of individuals, the overstay process can be summarized as aliens' (1) legally visiting the United States, which for citizens of most nations is preceded by obtaining a passport and a visa and filling out Form I-94 at the U.S. 
border; (2) overstaying for a period that may range from a single day to weeks, months, or years; and, in some cases, (3) terminating their overstay status by exiting the United States or adjusting to legal permanent resident status (that is, obtaining a green card). Most long-term overstays appear to have economic motivations. However, the overstay process can also be viewed in the context of a layered defense for domestic security, supported by agencies such as DHS, the U.S. Department of Justice (DOJ), and the Department of State, among others. Figure 1 illustrates the layered-defense concept and the many interrelated issues that we have analyzed in numerous reports—ranging from the overseas tracking of terrorists to stateside security for critical infrastructure locations. Intelligence, investigation, and information sharing are the key ingredients supporting such a defense. A variety of immigration issues are potentially relevant. Progress in deploying and effectively using watch lists is ongoing. In 2003, we reported that (1) the State Department, "with the help of other agencies, almost doubled the number of names and the amount of information" in its Consular Lookout and Support System but that (2) "the federal watch list environment has been characterized by a proliferation of systems, among which information sharing is occurring in some cases but not in others." Visitor biographical and biometric data are now being checked against selected watch list data, to verify visitors' identity, as part of the new U.S. Visitor and Immigrant Status Indicator Technology program (US-VISIT). Keeping all dangerous persons and potential terrorist suspects from legally entering the United States is difficult because some do not match the expected characteristics of terrorists or suspicious persons. In addition, some—such as citizens of Canada or one of the 27 visa waiver countries—are not required to apply for visas and are not screened by the visa process. 
Terrorists may continue to slip through border defenses, and watch lists have therefore also been used for tracking foreign terrorists within the United States. Overstay tracking—that is, recording visitors’ entries and exits as well as their address information—also logically plays a role. Overstay issues have gained heightened attention because some of the hijackers of September 11, 2001, had overstayed their periods of admission. Form I-94 (shown in appendix I) is the basis of DHS’s long-standing system for tracking overstays. For visitors from most countries, the period of admission is authorized (or set) by a DHS inspector when they enter the United States legally and fill out this form. Each visitor is to give the top half of the form to the inspector and to retain the bottom half, which should be collected when the visitor departs the country. However, two major groups are exempt from filling out Form I-94 when they visit the United States for business or pleasure: Canadian citizens admitted for up to 6 months and Mexican citizens entering the United States with a border crossing card (BCC, illustrated in fig. 2) at the southwestern border who intend to limit their stay to less than 72 hours and intend not to travel beyond a set perimeter, generally 25 miles from the border (see app. II, fig. 6). During fiscal years 1999 to 2003, the Department of State issued 6.4 million Mexican BCCs. Because the majority of Canadian and Mexican BCC visits do not require Form I-94, the system based on this form cannot follow them—that is, cannot track them. No data indicate how many overstay. Overstay tracking should be possible for almost all other legal temporary visitors, including visitors from visa waiver countries, because they are required to fill out the form. 
Our objectives in this report are to (1) describe available data on the extent to which overstaying occurs, (2) identify any weaknesses that might limit the utility of DHS's long-standing overstay tracking system, and (3) provide some observations about the potential effect of overstays—as well as limitations of the overstay tracking system—on domestic security. In examining these issues, our main information sources included (1) relevant GAO and other reports, (2) interviews we conducted with officials and staff at DHS and DOJ, and (3) a variety of data, including printouts from DHS's long-standing overstay tracking system (based on Form I-94), data that DHS developed, at our request, from Operation Tarmac (the sweep that identified overstays and other illegal immigrants working at U.S. airports) and other similar operations, and facts about the arrivals, departures, and overstay status of the September 11 hijackers and others involved in terrorist-related activities. We assessed the reliability of these data sources by reviewing existing information about the data, interviewing agency officials knowledgeable about the data and the process by which they were collected, and reviewing the data for reasonableness and corroboration with other independent data sources. While we found and reported on weaknesses in the data, we determined that the data were sufficiently reliable for the purposes of this report. Our scope did not include (1) aspects of illegal immigration or domestic security unrelated to overstaying or (2) elements of overstay enforcement additional to a system for tracking legal visitors' entries and exits (for example, resource allocation). Our work was conducted in accordance with generally accepted government auditing standards between January 2003 and May 2004, primarily at DHS and DOJ headquarters in Washington, D.C. One visit to the southwest border was made to observe departure procedures. Significant numbers of visitors overstay their authorized periods of admission. 
A January 2003 DHS estimate put the January 2000 resident overstay population at one-third of 7 million illegal immigrants, or 2.3 million. While the method DHS used to obtain this figure is complex, indirect, and marked by potential weaknesses, we identified three small-sample alternative data points that, taken together, provide some evidence that, in all likelihood, a substantial proportion of illegal immigrants are overstays. These three alternative data sources on illegal immigrants indicate varying—but uniformly substantial—percentages of overstays: 31 percent, 27 percent, and 57 percent. At the same time, we found that DHS’s estimate excludes some overstay groups and may thus understate the extent of the total overstay problem. The main overstay groups omitted from the DHS overstay estimate of 2.3 million are long-term Mexican and Canadian overstays who were not required to fill out Form I-94 at entry and short-term overstays, whether from Mexico, Canada, or other countries. Short-term overstays cannot be ignored because, as we explain in a later section, some terrorists or terrorist supporters are in this group. DHS’s overstay estimate for January 2000 (that is, that overstays represent one-third of the illegal immigrant population, or 2.3 million residents) was based, in part, on a projection forward of overstay rates for 1992. Earlier, we identified challenges and potential weaknesses in Immigration and Naturalization Service (INS) procedures used in estimating these overstays (including an incorrect INS formula). Therefore, we sought alternative, and more current, data sources. The first alternative data source we identified is a survey that DHS and the National Institute of Child Health and Human Development sponsored, in partnership with other federal agencies. 
As reported in 2002, the survey (1) sampled more than 1,000 adult green-card holders, (2) asked them about their prior immigration status, and (3) found that more than 300 self-reported earlier illegal status. The computer run we requested showed that 31 percent of these former illegals said they had been overstays. (Most others reported prior illegal border crossing.) A second alternative source was a set of data we obtained from Operation Tarmac and other recent sweeps of employees who, in the course of their work, had access to sensitive areas in airports, other critical infrastructures, or special events (for example, the Super Bowl). Although investigators conducting these operations collected information on overstaying, they had not systematically recorded data for overstays versus illegal border crossers or other categories of illegal immigrants. We requested that DHS manually review case files for those arrested and identify the number who were overstays. DHS reported to us on a total of 917 arrests, taken from operations at a sample of Operation Tarmac airports and at all other critical infrastructure and special-event locations investigated. As we detail later in this report, 246 of the 917 cases—or 27 percent—were categorized as overstays. Another source we obtained from DHS was similar information on an operation identifying illegal alien employees at a retail chain (unrelated to terrorist concerns). In this operation, 138 of 243 cases—that is, 57 percent—were identified as overstays. The percentages above do not represent the illegal population but, as indicated above, do provide some evidence that in all likelihood, a substantial proportion of illegal immigrants are overstays. The DHS overstay estimate appears to be an understatement for two main reasons. The first is that it is based on data from I-94 forms—and many Mexican and Canadian visitors are not required to complete this form. 
The second is that, by definition, the population of illegal immigrants does not include many short-term overstays. Thus, DHS's 2.3 million estimate excludes the following overstay groups:
1. Mexican and Canadian visitors who did not fill out Form I-94 and who overstayed and settled here. (Although these long-term overstay settlers are included in DHS's estimate of 7 million illegal immigrants, they are, erroneously, categorized as illegal immigrants other than overstays. This is because DHS used I-94 data to estimate overstays.)
2. Visitors filling out Form I-94 who overstay for short periods of time.
3. Mexican and Canadian visitors who do not fill out Form I-94 and who overstay for short periods of time.
The excluded groups are illustrated in figure 3, together with the overstay group that is covered. As a result, an overstay settler group is omitted from DHS's overstay estimate (that is, from DHS's estimate that one-third of the illegal immigrant population, or 2.3 million, are overstays). The Mexican and Canadian overstay group at issue was apparently included in the 7 million—but not the 2.3 million—estimate. It is not clear whether this issue may affect some of the three "rough-check" comparison figures cited above. DHS's procedures for arriving at the estimate of 7 million are heavily based on the 2000 census and include those who settled here, were residing here at the time of the 2000 census, and were included either in the actual census count or in corrections for possible undercounts. The census is not likely to include aliens illegally present for relatively short periods of time, in part because such persons may not identify the United States as their principal place of residence. Consistent with this, when using I-94 data to estimate overstays, DHS specifically excluded short-term overstays. This is important because overstaying is not limited to those who illegally immigrate here and intend to remain for years. 
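The overstay percentages cited above from the critical infrastructure sweeps and the retail chain operation follow directly from the case counts; a small sketch (illustrative only, not part of the report's methodology) reproduces the arithmetic:

```python
# Recompute the overstay percentages from the case counts cited above.
# The counts come from the report; the calculation is just a consistency check.
operations = {
    "Operation Tarmac and related sweeps": (246, 917),
    "retail chain operation": (138, 243),
}
for name, (overstay_cases, total_cases) in operations.items():
    pct = round(overstay_cases / total_cases * 100)
    print(f"{name}: {overstay_cases} of {total_cases} cases = {pct} percent")
```

Both computed values (27 percent and 57 percent) match the figures reported in the text.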
Many others overstay for only a few days, weeks, or months, including those discussed above who are—and are not—required to fill out Form I-94. Finally, we note two possible further limitations: DHS overstay estimates do not address either the issue of "prior overstays" or possible trends in overstaying.
Prior overstays. As indicated by the survey cited above (our first alternative data source), a portion of overstays who settle here eventually obtain legal status. Many prior overstays appear to be residing legally in the United States now, and thus the "flow" of overstays who settle here may be larger than a net estimate of the overstay population at a single point in time implies.
Trends in overstaying. DHS estimated overstays using I-94 data from the early 1990s, and it projected those estimates forward to January 2000. Without independent overstay estimates for two points in time, a reliable assessment of change is not possible. DHS has not published estimates for more recent dates—that is, since January 2000. Overstay trend estimates would be of interest but are not available. Visits to the United States decreased somewhat in 2002. Data from the Bureau of the Census indicate that the overall trend of the foreign-born population (of whom the majority are legal) was to steadily increase in size from 1990 through 2003. (See fig. 4.)
We recognize that an overstay tracking system is only one ingredient in effective overstay control and enforcement. However, we believe it is a crucial ingredient. Without an adequate overstay tracking system, an accurate list of overstays cannot be generated for control purposes. In earlier reports, we identified a variety of weaknesses in the I-94 overstay tracking system. DHS has begun to phase in US-VISIT—a new program for collecting, maintaining, and sharing information on foreign nationals. We discussed above one weakness in DHS's Form I-94 overstay tracking system—its limited coverage of Mexican and Canadian visitors. 
In our previous work, we have pointed to at least three other weaknesses:
Failure to update the visitor's authorized period of admission or immigration status. Last year, we reported that DHS does not "consistently enter change of status data . . . [or] integrate these data with those for entry and departure." DHS told us that linkage to obtain updated information may occur for an individual, as when a consular official updates information on an earlier period of admission for someone seeking a new visa, but DHS acknowledged that linkage cannot be achieved broadly to yield an accurate list of visitors who overstayed.
Lack of reliable address information and inability to locate visitors. Some visitors do not fill in destination address information on Form I-94 or they do so inadequately. A related issue that we reported in 2002 is DHS's inability to obtain updated address information during each visitor's stay. Such information could be a valuable addition to the arrival, departure, and destination address information that is collected.
Missing departure forms. We reported in 1995 that "airlines are responsible for collecting . . . departure forms when visitors leave . . . . But for some visitors who may have actually left the United States [there is no] record of the departures." DHS acknowledges that this is still a concern, the situation is analogous for cruise lines, and noncollection is a larger problem for land exits.
Our recent work has also drawn attention to identity fraud, demonstrating how persons presenting fraudulent documents (bearing a name other than their own) to DHS inspectors could enter the United States. Visitors whose fraudulent documents pass inspection could record a name other than their own on Form I-94. In our current work, we have identified two further weaknesses in the overstay tracking system. One weakness is the inability to match some departure forms back to corresponding arrival forms. 
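The matching at issue can be illustrated with a minimal sketch (ours, not DHS's actual system): the arrival and departure halves of Form I-94 carry the same printed serial number, and an arrival with no matching departure record ends up flagged, whether the visitor truly overstayed or the departure half was simply lost or never collected.

```python
# Minimal sketch (not DHS's actual system) of serial-number matching between
# the arrival and departure halves of Form I-94. Any arrival with no matching
# departure record is flagged as an "apparent overstay" -- even when the
# visitor actually departed but the departure half was never collected.
arrival_forms = {"A100", "A101", "A102"}   # serial numbers recorded at entry
departure_forms = {"A100"}                 # serial numbers collected at exit
# A101: visitor departed, but the form was never collected (false positive)
# A102: visitor genuinely never departed (true overstay)

apparent_overstays = sorted(arrival_forms - departure_forms)
print("Apparent overstays:", apparent_overstays)  # A101 and A102 both appear
```

Because the false positive and the true overstay are indistinguishable in such a match, a list built this way overstates overstaying, which is consistent with DHS's description of its "apparent overstays" lists as inflated.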
DHS has suggested that when a visitor loses the original departure form, matching is less certain because it can no longer be based on identical numbers printed on the top and bottom halves of the original form. The other weakness is that at land ports (and possibly airports and seaports), the collection of departure forms is vulnerable to manipulation—in other words, visitors could make it appear that they had left when they had not. To illustrate, on bridges where toll collectors accept Form I-94 at the southwestern border, a person departing the United States by land could hand in someone else's form. Because of these weaknesses, DHS has no accurate list of overstays to send to consular officials or DHS inspectors. This limits DHS's ability to consider past overstaying when issuing new visas or allowing visitors to reenter. More generally, the lack of an accurate list limits prevention and enforcement. For example, accurate data on overstays and other visitors might help define patterns for better differentiating visa applicants with higher overstay risk. And without an accurate list and updated addresses, it is not possible to identify and locate new overstays to remind them of penalties for not departing. Such efforts fall under the category of interior enforcement. As we previously reported, "historically . . . over five times more resources in terms of staff and budget [have been devoted to] border enforcement than . . . interior enforcement." Despite large numbers of overstays, current efforts to deport them are generally limited to (1) criminals and smugglers, (2) employees identified as illegal workers at airports and other critical infrastructure locations, and (3) persons included in special control efforts such as the 2003 domestic registration (or "call in" component) of the NSEERS program (the National Security Entry and Exit Registration System). DHS statisticians told us that for fiscal year 2002, the risk of arrest for all overstays was less than 2 percent. 
For most other overstays (that is, for persons not in targeted groups), the risk of deportation is considerably lower. DHS told us that because of limited resources, it has focused enforcement on high-priority illegal alien groups. The effect that weaknesses in the overstay tracking system have on overstay data is illustrated by the inaccurate—and, according to DHS, inflated—lists of what it has termed "apparent overstays." For fiscal year 2001 arrivals, the system yielded a list of 6.5 million "apparent overstays" for which DHS had no departure record that matched the arrivals and an additional list of a half million other visits that ended after the visitors' initial periods of admission expired. (For data on specific countries or country groups, see app. V, table 5.) However, DHS has no way of knowing which of these are real cases of overstaying and which are false, because in all likelihood, some of these visitors departed or legally changed their status—or legally extended their periods of admission. In the past, we made a number of recommendations that directly or indirectly addressed some of these system weaknesses, but these recommendations have not been implemented or have been only partially implemented. (Of these, four key recommendations are reproduced in app. VI.) Two recent DHS programs are aimed at remedying some of the weaknesses we have discussed. First, as part of NSEERS, an effort is being made to register certain visitors at points of entry (POE) to the United States and to have government inspectors register departures. But that POE effort does not cover most visitors and does not involve inspectors' actually observing departures. Second, US-VISIT is DHS's new program for collecting, maintaining, and sharing information on foreign nationals who enter the United States. 
Among other things, the first phase of US-VISIT is designed to collect electronic entry-exit passenger and crew manifest data and to match entry and exit data to each other (based on passengers’ biographic information) and to other information, thus identifying overstays, and use biometrics to verify foreign visitors’ identities, upon entry, at 115 airports and 14 seaports of entry. We have reported elsewhere that this first phase is operational but that improvements are needed. Three additional phases are planned that would extend US-VISIT’s identity-verification capabilities—initially, to high-traffic land borders and, eventually, to all remaining ports of entry—as well as adding capabilities, such as that of processing machine-readable documents that use biometric identifiers. DHS told us that a current goal is to incorporate NSEERS POE into the US-VISIT program. Successfully designing—and implementing—US-VISIT involves a number of challenges. For example, DHS concurred with recommendations in our 2003 report, including, among other things, that DHS develop key acquisition management controls. As we have reported elsewhere, US-VISIT has not yet developed a strategy for defining and implementing these controls or a time period for doing so. Another crucial issue is whether US-VISIT can avoid the weaknesses associated with the Form I-94 system. Some challenges—such as implementing an appropriate system at land borders, obtaining accurate addresses, verifying the identity of all entering visitors, and otherwise ensuring the integrity of the inspections process—may be very difficult to overcome. While the design and implementation of US-VISIT face a number of challenges, we believe that it might be useful to determine whether the new program successfully avoids specific weaknesses associated with the long-standing I-94 system. 
Together with other efforts, this might help identify some difficult challenges in advance and enhance US-VISIT’s chances for eventual success as an overstay tracking system. Weaknesses in overstay tracking may encourage visitors and potential terrorists who legally enter the United States to overstay. Once here, terrorists may overstay or use other stratagems to extend their stay—such as exiting and reentering (to obtain a new authorized period of admission) or applying for a change of status. As shown in table 1, of the six hijackers who actually flew the planes on September 11 or were apparent leaders, three were out of status on or before September 11—two because of prior short-term overstaying. Additionally, a number of current or prior overstays were arrested after September 11 on charges related to terrorism. For example: Two overstays pled guilty to separate instances of identity document fraud and were connected to different hijackers in the September 11 group. They were current, short-term overstays when the identity document fraud occurred. Four others with a history of overstaying (and variously connected to the September 11 hijackers, the Taliban, and Hezbollah terrorists) pled guilty to document fraud or weapons charges or were convicted of money laundering. One of these was also convicted of providing Hezbollah material support, including night vision devices and other weapons-related technology. Last, the gunman who fired on several people at the El Al ticket counter of Los Angeles International Airport was identified (by DHS) as a prior overstay. Terrorists who enter as legal visitors are hidden within the much larger populations of all legal visitors, overstays, and other illegals such as border crossers. Improved overstay tracking could help counterterrorism investigators and prosecutors locate suspicious individuals placed on watch lists after they entered the country. 
The director of the Foreign Terrorist Tracking Task Force told us that he considered overstay tracking data helpful. For example, these data—together with additional analysis—can be important in quickly and efficiently determining whether suspected terrorists were in the United States at specific times. As we reported in 2003, between “September 11 and November 9, 2001, . . . INS compiled a list of aliens whose characteristics were similar to those of the hijackers” in types of visas, countries issuing their passports, and dates of entry into the United States. While the list of aliens was part of an effort to identify and locate specific persons for interviews, it contained duplicate names and data entry errors. In other words, poor data hampered the government’s efforts to obtain information in the wake of a national emergency, and it was necessary to turn to private sector information. Reporting earlier that INS data “could not be fully relied on to locate many aliens who were of interest to the United States,” we had indicated that the Form I-94 system is relevant, stressing the need for improved change-of-address notification requirements. INS generally concurred with our recommendations. DHS has declared that combating fraudulent employment at critical infrastructures, such as airports, is a priority for domestic security. DHS has ongoing efforts to identify illegal workers in jobs at various infrastructures (for example, airport workers with security badges). These sweeps are thought to reduce the nation’s vulnerability to terrorism, because, as experts have told us, (1) security badges issued on the basis of fraudulent IDs constitute security breaches, and (2) overstays and other illegal aliens working in such facilities might be hesitant to report suspicious activities for fear of drawing authorities’ attention to themselves or they might be vulnerable to compromise. 
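The use of entry-exit data described above—determining whether a given individual was in the United States during a window of interest—amounts to a presence query over recorded visits. A minimal sketch follows, with a hypothetical record layout (not an actual DHS system); a missing departure record is modeled as an open-ended visit:

```python
from datetime import date

# Hypothetical entry/exit records for one traveler. An exit of None
# models the I-94 weakness: no departure record was ever collected.
visits = [
    (date(2001, 1, 10), date(2001, 2, 2)),
    (date(2001, 8, 15), None),
]

def possibly_present(visits, day):
    """True if any recorded visit could cover `day`.

    A missing exit record is ambiguous: the person may have overstayed,
    or the departure form may simply have been lost, so the visit must
    be treated as possibly still open.
    """
    for entry, exit_ in visits:
        if entry <= day and (exit_ is None or day <= exit_):
            return True
    return False

print(possibly_present(visits, date(2001, 9, 11)))  # True (open-ended visit)
print(possibly_present(visits, date(2001, 5, 1)))   # False
```

The ambiguity in the first branch—“possibly present” rather than “present”—is exactly why investigators could not rely on I-94 data alone and had to combine it with additional analysis.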
Operation Tarmac is a national multiagency initiative focused on screening employees working in secure areas of U.S. airports. Post–September 11 investigations of passenger-screening companies and other secure-area employers revealed substantial numbers of unauthorized foreign national employees. As a result, further sweeps began in 2001 with Washington, D.C., and Salt Lake City (in preparation for the Winter Olympics); these eventually became known as Operation Tarmac and are still ongoing. As of April 2004, DHS reported that 195 airports had been investigated and 5,877 businesses had been audited. Operation Tarmac investigators had checked the I-9 Employment Eligibility Verification forms or badging office records (or both) for about 385,000 employees and had found 4,918 unauthorized workers. As we discussed earlier in this report, when we obtained data on the specific immigration status of workers who were arrested or scheduled for deportation at 26 Operation Tarmac airports, we found that a substantial number were overstays (see table 2). Overstays had fraudulently gained access to the secure areas of all but one of the 26 airports reviewed. Of 607 unauthorized workers arrested at these airports, 182, or 30 percent, were overstays. Of these overstays, 19 percent were Mexican nationals and 38 percent were from other Latin American countries. A total of 10 unauthorized airport workers were arrested from special interest (NSEERS) countries, 5 of whom were overstays. (See app. VII for more complete Operation Tarmac nationality data.) The illegal immigrant workers with access to secure airport areas were employed by airlines (for example, at Washington Dulles International Airport and Washington Reagan National Airport, these included American, Atlantic Coast, Delta, Northwest, and United Airlines, as well as SwissAir and British Airways) and by a variety of other companies (for example, Federal Express and Ogden Services). 
Job descriptions included, among others, aircraft maintenance technician, airline agent, airline cabin service attendant, airplane fueler, baggage handler, cargo operations manager, electrician, janitorial supervisor, member of a cleaning crew, predeparture screener, ramp agent, and skycap. One overstay was employed in an airport badging office. Without fraud or counterfeit documents, illegal workers would not have been able to obtain these jobs and badges, allowing them access to secure areas. In the large majority of these cases, illegal immigrants had misused Social Security numbers and identity documents to illegally obtain airport jobs and security badges. A much smaller number of airport employees had misrepresented their criminal histories in order to obtain their jobs and badges. One DHS official emphasized that these were all serious security breaches because there was no way to know who these people actually were. Moreover, another DHS official told us that Operation Tarmac is likely not to have identified all illegal aliens working in secure areas of airports. Of the 4,918 unauthorized workers identified, 1,054 have been arrested, and the 3,864 others have left their airport jobs and eluded arrest. Sweeps similar to Operation Tarmac were subsequently initiated for a broad range of critical infrastructure components and special events, such as the Super Bowl (see table 3). The employees checked in these sweeps ranged from workers at nuclear power plants, military bases, pipelines, and special national events such as the Super Bowl to security officers for the Federal Protective Services, which guards federal buildings, and workers at sensitive national landmarks. Illegal immigrants committing identity fraud were found to be working at every one of these locations. Overstays were found to be working at two-thirds of these facilities and represented 20.7 percent of the unauthorized workers found by investigators of critical infrastructure sites. 
We asked DHS Immigration and Customs Enforcement (ICE) officials why there were more than 4,900 security breaches at airports, most of which involved illegal aliens. They stated that airport badging authorities did not routinely make rigorous checks. They stated that while badging authorities were able to check the FBI databases for criminal histories and terrorists on watch lists, they had no protocol for checking Social Security numbers and only a limited ability to verify immigration status. In contrast, Operation Tarmac and related critical infrastructure sweeps were joint federal operations that were able to do more rigorous, but still limited, checks because they had full access to DHS and Social Security Administration (SSA) data. We asked if this problem had been corrected as a result of Operation Tarmac. DHS officials stated that it had not. They stated that airport badging authorities still could not make these positive identification checks. They stated that, in effect, the airports knew who was not working there (that is, airports had checked for known terrorists and criminals) but not who was. Officials we interviewed from the Department of Transportation’s (DOT) Inspector General’s office and the U.S. Attorney’s office have also expressed their concern about this problem. With respect to the other security breaches at critical infrastructure sites, DHS officials told us that in many cases, the situation was similar to that described for airport badging authorities. Last, officials from the Transportation Security Administration (TSA) recently testified before the House Aviation subcommittee that persons employed at airports will not be subject to the CAPPS II screening that airline passengers will undergo—and are not now subject to physical screening—because TSA relies on employees’ rigorous background checks instead. 
It differs by airport, but the legislative requirement is that workers must be screened, and TSA policy is that screening can consist of background checks and credentialing procedures rather than physical screening. Operation Tarmac found airport background checks to have failed more than 4,900 times. Not all were overstays, but overstays do represent a substantial portion of the cases in which badged, unauthorized employees were identified. In the area of illegal immigration, reliable information remains elusive. Yet it is clear that the level of overstaying is significant and that the Form I-94 overstay tracking system contains important weaknesses. While we cannot quantify the risk to domestic security, we believe that efforts to ensure domestic security are affected to some degree by the level of overstaying that apparently occurs and by limitations in overstay tracking. This is illustrated by the employment of overstays at critical infrastructure locations. DHS recently initiated two efforts to develop improved systems, but challenges remain. Designing and implementing a viable and effective overstay tracking system is an important priority, not only because of its potential consequences for policy effectiveness but also because it could contribute to broader overstay control and enforcement efforts—and because it could enhance a layered defense. We provided a draft of this report to the Department of Homeland Security and the Department of Justice. Both agencies informed us that they had no comments. As agreed with your office, we plan no distribution of this report until 21 days after its issue date, unless you publicly announce its contents earlier. We will then send copies to the Secretary of the Department of Homeland Security, the Attorney General, appropriate congressional committees, and others who are interested. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff would like to discuss any of the issues we present here, please call me at 202-512-2700 or Judith Droitcour, who served as project director on this study, at 202-512-9145. Other individuals who made key contributions to this report are Daniel Rodriguez, Eric M. Larson, Andrea Miller, and Mona Sehgal. Mexicans entering the United States with a border crossing card at the southwestern border who intend to limit their stay to 72 hours or less are not required to obtain a visa or to complete Form I-94 if they limit their travel to within a perimeter that is generally 25 miles from the border but that may extend up to 75 miles in Arizona (illustrated in fig. 6). DHS’s estimate of 7 million illegal U.S. residents as of January 2000 is based on the separate calculation of two estimates: (1) about 5.5 million illegal residents who arrived in the United States between 1990 and 2000 and were residing here, in illegal status, as of January 2000 and (2) about 1.5 million other illegal residents who arrived before 1990 and were here in illegal status as of January 2000. Because this estimate focuses on resident aliens, many aliens here for short periods (for example, 3 to 9 months) were likely to be excluded. The first component estimate—the estimate of 5.5 million here illegally as of January 2000 who arrived between 1990 and 2000—was derived through a “residual,” or subtraction, method. Using data from the decennial census (long form), DHS estimated the number of total foreign-born noncitizens residing here as of January 2000 who had arrived in each year from 1990 up to 2000. From this, DHS subtracted an estimate of legally resident foreign-born noncitizens, based on annual DHS administrative data—for example, the number of green cards issued each year, adjusted downward to account for deaths and return migration. 
Also subtracted were an estimate of the number of (1) residents who had applied for and would eventually receive legal status (estimated at 200,000) and (2) asylees, parolees, and persons with temporary protected status (TPS) who had received work authorization but not permanent resident status (377,000). These subtraction procedures yielded a “residual” estimate of illegal residents. The actual subtraction was conducted separately for each year. In this estimation process, DHS adjusted for various factors, such as census undercounts. According to the INS paper presenting the estimate of 7 million, the second component in this estimate—the estimate of 1.5 million illegal residents who had arrived before 1990—was derived from an estimate of 3.5 million illegal residents here in January 1990 and DHS’s estimate that, of the 3.5 million, 1.5 million had survived and remained here until 2000—without adjusting to legal status. To derive estimates for various countries or regions of origin (for example, Mexico or Asia), these procedures were carried out separately for 75 countries and for each region of origin. The component estimate of 5.5 million illegal residents who arrived in the past decade is based on residual estimation, which is a generally accepted demographic procedure; DHS also attempts to compensate for potential weaknesses in the source data—for example, compensating for some illegal immigrants’ avoidance of the decennial census. The other (second) component estimate—1.5 million illegal residents who had arrived before 1990—is based, in part, on the estimate of 3.5 million illegal residents (as of 1990). The methods and procedures used to make the 3.5 million estimate have not been described in any DHS publication. However, the 3.5 million estimate can be compared with another published estimate of the illegal immigrant population, derived from residual-based estimates, calculated with data from the 1990 census: 3.5 million for 1990. 
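In outline, the residual method described above subtracts estimates of the legally resident foreign-born population from the census count of foreign-born noncitizens, one arrival-year cohort at a time. A toy version with invented round numbers (not DHS’s actual inputs, which also adjust for census undercount and other factors):

```python
# Invented counts for a single arrival-year cohort, in thousands.
census_foreign_born_noncitizens = 900  # from the 2000 census (long form)
legal_residents = 550                  # from admin data (e.g., green cards),
                                       # adjusted for deaths/return migration
pending_legalizations = 20             # applied for, would receive status
asylees_parolees_tps = 30              # work-authorized, not permanent

# The "residual" is what remains after subtracting all legal categories.
residual = (census_foreign_born_noncitizens
            - legal_residents
            - pending_legalizations
            - asylees_parolees_tps)
print(residual)  # 300 (thousand) estimated illegal residents in this cohort
```

Repeating this subtraction for each arrival year from 1990 to 2000 and summing across cohorts is what produced the 5.5 million component of DHS’s 7 million estimate.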
We have not evaluated this other published estimate; however, it is based on the generally accepted residual approach and was prepared by an expert in immigration statistics. DHS estimates that overstays constituted one-third of 7 million illegal immigrants residing in the United States as of January 2000. DHS’s one-third estimate is the result of a series of estimation procedures, described in this appendix. In 1994, INS published estimates of overstays for October 1992, using data from the Form I-94 overstay tracking system we describe in this report. The overstay tracking system generally does not include Mexican visitors entering the United States with a border crossing card (BCC) at the Southwest land border who state that their intention is to limit their stay to 72 hours and not to travel beyond a set perimeter, generally 25 miles from the border. (See fig. 6.) The overstay tracking system also does not monitor Canadians admitted for up to 6 months, and there is no perimeter restriction for them. Such visitors who overstayed would not be included in an overstay estimate based on the I-94 data. Thus, from the very start, INS excluded some Mexican and Canadian overstays from its estimate. In using the I-94 data, INS recognized that many departure forms were missing (even when visitors had actually departed). Therefore, INS devised a way to estimate this missing-data factor, which was termed “system error.” As we explained in an earlier report, INS first identified as “index” countries those whose citizens were very unlikely to immigrate illegally to the United States—Sweden and Switzerland, among others. Then, using I-94 index-country data for a specific time period (that is, all index-country arrivals in a specific year, checked about 9 months after their initially required departure date), INS calculated the percentage of visitors from each index country who were “apparent overstays”—visitors for whom no matching record of departure could be found. 
Averaging this percentage across 12 index countries yielded a percentage figure (for example, 8 percent). Assuming that virtually no visitors from those 12 countries actually overstayed, INS took its calculated percentage (for example, 8 percent), plus a small margin of error, to represent a global level of “system error.” INS then calculated the percentage of apparent overstays for each nonindex country—Korea, Mexico, Poland, and so on—as of October 1992. From a particular country’s percentage of apparent overstays (for example, 12 percent), INS subtracted its global system error estimate (10 percent), yielding an estimated overstay rate for that country (for example, 2 percent). Any overstaying above the global “system error” was taken as an overstay flow estimate. Multiplied by the number of arrivals from a specific country in the designated year, this yielded the number of new overstays from that country. Estimation was limited to overstays remaining here for about a year or more. These data and procedures are the basis for all subsequent DHS overstay estimates. In our earlier report, we indicated a number of reasons why these overstay estimation procedures needed improvement. INS combined these estimates with estimates for total illegal residents—to address the question of what percentage of total illegal alien residents overstays represented as of October 1992. Importantly, this step was carried out separately for 99 countries, with the remaining countries grouped together in their respective continents of origin—for example, the rest of Asia. INS then applied the country-by-country October 1992 overstay percentages to project a later, October 1996 overstay estimate. Assuming no change between 1992 and 1996 in percentages of illegals estimated to be overstays in each country, INS multiplied the 1992 overstay percentages by newly estimated (1996) per-country estimates of numbers of total illegal immigrants. 
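Using the report’s illustrative figures, the system-error correction and overstay-flow arithmetic just described can be restated as a short calculation (the index-country rates and arrival counts below are invented for illustration):

```python
# "Apparent overstay" rates for index countries whose citizens are assumed
# virtually never to overstay; their unmatched departures therefore measure
# system error (lost or uncollected forms), not real overstaying.
index_rates = [0.07, 0.08, 0.09, 0.08]  # hypothetical; INS used 12 countries
margin = 0.02                           # small margin of error

system_error = sum(index_rates) / len(index_rates) + margin  # 0.08 + 0.02

def new_overstays(apparent_rate, arrivals):
    """Estimated overstay flow for one nonindex country in one year."""
    # Only apparent overstaying above the global system error counts.
    overstay_rate = max(apparent_rate - system_error, 0.0)
    return overstay_rate * arrivals

# Nonindex country with 12% apparent overstays among 500,000 arrivals:
# (0.12 - 0.10) * 500,000, i.e., roughly 10,000 new long-term overstays.
print(round(new_overstays(0.12, 500_000)))
```

Note that the subtraction is applied globally: a single system-error figure, derived from the index countries, is subtracted from every nonindex country’s apparent-overstay rate before multiplying by that country’s arrivals.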
The same procedures (based on 1992 data) were carried out to yield country-by-country overstay estimates for January 2000. Summing the results across all country categories yielded an estimate of overstays— 2.3 million, or one-third of all illegal aliens residing here as of January 2000. Although we believe that the DHS estimation procedures contain several weak points, we were able to identify or develop three small-sample comparisons of illegal immigrants, as detailed in this report. These three “rough checks” indicated varying results—that 27 percent, 31 percent, and 57 percent of the illegal immigrants “sampled” were overstays. But taken together, they clearly suggest that some substantial percentage of illegal residents are overstays. With respect to possible trends, we do not believe that a reliable estimate of change in the overstay population can be based on DHS estimates. The estimate of overstays for 2000 is not based on any new overstay data; rather, it is based on (1) new data on the “total illegal population” as of January 2000 and (2) old data on what percentage of immigrants from each country or continent of origin are overstays. This means that we cannot identify two independent overstay estimates for the early 1990s and 2000. The weaknesses of DHS’s I-94 data system—that is, the reasons why false cases of overstaying are mixed with real cases—are discussed in the report. Tables 4 and 5 illustrate the resulting data. The following four prior recommendations to DHS concern overstay tracking, data, or estimates. 1. We recommended that to improve the collection of departure forms, the Commissioner of the Immigration and Naturalization Service should ensure that INS examine the quality control of the Nonimmigrant Information System database and determine why departure forms are not being recorded. 
This could involve, for example, examining a sample of the passenger manifest lists of flights with foreign destinations to determine the extent of airline compliance and, possibly, developing penalties to be levied on airlines for noncompliance. Discovery of the incidence of various causes of departure loss could allow a more precise estimation of their occurrence and the development of possible remedies. (U.S. General Accounting Office, Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates, GAO/PEMD-93-25 (Washington, D.C.: Aug. 5, 1993).) INS agreed in principle with our recommendation and fielded a pilot project to study why departure forms were not being collected. In 2002, INS discontinued the pilot because it was not yielding results INS had hoped for and because INS was in the process of designing an automated entry-exit system, which is now part of DHS’s US-VISIT program. If successfully implemented, US-VISIT could help identify overstays. We are monitoring the design and implementation of the US-VISIT program. 2. We recommended that the Commissioner of INS have new overstay estimates prepared for air arrivals from all countries, using improved estimation procedures such as those discussed in the report, including, as appropriate, the potential improvements suggested by INS or by reviewers of the report. (U.S. General Accounting Office, Illegal Immigration: INS Overstay Estimation Methods Need Improvement, GAO/PEMD-95-20 (Washington, D.C.: Sept. 26, 1995).) INS initially concurred and produced revised estimates as part of its comments on our report. However, in our response to INS’s comments, we described the new estimates as a “first step” and identified concerns about INS’s methodological procedures that we said needed further study. Recently, DHS told us that it has not further studied making overstay estimates for air arrivals. 
Valid estimation of overstays is extremely difficult, given current tracking system weaknesses. 3. We recommended that to promote compliance with the change of address notification requirements through publicity and enforcement and to improve the reliability of its alien address data, the Attorney General should direct the INS Commissioner to identify and implement an effective means of publicizing the change of address notification requirement nationwide. INS should make sure, in its publicity effort, that aliens are given information on how to comply with this requirement, including information on where change of address forms and other information may be available. (U.S. General Accounting Office, Homeland Security: INS Cannot Locate Many Aliens Because It Lacks Reliable Address Information, GAO-03-188 (Washington, D.C.: Nov. 21, 2002).) DHS concurred with this recommendation and has identified it as a long-term strategy that will require 2 years to fully implement. Since we made this recommendation less than 2 years ago, DHS has not had sufficient time to implement it fully. 4. We recommended that to provide better information on H-1B workers and their status changes, the Secretary of DHS take action to ensure that information on prior visa status and occupations for permanent residents and other employment-related visa holders is consistently entered into current tracking systems and that such information become integrated with entry and departure information when planned tracking systems are complete. (U.S. General Accounting Office, H-1B Foreign Workers: Better Tracking Needed to Help Determine H-1B Program’s Effects on U.S. Workforce, GAO-03-883 (Washington, D.C.: Sept. 10, 2003).) DHS concurred with this recommendation. Sufficient time has not elapsed for DHS to implement this recommendation.

Appendix VII: Data from Operation Tarmac and Other Operations

Nationality data are not available. 
Refers to multiple locations; landmark names are not specified for security reasons.

The references in this appendix are full bibliographic citations keyed to the GAO report numbers listed in figure 1.

Border Security: New Policies and Increased Interagency Coordination Needed to Improve Visa Process. GAO-03-1013T. Washington, D.C.: July 15, 2003.
Border Security: New Policies and Procedures Are Needed to Fill Gaps in the Visa Revocation Process. GAO-03-798. Washington, D.C.: June 18, 2003.
Information Technology: Terrorist Watch Lists Should Be Consolidated to Promote Better Integration and Sharing. GAO-03-322. Washington, D.C.: April 15, 2003.
Weaknesses in Screening Entrants into the United States. GAO-03-438T. Washington, D.C.: January 30, 2003.
Major Management Challenges and Program Risks: Department of State. GAO-03-107. Washington, D.C.: January 1, 2003.
Border Security: Implications of Eliminating the Visa Waiver Program. GAO-03-38. Washington, D.C.: November 22, 2002.
Border Security: Visa Process Should Be Strengthened as an Antiterrorism Tool. GAO-03-132NI. Washington, D.C.: October 21, 2002.
National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002.
Embassy Security: Background Investigations of Foreign Employees. GAO/NSIAD-89-76. Washington, D.C.: January 5, 1989.
Security: Counterfeit Identification Raises Homeland Security Concerns. GAO-04-133T. Washington, D.C.: October 1, 2003.
Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003.
Land Border Ports of Entry: Vulnerabilities and Inefficiencies in the Inspections Process. GAO-03-1084R. Washington, D.C.: August 18, 2003.
Border Security: New Policies and Increased Interagency Coordination Needed to Improve Visa Process. GAO-03-1013T. Washington, D.C.: July 15, 2003.
Counterfeit Documents Used to Enter the United States from Certain Western Hemisphere Countries Not Detected. GAO-03-713T. Washington, D.C.: May 13, 2003.
Border Security: Challenges in Implementing Border Technology. GAO-03-546T. Washington, D.C.: March 12, 2003.
Technology Assessment: Using Biometrics for Border Security. GAO-03-174. Washington, D.C.: November 15, 2002.
H-1B Foreign Workers: Better Tracking Needed to Help Determine H-1B Program’s Effects on U.S. Workforce. GAO-03-883. Washington, D.C.: September 10, 2003.
Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003.
Homeland Security: Justice Department’s Project to Interview Aliens after September 11, 2001. GAO-03-459. Washington, D.C.: April 11, 2003.
Homeland Security: Challenges to Implementation of the Immigration Interior Enforcement Strategy. GAO-03-660T. Washington, D.C.: April 10, 2003.
Homeland Security: INS Cannot Locate Many Aliens Because It Lacks Reliable Address Information. GAO-03-188. Washington, D.C.: November 21, 2002.
Immigration Benefits: Several Factors Impede Timeliness of Application Processing. GAO-01-488. Washington, D.C.: May 4, 2001.
Illegal Immigration: INS Overstay Estimation Methods Need Improvement. GAO/PEMD-95-20. Washington, D.C.: September 26, 1995.
Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates. GAO/PEMD-93-25. Washington, D.C.: August 5, 1993.
Social Security Administration: Actions Taken to Strengthen Procedures for Issuing Social Security Numbers to Noncitizens but Some Weaknesses Remain. GAO-04-12. Washington, D.C.: October 15, 2003.
Social Security Administration: Disclosure Policy for Law Enforcement Allows Information Sharing, but SSA Needs to Ensure Consistent Application. GAO-03-919. Washington, D.C.: September 30, 2003.
Social Security Numbers: Improved SSN Verification and Exchange of States’ Driver Records Would Enhance Identity Verification. GAO-03-920. Washington, D.C.: September 15, 2003.
Security: Counterfeit Identification and Identification Fraud Raise Security Concerns. GAO-03-1147T. Washington, D.C.: September 9, 2003.
Supplemental Security Income: SSA Could Enhance Its Ability to Detect Residency Violations. GAO-03-724. Washington, D.C.: July 29, 2003.
Social Security Numbers: Ensuring the Integrity of the SSN. GAO-03-941T. Washington, D.C.: July 10, 2003.
Identity Fraud: Prevalence and Links to Alien Illegal Activities. GAO-02-830T. Washington, D.C.: June 25, 2002.
Customs and INS: Information on Inspection, Infrastructure, Traffic Flow, and Security Matters at the Detroit Port of Entry. GAO-02-595R. Washington, D.C.: April 22, 2002.
INS Forensic Document Laboratory: Several Factors Impeded Timeliness of Case Processing. GAO-02-410. Washington, D.C.: March 13, 2002.
Identity Theft: Prevalence and Cost Appear to Be Growing. GAO-02-363. Washington, D.C.: March 1, 2002.
Identity Theft: Available Data Indicate Growth in Prevalence and Cost. GAO-02-424T. Washington, D.C.: February 14, 2002.
Illegal Aliens: Fraudulent Documents Undermining the Effectiveness of the Employment Verification System. GAO/T-GGD/HEHS-99-175. Washington, D.C.: July 22, 1999.
Illegal Aliens: Significant Obstacles to Reducing Unauthorized Alien Employment Exist. GAO/GGD-99-33. Washington, D.C.: April 2, 1999.
Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-1165T. Washington, D.C.: September 17, 2003.
Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003.
Transportation Security Research: Coordination Needed in Selecting and Implementing Infrastructure Vulnerability Assessments. GAO-03-502. Washington, D.C.: May 1, 2003.
Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003.
Responses of Federal Agencies and Airports We Surveyed about Access Security Improvements. GAO-01-1069R. Washington, D.C.: August 31, 2001.
Aviation Security: Challenges Exist in Stabilizing and Enhancing Passenger and Baggage Screening Operations. GAO-04-440T. Washington, D.C.: February 12, 2004.
Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 13, 2004.
Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002.
Aviation Security: Information Concerning the Arming of Commercial Pilots. GAO-02-822R. Washington, D.C.: June 28, 2002.
Each year, millions of visitors, foreign students, and immigrants come to the United States. Foreign visitors may enter on a legal temporary basis--that is, with an authorized period of admission that expires on a specific date--either (1) with temporary visas (generally for tourism, business, or work) or, in some cases, (2) as tourists or business visitors who are allowed to enter without visas. (The latter include Canadians and qualified visitors from 27 countries who enter under the visa waiver program.) The majority of visitors who are tracked depart on time, but others overstay--and since September 11, 2001, the question has arisen as to whether overstay issues might have an impact on domestic security. In this report, we (1) describe available data on the extent of overstaying, (2) report on weaknesses in the Department of Homeland Security's long-standing overstay tracking system, and (3) provide some observations on the impact that tracking system weaknesses and significant levels of overstaying may have on domestic security. Significant numbers of foreign visitors overstay their authorized periods of admission.
Based in part on its long-standing I-94 system for tracking arrivals and departures, the Department of Homeland Security (DHS) estimated the overstay population for January 2000 at 2.3 million. But this estimate (1) excludes an unknown number of long-term overstays from Mexico and Canada, and by definition (2) excludes short-term overstays from these and other countries. Because of unresolved weaknesses in DHS's long-standing tracking system (e.g., noncollection of some departure forms), there is no accurate list of overstays. Tracking system weaknesses make it difficult to monitor potentially suspicious aliens who enter the country legally--and limit immigration control options. Post-September 11 operations identified thousands of overstays and other illegal immigrant workers who (despite limited background checks) had obtained critical infrastructure jobs and security badges with access to, for example, airport tarmacs and U.S. military bases. As of April 2004, federal investigators had arrested more than 1,360 illegal workers, while the majority had eluded apprehension. Together with other improvements, better information on overstays might contribute to a layered national defense that is better able to counter threats from foreign terrorists. A more comprehensive system, US-VISIT, the U.S. Visitor and Immigrant Status Indicator Technology, is being phased in. The design and implementation of US-VISIT, however, face a number of challenges. It is important that this new program avoid specific weaknesses associated with the long-standing system. Checking for these weaknesses might help identify difficult challenges in advance and--together with other efforts--enhance US-VISIT's chances for eventual success as a tracking system.
Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather 3 or more days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position relative to the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth’s surface twice a day. Currently, there are two operational POES satellites and two operational DMSP satellites that are positioned so that they can observe the earth in early morning, midmorning, and early afternoon polar orbits. Together, they ensure that, for any region of the earth, the data provided to users are generally no more than 6 hours old. Figure 1 illustrates the current operational polar satellite configuration.
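As a rough check on the orbit figures above, a satellite completing about 14 orbits a day implies an orbital period of a little over 100 minutes. A minimal sketch of the arithmetic (illustrative only, not any agency tool):

```python
MINUTES_PER_DAY = 24 * 60
ORBITS_PER_DAY = 14  # approximate figure for POES/DMSP polar orbiters, per the text

# Orbital period implied by about 14 orbits per day
period_minutes = MINUTES_PER_DAY / ORBITS_PER_DAY
print(round(period_minutes, 1))  # ~102.9 minutes per orbit
```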
Besides the four operational satellites, six older satellites in orbit still collect some data and are available to provide limited backup to the operational satellites should they degrade or fail. In the future, the Air Force plans to continue launching additional DMSP satellites every few years; the last is currently expected to launch in 2012. NOAA plans to launch the final remaining POES satellite in 2009. Polar satellites gather a broad range of data that are transformed into a variety of products. Satellite sensors observe different bands of radiation wavelengths, called channels, which are used for remotely determining information about the earth’s atmosphere, land surface, oceans, and the space environment. When first received, satellite data are considered raw data. To make them usable, the processing centers format the data so that they are time-sequenced and include earth location and calibration information. After formatting, these data are called raw data records. The centers further process these raw data records into channel-specific data sets, called sensor data records and temperature data records. These data records are then used to derive weather and climate products called environmental data records (EDRs). EDRs include a wide range of atmospheric products detailing cloud coverage, temperature, humidity, and ozone distribution; land surface products showing snow cover, vegetation, and land use; ocean products depicting sea surface temperatures, sea ice, and wave height; and characterizations of the space environment. Combinations of these data records (raw, sensor, temperature, and environmental data records) are also used to derive more sophisticated products, including outputs from numerical weather models and assessments of climate trends. Figure 2 is a simplified depiction of the various stages of satellite data processing, and figures 3 and 4 depict examples of EDR weather products.
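The staged processing just described (raw data, then raw data records, then sensor and temperature data records, then EDRs) can be modeled as a simple ordered pipeline. The sketch below is purely illustrative; the stage names are shorthand for exposition, not actual NPOESS software interfaces:

```python
# Ordered processing stages, as described in the text (shorthand labels only).
PIPELINE = [
    "raw",      # as received from the satellite
    "RDR",      # raw data record: time-sequenced, earth-located, calibrated
    "SDR/TDR",  # channel-specific sensor and temperature data records
    "EDR",      # environmental data record: derived weather/climate product
]

def next_stage(stage: str) -> str:
    """Return the stage that follows the given one in the pipeline."""
    i = PIPELINE.index(stage)
    if i == len(PIPELINE) - 1:
        raise ValueError("EDR is the final stage")
    return PIPELINE[i + 1]

print(next_stage("raw"))  # RDR
```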
Specifically, figure 3 depicts a product used in weather forecasting, and figure 4 depicts a product used in climate monitoring. Figure 5 depicts a derived product that demonstrates how climate measurements can be aggregated over time to identify long-term trends. In commenting on a draft of this report, NOAA officials noted that while EDRs can be a valuable source of climate data, the scientific community also needs climate data records. These records require their own algorithms, data handling systems, and calibration/validation in order to ensure consistency in processing and reprocessing over years and decades. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2026. To manage this program, DOD, NOAA, and NASA formed the tri-agency Integrated Program Office, located within NOAA. Within the program office, each agency has the lead on certain activities: NOAA has overall program management responsibility for the converged system and for satellite operations; DOD’s Air Force has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. Figure 6 depicts the organizations that make up the NPOESS program office and lists their responsibilities. 
The NPOESS program office is overseen by an executive committee that is made up of the administrators of NOAA and NASA and the Undersecretary of the Air Force. NPOESS is a major system acquisition that was originally estimated to cost about $6.5 billion over the 24-year life of the program from its inception in 1995 through 2018. The program is to provide satellite development, satellite launch and operation, and ground-based satellite data processing. These deliverables are grouped into four main categories: (1) the space segment, which includes the satellites and sensors; (2) the integrated data processing segment, which is the system for transforming raw data into EDRs and is to be located at the four processing centers; (3) the command, control, and communications segment, which includes the equipment and services needed to support satellite operations; and (4) the launch segment, which includes the launch vehicle services. When the NPOESS engineering, manufacturing, and development contract was awarded in August 2002, the cost estimate was adjusted to $7 billion. Acquisition plans called for the procurement and launch of six satellites over the life of the program, as well as the integration of 13 instruments— consisting of 10 environmental sensors and 3 subsystems. Together, the sensors were to receive and transmit data on atmospheric, cloud cover, environmental, climatic, oceanographic, and solar-geophysical observations. The subsystems were to support nonenvironmental search and rescue efforts, sensor survivability, and environmental data collection activities. The program office considered 4 of the sensors to be critical because they provide data for key weather products; these sensors are in bold in table 1, which describes each of the expected NPOESS instruments. 
In addition, a demonstration satellite, called the NPOESS Preparatory Project (NPP), was planned to be launched several years before the first NPOESS satellite in order to reduce the risk associated with launching new sensor technologies and to ensure continuity of climate data with NASA’s Earth Observing System satellites. NPP was to host three of the four critical NPOESS sensors, as well as one other noncritical sensor, and to provide the program office and the processing centers an early opportunity to work with the sensors, ground control, and data processing systems. When the NPOESS development contract was awarded, the schedule for launching the satellites was driven by a requirement that the satellites be available to back up the final POES and DMSP satellites should anything go wrong during the planned launches of these satellites. Early program milestones included (1) launching NPP by May 2006, (2) having the first NPOESS satellite available to back up the final POES satellite launch in March 2008, and (3) having the second NPOESS satellite available to back up the final DMSP satellite launch in October 2009. If the NPOESS satellites were not needed to back up the final predecessor satellites, their anticipated launch dates would have been April 2009 and June 2011, respectively. Over several years, we reported that NPOESS had experienced continued cost increases, schedule delays, and serious technical problems. By November 2005, we estimated that the cost of the program had grown from $7 billion to over $10 billion. In addition, the program was experiencing major technical problems with the VIIRS sensor and expected to delay the launch date of the first satellite by almost 2 years. These issues ultimately required difficult decisions to be made about the program’s direction and capabilities. The Nunn-McCurdy law requires DOD to take specific actions when a major defense acquisition program’s cost growth exceeds certain thresholds.
Key provisions of the law require the Secretary of Defense to notify Congress when a major defense acquisition is expected to overrun its baseline by 15 percent or more and to certify the program to Congress when it is expected to overrun its baseline by 25 percent or more. In November 2005, NPOESS exceeded the 25 percent threshold, and DOD was required to certify the program. Certifying a program entails providing a determination that (1) the program is essential to national security, (2) there are no alternatives to the program that will provide equal or greater military capability at less cost, (3) the new estimates of the program’s cost are reasonable, and (4) the management structure for the program is adequate to manage and control costs. DOD established tri-agency teams—made up of DOD, NOAA, and NASA experts—to work on each of the four elements of the certification process. In June 2006, DOD (with the agreement of both of its partner agencies) certified a restructured NPOESS program, estimated to cost $12.5 billion through 2026. This decision approved a cost increase of $4 billion over the prior approved baseline cost and delayed the launch of NPP and the first 2 satellites by roughly 3 to 5 years. The new program also entailed reducing the number of satellites to be produced and launched from 6 to 4, and reducing the number of instruments on the satellites from 13 to 9— consisting of 7 environmental sensors and 2 subsystems. It also entailed using NPOESS satellites in the early morning and afternoon orbits and relying on European satellites for midmorning orbit data. Table 2 summarizes the major program changes made under the Nunn-McCurdy certification decision. The Nunn-McCurdy certification decision established new milestones for the delivery of key program elements, including launching NPP by January 2010, launching the first NPOESS satellite (called C1) by January 2013, and launching the second NPOESS satellite (called C2) by January 2016. 
These revised milestones deviated from prior plans to have the first NPOESS satellite available to back up the final POES satellite should anything go wrong during that launch. Table 3 summarizes changes in key program milestones over time. Delaying the launch of the first NPOESS satellite meant that if the final POES satellite fails on launch, satellite data users would need to rely on the existing constellation of environmental satellites until NPP data become available—almost 2 years later. Although NPP was not intended to be an operational asset, NASA agreed to move NPP to a different orbit so that its data would be available in the event of a premature failure of the final POES satellite. If the health of the existing constellation of satellites diminishes—or if NPP data are not available, timely, and reliable—there could be a gap in environmental satellite data. In order to reduce program complexity, the Nunn-McCurdy certification decision decreased the number of NPOESS sensors from 13 to 9 and reduced the functionality of 4 sensors. Specifically, of the 13 original sensors, 5 sensors remain unchanged (but 2 are on a reduced number of satellites), 3 were replaced with older or less capable sensors, 1 was modified to provide less functionality, and 4 were canceled. Table 4 delineates the changes made. Table 5 shows the changes to NPOESS instruments, including the 4 critical sensors identified in bold, and the planned configuration for NPP and the four satellites of the NPOESS program, called C1, C2, C3, and C4. Program officials acknowledged that this configuration could change if other parties decided to develop the sensors that were canceled. However, they stated that the planned configuration of the first satellite (C1) cannot change without increasing the risk that the launch would be delayed. The changes in NPOESS sensors affected the number and quality of the resulting weather and environmental products, called EDRs.
In selecting sensors for the restructured program during the Nunn-McCurdy process, decision makers placed the highest priority on continuing current operational weather capabilities and a lower priority on obtaining selected environmental and climate measuring capabilities. As a result, the revised NPOESS system has significantly less capability for providing global climate measures than was originally planned. Specifically, the number of EDRs was decreased from 55 to 39, of which 6 are of a reduced quality. The 39 EDRs that remain include cloud base height, land surface temperature, precipitation type and rate, and sea surface winds. The 16 EDRs that were removed include cloud particle size and distribution, sea surface height, net solar radiation at the top of the atmosphere, and products to depict the electric fields in the space environment. The 6 EDRs that are of a reduced quality include ozone profile, soil moisture, and multiple products depicting energy in the space environment. In April 2007, we reported that while the program office had made progress in restructuring NPOESS since the June 2006 Nunn-McCurdy certification decision, important tasks leading up to finalizing contract changes remained to be completed. Specifically, the program had established and implemented interim program plans guiding the contractor’s work activities in 2006 and 2007 and had made progress on drafting key acquisition documents, including the system engineering plan, the test and evaluation master plan, and the memorandum of agreement between the agencies. However, executive approval of those documents was about 6 months late at that time—due in part to the complexity of navigating three agencies’ approval processes. We also reported that the program office had made progress in establishing an effective management structure, but that plans to reassign the Program Executive Officer would unnecessarily increase risks to an already risky program. 
Additionally, we found that the program lacked a process and plan for identifying and filling staffing shortages, which led to delays in key activities such as cost estimating and contract revisions. We reported that until this process is in place, the NPOESS program faced increased risk of further delays. To address these issues, we recommended that the appropriate agency executives finalize key acquisition documents by the end of April 2007 in order to allow the restructuring of the program to proceed. We also recommended that NPOESS program officials develop and implement a written process for identifying and addressing human capital needs and that they establish a plan to immediately fill needed positions. In addition, to reduce program risks, we recommended that DOD delay the reassignment of the Program Executive Officer until all sensors were delivered to NPP. The agencies’ response to these recommendations has been mixed. While the program office is still working to complete selected acquisition documents, program officials documented the program’s staffing process and have made progress in filling selected budgeting and system engineering vacancies. DOD, however, reassigned the Program Executive Officer in July 2007. A new Program Executive Officer is now in place. To effectively oversee an acquisition, project managers need current information on a contractor’s progress in meeting contract deliverables. One method that can help project managers track this progress is earned value management. This method, used by DOD for several decades, compares the value of work accomplished during a given period with that of the work expected in that period. Differences from expectations are measured in both cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. 
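In earned value terms, cost variance is earned value minus actual cost, and the companion schedule variance is earned value minus the planned (budgeted) value of work. The arithmetic reduces to two subtractions; a minimal sketch, using the dollar figures from the examples in this section (this is an illustration, not any agency’s EVM tooling):

```python
def variances(earned_value: float, actual_cost: float, planned_value: float):
    """Earned value management variances, in the same units as the inputs.
    Negative values mean over cost (cost variance) or behind schedule
    (schedule variance)."""
    cost_variance = earned_value - actual_cost
    schedule_variance = earned_value - planned_value
    return cost_variance, schedule_variance

# Figures ($ millions) matching the worked examples in this section:
cv, sv = variances(earned_value=5.0, actual_cost=6.7, planned_value=10.0)
print(round(cv, 1), round(sv, 1))  # -1.7 -5.0
```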
For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a -$1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the work completed with the value of work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month but was budgeted to complete $10 million worth of work, there would be a -$5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in estimating the cost and time needed to complete the program. The program office has completed major activities associated with restructuring NPOESS, but key supporting activities remain—including obtaining approval of key acquisition documents. Restructuring a major acquisition program like NPOESS is a process that involves reassessing and redefining the program’s deliverables, costs, and schedules, and renegotiating the contract. The restructuring process also involves revising important acquisition documents such as the tri-agency memorandum of agreement, the acquisition strategy, the system engineering plan, the integrated master schedule defining what needs to happen by when, and the acquisition program baseline. During the past year, the program redefined the program’s deliverables, costs, and schedules, and renegotiated the NPOESS contract. To do so, the program developed a new program plan and conducted an integrated baseline review of the entire program, which validated that the new deliverables, costs, and schedules were feasible. It also completed key acquisition documents including the system engineering plan and the integrated master schedule. 
The program and the prime contractor renegotiated their agreement and signed a modified contract in July 2007. However, key activities remain to be completed, including obtaining executive approval of key acquisition documents. Specifically, even though agency officials were expected to approve key acquisition documents by September 2007, the appropriate executives have not yet signed off on documents including the tri-agency memorandum of agreement or the acquisition strategy report. They have also not signed off on the acquisition program baseline, the fee management plan, the test and evaluation master plan, and the two-orbit program plan (a plan for how to use European satellite data with NPOESS). Appendix II provides more information on the status of these documents. Program officials stated that the program has been able to renegotiate the contract and to proceed in developing sensors and systems without these documents being signed because the documents have widespread acceptance within the three agencies. They reported that the delays are largely due to the complexity of obtaining approval from three agencies. For example, program officials reported that an organization within DOD suggested minor changes to the tri-agency memorandum of agreement after months of coordination and after it had already been signed by both the Secretary of Commerce and the Administrator of NASA. The program office has now made the recommended changes and is re-initiating the coordination process. In addition, NASA disagreed with the fee management plan because it wanted to have an incentive associated with the on-orbit performance of the NPP satellite. The program office is currently trying to address NASA’s concerns, but stated that the current plan is in effect for this fiscal year and any changes would have to wait until fiscal year 2009. These disagreements further delay an already delayed restructuring process. 
Without executive approval of key acquisition documents, the program lacks the underlying commitment necessary to effectively manage a tri-agency program. In our prior report, we recommended that the appropriate executives immediately finalize key acquisition documents. This recommendation remains open. Over the last year, the NPOESS program has made progress by completing planned development and testing activities on its ground and space segments, but key milestones for delivering the VIIRS sensor and launching NPP have been delayed by about 8 months. Moving forward, risks remain in completing the testing of key sensors and integrating them on the NPP spacecraft, in resolving interagency disagreements on the appropriate level of system security, and in revising estimated costs for satellite operations and support. The program office is aware of these risks and is working to mitigate them, but continued problems could affect the program’s overall schedule and cost. Given the tight time frames for completing key sensors, integrating them on the NPP spacecraft, and getting the ground-based data processing system developed, tested, and deployed, it is important for the NPOESS Integrated Program Office, the Program Executive Office, and the Executive Committee to continue to provide close oversight of milestones and risks. Development of the ground segment—which includes the interface data processing system, the ground stations that are to receive satellite data, and the ground-based command, control, and communications system—is under way and on track. For example, the Interface Data Processing System has been installed at one of the two locations that are to receive NPP data, and the command, control, and communications system passed acceptance testing for use with NPP. However, important work in developing the algorithms that translate satellite data into weather products within the integrated data processing segment remains to be completed. 
Table 6 describes each of the components of the ground segment and identifies the program-provided risk level and status of each. Using contractor-provided data, our analysis of the earned value data for the ground segment indicates that cost and schedule performance were generally on track between March 2007 and February 2008 (see fig. 7). Between these dates, the contractor completed slightly more work than planned on both the IDPS and command, control, and communications components. In addition, the contractor finished slightly over budget for the IDPS component and slightly under budget for the command, control, and communications component. This caused cost and schedule variances that were less than 1 percent off of expectations. Over the past year, the program made progress on the development of the space segment, which includes the sensors and the spacecraft. Five sensors are of critical importance because they are to be launched on the NPP satellite. Initiating work on another sensor, the Microwave Imager Sounder, is also important because this new sensor—which is to replace the canceled Conical-scanned microwave imager/sounder sensor—will need to be developed in time for the second NPOESS satellite launch. Among other activities, the program has successfully completed ambient testing of the VIIRS flight unit, structural vibration testing of the flight unit of the Cross-track infrared sounder, risk reduction testing of the flight unit of the Ozone mapper/profiler suite, and thermal testing of the NPP spacecraft with three sensors on board. In addition, the program made decisions on how to proceed with the Microwave Imager Sounder and plans to contract with a government laboratory by the end of April 2008. However, the program experienced problems on VIIRS, including poor workmanship on selected subcomponents and delays in completing key tests. These issues delayed VIIRS delivery to the NPP contractor by 8 months.
This late delivery will in turn delay the NPP launch from late September 2009 to early June 2010. This delay in NPP shortens the time available for incorporating lessons learned from NPP while it is in orbit into future NPOESS missions and could lead to gaps in the continuity of climate and weather data if predecessor satellites fail prematurely. Also, the Cross-track infrared sounder sensor experienced a cost overrun and schedule delays as the contractor worked to recover from a structural failure. The status and risk level of each of the components of the space segment are described in table 7. Our analysis of contractor-provided earned value data showed that the NPOESS space segment has experienced negative cost and schedule variances between March 2007 and February 2008 (see fig. 8). Specifically, the contractor exceeded cost targets for the space segment by $15.1 million—which is 5.1 percent of the $298.2 million space segment budget for that time period. Similarly, the contractor was unable to complete $2 million worth of work in the space segment—which is less than 1 percent of the space segment budget for that time period. Moving forward, the program continues to face risks. Over the next 2 years, it will need to complete the development of the key sensors, test them, integrate and test them on the NPP spacecraft, and test these systems with the ground-based data processing systems. In addition, the program faces two other issues that could affect its overall schedule and cost. One is that there continues to be disagreement between NOAA and DOD on the appropriate level of system security. To date, NPOESS has been designed and developed to meet DOD’s standards for a mission essential system, but NOAA officials believe that the system should be built to meet more stringent standards. Implementing more stringent standards could cause rework and retesting, and potentially affect the cost and schedule of the system.
Another issue is that program life cycle costs could increase once a better estimate of the cost of operations and support is known. The $12.5 billion estimated life cycle cost for NPOESS includes a rough estimate of $1 billion for operations and support. Program officials have identified the potential for this cost to grow as a moderate risk. The NPOESS program office is working closely with the contractor and subcontractors to resolve these program risks. To address sensor risks, the program office and officials from NASA’s Goddard Space Flight Center commissioned an independent review team to assess the thoroughness and adequacy of practices being used in the assembly, integration, and testing of the VIIRS and CrIS instruments in preparation for the NPP spacecraft. The team found that the contractors for both sensors had sound test programs in place, but noted risks with VIIRS’s schedule and with CrIS’s reliability and performance. The program office adjusted the VIIRS testing schedule and is monitoring the CrIS testing results. In addition, the program office recently instituted biweekly senior-level management meetings to review progress on VIIRS’s development, and program officials noted that both the prime contractor and the program executive office will have senior officials onsite at the contractor’s facility to provide extensive, day-to-day oversight of management activities to assist in resolving issues. To address the risk posed by changing security requirements late in the system’s development, program officials commissioned a study to determine the effect of more stringent standards on the system. This study was completed by the end of March 2008, but has not yet been released. To address the risk of cost growth due to poor estimates of operations and support costs, DOD’s cost analysis group is currently refining this estimate. A revised estimate is expected by June 2008. 
The program office is aware of program risks and is working to mitigate them, but continued problems could affect the program’s overall schedule and cost. Given the tight time frames for completing key sensors, integrating them on the NPP spacecraft, and getting the ground-based data processing system developed, tested, and deployed, it is important for the NPOESS program office, the Program Executive Office, and the Executive Committee to continue to provide close oversight of milestones and risks. When the Nunn-McCurdy restructuring agreement removed certain sensors from NPOESS, the program was instructed to plan for the reintegration of those sensors. Specifically, the certification decision directed the program to build each NPOESS spacecraft with enough room and power to accommodate the sensors that were removed from the program and to fund the integration and testing of any sensors that are later restored. Agency sponsors external to the program are responsible for justifying and funding the sensor’s development, while the NPOESS Executive Committee has the final decision on whether to include the sensor on a specific satellite. Table 8 denotes sensors that were canceled under the Nunn-McCurdy agreement, but could be restored to the different satellites. The NPOESS program office has requested that any entities that plan to restore a sensor to an NPOESS satellite provide them 6 years’ notice. This includes 4 years for the sensor to be developed and tested, and 2 years for integration and testing on the spacecraft. Table 9 provides a listing of dates based on current launch dates for each NPOESS satellite. The program office developed its tentative timelines using historical data for similar programs, but program officials reported that more or less time might be required depending on the status of the sensor to be added. For example, a sensor based on existing sensors may require less time, while a more advanced sensor could require more time. 
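The 6-year notice requirement described above (4 years for sensor development and testing plus 2 years for integration and testing on the spacecraft) can be expressed as simple date arithmetic. The launch date below is an illustrative placeholder, not the program's actual schedule, which appears in table 9 of the report:

```python
# Sketch of the sensor-restoration lead-time rule: sponsors must commit
# 6 years before a satellite's launch (4 years to develop and test the
# sensor, 2 years to integrate and test it on the spacecraft).
from datetime import date

DEVELOP_YEARS = 4
INTEGRATE_YEARS = 2

def commitment_deadline(launch: date) -> date:
    """Latest date a sponsor can commit to restoring a sensor for a given launch.

    Note: date.replace would raise for a Feb. 29 launch date; launch dates
    here are treated as ordinary calendar dates.
    """
    total = DEVELOP_YEARS + INTEGRATE_YEARS
    return launch.replace(year=launch.year - total)

# Illustrative example: a satellite launching in January 2016 would require
# a sponsor commitment by January 2010, consistent with the 6-year rule.
print(commitment_deadline(date(2016, 1, 15)))  # 2010-01-15
```

As the program office notes, the actual lead time may be shorter for a sensor based on an existing design or longer for a more advanced sensor, so the fixed 6-year constant is only a planning default.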
NASA, NOAA, and DOD have taken preliminary steps to restore the capabilities of selected climate and space weather sensors that were degraded or removed from the NPOESS program by prioritizing the sensors, assessing options for restoring them, and making decisions to restore two sensors in order to mitigate near-term data gaps. However, the agencies have not yet developed plans to mitigate the loss of these sensors on a long-term basis. Best practices in strategic planning suggest that agencies develop and implement long-term plans to guide their short-term activities. Until such plans are developed, the agencies may lose their windows of opportunity for selecting cost-effective options or they may resort to an ad hoc approach to restoring these sensors. Lacking plans almost 2 years after key sensors were removed from the NPOESS program, the agencies face increased risk of gaps in the continuity of climate and space environment data. While NPOESS was originally envisioned to provide only weather observations, this mission was later expanded to include long-term continuity for key climate data. Maintaining the continuity of climate and space data over decades is important to identify long-term environmental cycles (such as the 11-year solar cycle and multiyear ocean cycles including the El Niño effect) and their impacts, and to detect trends in climate change and global warming. The Nunn-McCurdy restructuring decision removed four sensors and degraded the functionality of four other sensors that were to provide these data. DOD, NASA, and NOAA are now responsible for determining what to restore, how to restore it, and the means for doing so. This responsibility includes justifying the additional funding needed to develop these sensors within their respective agencies’ investment decision processes. Best practices of leading organizations call for defining a strategic plan to formalize priorities and plans for meeting mission goals. 
Such a plan would include the agency’s long-term goals for climate and space weather measurements, the short-term activities needed to attain these goals, and the milestones and resources needed to support the planned activities. Since the June 2006 restructuring, NASA, NOAA, and DOD have taken preliminary steps to restore sensor capabilities by determining priorities for restoring sensor capabilities, assessing options for obtaining sensor data over time, and making decisions to restore selected sensors. Specifically, in August 2006, the NPOESS Senior User Advisory Group—a group representing NASA, NOAA, and DOD system users—assessed the impact of the canceled or degraded sensors and identified priorities for restoring them. In January 2007, a NOAA and NASA working group on climate sensors prioritized which of the sensors were most important to restore for climate purposes and proposed possible solutions and mitigation efforts. In addition, the National Research Council (NRC) reported on the impact of the canceled sensors. Table 10 summarizes the results of these studies. In addition to prioritizing the sensors, NASA, NOAA, and DOD identified a variety of options for obtaining key sensor data over the next two decades and continue to seek other options. The agencies identified options including adding sensors back to a later NPOESS satellite, adding sensors to another planned satellite, and developing a new satellite to include several of the sensors. Examples of options for several sensors are provided in figure 9. In addition, in December 2007, NOAA released a request for information to determine whether commercial providers could include selected environmental sensors on their satellites. In addition to prioritizing sensors and identifying options, over the last year, NASA, NOAA, and DOD have taken steps to restore two sensors on a near-term basis. 
Specifically, in April 2007, the NPOESS Executive Committee decided to restore the limb component of the Ozone Mapper/Profiler Suite to the NPP satellite and, in January 2008, to add the Clouds and the Earth's Radiant Energy System to NPP. These decisions are expected to provide continuity for these sensors through approximately 2015. NASA officials noted that they also took steps to mitigate a potential gap in total solar irradiance data by proposing to fund an additional 4 years of the SORCE mission (from 2008 to 2012). While NASA, NOAA, and DOD have taken preliminary steps to address the climate and space sensors that were removed from the NPOESS program almost 2 years ago, they do not yet have plans for restoring climate and space environment data on a long-term basis. The Office of Science and Technology Policy, an organization within the Executive Office of the President, is currently working with NASA, NOAA, and DOD to sort through the costs and benefits of the various options and to develop plans. However, this effort has been under way for almost 2 years and officials could not estimate when such plans would be completed. Delays in developing a comprehensive strategy for ensuring climate and space data continuity may result in the loss of selected options. For example, NASA and NOAA estimated that they would need to make a decision on whether to include a total solar irradiance sensor on NASA's planned Landsat Data Continuity Mission by March 2008, and on whether to build another satellite to obtain ocean altimeter data in 2008. Also, the NPOESS program office estimated that if any sensors are to be restored to an NPOESS satellite, it would need a decision about 6 years in advance of the planned satellite launch. Specifically, for a sensor to be included on the second NPOESS satellite, the sponsoring agency would need to commit to do so by January 2010. 
Without a timely decision on a plan for restoring satellite data on a long- term basis, NASA, NOAA, and DOD risk losing their windows of opportunity on selected options and restoring sensors in an ad hoc manner. Ultimately, the agencies risk a break in the continuity of climate and space environment data. As national and international concerns about climate change and global warming grow, these data are more important than ever to try to understand long-term climate trends and impacts. Over the past year, program officials have completed major activities associated with restructuring the NPOESS program and have made progress in developing and testing sensors, ground systems, and the NPP spacecraft. However, agency executives have still not signed off on key acquisition documents that were to be completed in September 2007, and one critical sensor has experienced technical problems and schedule delays that have led program officials to delay the NPP launch date by about 8 months. Any delay in the NPP launch date shortens the time available for incorporating lessons learned from NPP onto future NPOESS missions and could also lead to gaps in critical climate and weather data. When selected climate and space weather sensors were removed from the NPOESS program during its restructuring, NASA, NOAA, and DOD became responsible for determining what sensors to restore and how to restore them. This responsibility includes justifying the additional funding needed to develop these sensors within their respective agency’s investment decision processes. In the 2 years since the restructuring, the agencies have identified their priorities and assessed their options for restoring sensor capabilities. In addition, the agencies made decisions to restore two sensors to the NPP satellite in order to mitigate near-term data gaps. However, the agencies lack plans for restoring sensor capabilities on a long-term basis. 
Without a timely decision on a long-term plan for restoring satellite data, the agencies risk a break in the continuity of climate and space environment data. With the increased concern about climate change and global warming, these data are more important than ever to try to understand long-term climate trends and impacts. In order to bring closure to efforts that have been under way for years, we are making recommendations to the Secretaries of Commerce and Defense and to the Administrator of NASA to establish plans, by June 2009, on whether and how to restore the climate and space sensors removed from the NPOESS program, in cases where the sensors are warranted and justified. In addition, we are reemphasizing our prior recommendation that the appropriate NASA, NOAA, and DOD executives immediately finalize key acquisition documents. We received written comments on a draft of this report from the Secretary of the Department of Commerce (see app. III), the Deputy Assistant Secretary for Networks and Information Integration of the Department of Defense (see app. IV), and the Associate Administrator for the Science Mission Directorate of the National Aeronautics and Space Administration (see app. V). In their comments, all three agencies concurred with our recommendations. In addition, both the Department of Commerce and NASA reiterated that they are working with their partner agencies to finalize plans for restoring sensors to address the nation's long-term needs for continuity of climate measurements. Further, Commerce noted that DOD and NASA executives need to weigh in to resolve issues at, or immediately below, their levels in order to ensure prompt completion of the key acquisition documents. 
NASA noted that difficulties in gaining consensus across all three NPOESS agencies have delayed the signature of key acquisition documents, and reported that they are committed to moving these documents through the signature cycle once all of the issues and concerns are resolved. All three agencies also provided technical comments, which we have incorporated in this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Commerce, the Secretary of Defense, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to (1) evaluate the National Polar-orbiting Operational Environmental Satellite System (NPOESS) program office’s progress in restructuring the acquisition, (2) assess the status of key program components and risks, (3) identify how much notice the program office would need if agency sponsors outside the program choose to restore the eliminated or degraded sensors to the NPOESS program, and (4) assess plans of the National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA) for obtaining the environmental data originally planned to be collected by NPOESS sensors, but then eliminated under the restructuring. 
To evaluate the NPOESS program office’s progress in restructuring the acquisition program, we reviewed the program’s Nunn-McCurdy certification decision memo, a later addendum to this decision, and program documentation including copies of required documentation, status briefings, and milestone progress reports. We also interviewed program office officials and attended senior-level management program review meetings to obtain information on the program’s acquisition restructuring. To evaluate the status of key program components and risks, we reviewed program documentation associated with the program and its key components. We analyzed briefings and monthly program management documents to determine the status and risks of the program and key program segments. We also analyzed earned value management data obtained from the contractor to assess the contractor’s performance against cost and schedule estimates. We obtained adequate assurance that these agency-provided data had been tested and were sufficient for our assessment purposes. We reviewed cost reports and program risk management documents and interviewed program officials to determine program and program segment risks that could negatively affect the program’s ability to maintain the current schedule and cost estimates. We also interviewed agency officials from NASA, NOAA, DOD, and the NPOESS program office to determine the status and risks of the key program segments. Finally, we observed senior-level management review meetings to obtain information on the status of the NPOESS program. To identify how much notice the program office would need if agency sponsors outside the program choose to restore the eliminated or degraded sensors to the NPOESS program, we reviewed the restoration requirements in the program’s Nunn-McCurdy certification decision memo and documentation related to the program’s planning efforts. 
We also interviewed senior officials in the NPOESS program office and the Program Executive Office to obtain information on program plans related to sensor restoration, the historical basis for these time frames, and the flexibility of these time frames for different sensor technologies. To assess agency plans for obtaining the environmental data originally planned to be collected by NPOESS sensors but then eliminated under the restructuring, we reviewed reports and briefings produced by NASA, NOAA, DOD, and the National Research Council on the impact of eliminated sensors and priorities for restoring them. We also interviewed agency officials from NASA, NOAA, and DOD, and sought and received answers to questions from the Office of Science and Technology Policy regarding decisions to restore two sensors to the NPOESS Preparatory Project (NPP) satellite. We primarily performed our work at the NPOESS Integrated Program Office and at DOD, NASA, and NOAA offices in the Washington, D.C., metropolitan area. In addition, we conducted work at NOAA offices in Suitland, Maryland, and at the Air Force Weather Agency in Omaha, Nebraska, because these sites will be the first two sites to host the NPOESS data processing system and to receive NPP data. We also conducted audit work at the Boulder, Colorado, facility of the contractor that is to integrate sensors on the NPP satellite. We conducted this performance audit from June 2007 to April 2008 in accordance with generally accepted government auditing standards. Table 11 identifies the key NPOESS acquisition documents as well as their original and revised due dates. Original due dates were specified in the June 2006 restructuring decision memo. The revised due dates were specified in an addendum to that memo, dated June 2007. Documents that are in bold are overdue. In addition to the contact named above, Colleen Phillips (Assistant Director), Carol Cha, Neil Doherty, Nancy Glover, Kathleen S. 
Lovett, and Kelly Shaw made key contributions to this report.

The National Polar-orbiting Operational Environmental Satellite System (NPOESS) is a tri-agency acquisition--managed by the Department of Commerce's National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA)--that has experienced escalating costs, schedule delays, and technical difficulties. These factors led to a June 2006 decision to restructure the program by reducing the number of satellites and sensors, increasing estimated costs to $12.5 billion, and delaying the first two satellites by 3 to 5 years. Among other objectives, GAO was asked to evaluate progress in restructuring the acquisition, assess the status of key program components and risks, and assess NASA's, NOAA's, and DOD's plans for obtaining the data originally planned to be collected by NPOESS sensors, but eliminated by the restructuring. To do so, GAO analyzed program and contractor data, attended program reviews, and interviewed agency officials. The program office has completed most of the major activities associated with restructuring the NPOESS acquisition, but key activities remain to be completed. In the past year, the program redefined the program's deliverables, costs, and schedules, and renegotiated the NPOESS contract. However, agency executives have not yet finalized selected acquisition documents (including the tri-agency memorandum of agreement). Without the executive approval of key acquisition documents, the program lacks the underlying commitment needed to effectively manage a tri-agency program. Over the past year, the NPOESS program has continued to make progress in completing development activities, but key milestones have been delayed and multiple risks remain. 
Specifically, poor workmanship and testing delays caused an 8-month slip in the expected delivery of a technologically complex imaging sensor that is critical to weather and climate observations. This later delivery caused a corresponding 8-month delay in the expected launch date of a demonstration satellite, called the NPOESS Preparatory Project (NPP). This demonstration satellite is intended to provide on-orbit experiences that can be used to reduce risks on NPOESS satellites and to provide interim weather and climate observations should predecessor weather and climate satellites begin to degrade or fail. Moving forward, risks remain in completing the testing of key sensors, integrating them on the NPP spacecraft, and ensuring sufficient system security. The program office is aware of these risks and is working to mitigate them, but continued problems could affect the program's overall schedule and cost. When the NPOESS restructuring decision removed four climate and space environment sensors from the program and reduced the functionality of four others, the program was directed to restore a limited version of one sensor and to restore the seven others if funded by entities outside the program office. NOAA, NASA, and DOD have taken preliminary steps to restore the capabilities of selected sensors by prioritizing the sensors, assessing options for restoring them, and making decisions to mitigate near-term data gaps by adding two sensors to the NPP satellite. However, the agencies have not yet developed plans to mitigate the loss of these and other sensors on a long-term basis. Until such a plan is developed, the agencies may lose windows of opportunity for selecting cost-effective options or they may resort to an ad hoc approach to restoring these sensors. Almost 2 years have passed since key sensors were removed from the NPOESS program; further delays in establishing a plan could result in gaps in the continuity of climate and space environment data.
Donors provide food aid primarily through procurements, vouchers, and contracts, most commonly working through international organizations, such as WFP, and NGOs. Procurements of food aid can be categorized geographically as follows:

International: Donor-financed purchases of food aid in world markets, which may include both developed and developing countries. For example, food purchased in Canada that is delivered to Uganda.

Regional: Donor-financed purchases of food aid in a different country in the same region. For example, food purchased in South Africa that is delivered to Uganda.

Local: Donor-financed purchases of food aid in countries affected by disasters and food crises. For example, food purchased in the southern part of Uganda that is delivered to the northern part of Uganda.

Donors may also provide vouchers that allow recipients to purchase their own food in the local market. This option is usually used when food is available, but disaster-affected populations no longer have the income or livelihoods that would enable them to purchase food. WFP launched its first food voucher operation in Africa in February 2009, targeting 120,000 people who were suffering from the impact of high food prices in the urban areas of Ouagadougou in Burkina Faso. In addition, donors may contract with a commercial agent, such as a local trader, to purchase and deliver food aid. For example, in April 2009, the Canadian Foodgrains Bank (CFB) contracted with Kenyan traders to purchase food from sources outside the country and used several NGOs to distribute the food. As the volume of food aid in the form of in-kind commodities has declined, the volume of food aid purchased through cash donations has increased, as shown in figure 1. With the exception of the United States, most major donors—including the European Union, the United Kingdom, and most recently Canada—now provide all of their food aid as cash that may be used for local and regional procurement (LRP) by WFP and NGOs. 
Previously, the European Union's food aid policies called for procuring food in the donor's domestic market. In 1996, however, the European Union essentially eliminated restrictions that tied procurement of food aid to European suppliers as it restructured its food aid and food security budgets to focus on improving food security. In 2005, Canada took similar actions, providing 50 percent of its food aid budget in cash available for LRP. In 2008, Canada opted to provide 100 percent of its food aid budget in cash. Canada's stated rationale for switching to 100 percent cash funding for food aid (which in some cases can still be used to purchase Canadian agricultural commodities) was to provide more flexibility to its overall food security strategy, improve the efficiency of its food assistance, and contribute to the development of local and regional markets from which it purchases food aid. As one of the world's largest food aid donors, Canada has provided an average of about $161 million (USD) to WFP, CFB, and other NGOs annually over the last 4 years, with the vast majority of its contributions going to WFP. Many donors place conditions on their cash contributions to WFP, such as stating a preference for procurement of food in developing countries or for LRP. However, according to WFP, the availability of more flexible funding has significantly increased over the years, as donors have gradually shifted to providing food assistance as cash without tying such assistance to purchases from domestic food suppliers. In its strategic plan for 2008 through 2011, WFP identified use of LRP as one of its five main objectives. WFP's stated policy is to purchase food aid at the most advantageous price available, taking into account the cost of transportation and shipping, with a preference for using LRP in developing countries wherever possible. 
WFP cited two primary goals for funding LRP: (1) to increase the efficiency of food aid delivery, expediting assistance to save lives during food emergencies and humanitarian crises; and (2) to support development by stimulating agricultural production and raising farm incomes, particularly by targeting smallholder farm households. As shown in figure 2, WFP LRP has consistently exceeded international procurement of food aid, principally for emergencies, from 2001 to 2008. WFP procurement in developing countries has been increasing, from $171 million in 2001 to over $1 billion in 2008 (see fig. 3). In 2008, WFP procured 78 percent of its food aid from developing countries and 22 percent from developed countries. Of the top 20 developing countries from which WFP procured food in 2008, 16 were in Africa and Asia. As shown in table 1, in 2007, 9 of the top 10 developing countries (including 8 in Africa and Asia) from which WFP procured food also received food aid the same year. Africa received 54 percent of total international food aid provided, and Asia received 29 percent of total international food aid provided in 2007. In an effort to address the global food crisis, donors have recently launched a number of initiatives, many of which specifically advance LRP of food aid (see app. II). These donors include multilateral organizations such as the UN, WFP, and the World Bank, and bilateral donors such as the United States. For example, in September 2008, WFP formally launched the Purchase for Progress (P4P) program, a $76 million pilot that is to be implemented in 21 countries, 15 of them in sub-Saharan Africa, in the next 5 years to improve the income of smallholder farmers and thereby increase their incentives for production. 
In July 2008, the Group of Eight (G8) issued a statement on global food security that called on donors to participate in making commitments to provide access to seed and fertilizers and help build up local agriculture by promoting local purchase of food aid. In April 2008, the World Bank's New Deal for Global Food Policy called for changes including a shift from traditional in-kind food assistance to cash, vouchers, development assistance for local markets, and purchase of food from local farmers to strengthen their communities. Local and regional procurement (LRP) can offer donors a tool for reducing costs and shortening delivery time but faces multiple challenges. LRP can offer cost-saving opportunities over in-kind food aid from the United States if food is available in the recipient country or neighboring countries, and the cost of procuring locally or regionally is less than the cost of procuring and shipping from the United States. Additionally, LRP can save delivery time in emergency situations because it usually travels a shorter distance than in-kind food aid. Local procurement can also avoid delays that often occur when food crosses borders and has to go through permit and inspection processes. Despite these benefits, donors face challenges in making local and regional procurements, including insufficient logistics capacity that can contribute to delays in delivery, and weak legal systems that can limit buyers' ability to enforce contracts. Besides the benefit of reducing costs and delivery time, locally and regionally procured food may have the added benefit of being more culturally acceptable to recipients. However, evidence on how LRP affects donors' ability to enforce food aid quality standards and product specifications has yet to be systematically collected. We found that locally and regionally procured food costs considerably less than U.S. in-kind food aid for sub-Saharan Africa and Asia, though the costs are comparable for Latin America. 
We compared the cost per ton of eight similar commodities for the same recipient countries in the same quarter of a given year and found that the average cost of WFP's local procurements in sub-Saharan Africa and Asia was 34 percent and 29 percent lower, respectively, than the cost of food aid shipped from the United States (see fig. 4). For example, in the fourth quarter of 2002, the average cost of locally purchased wheat in Ethiopia was approximately $194 per metric ton, while U.S. wheat shipped to Ethiopia in the same quarter cost approximately $312 per metric ton, making the local purchase about 38 percent cheaper. Additionally, about 95 percent of WFP local procurements in sub-Saharan Africa and 96 percent in Asia cost less than corresponding U.S. in-kind food aid. However, the location of procurements affects whether LRP offers any cost-saving potential and if so, by how much. While local procurement in sub-Saharan Africa and Asia cost much less than U.S. in-kind food aid, we found that in Latin America, the cost of WFP LRP was comparable to the cost of food aid shipped from the United States. The average cost of WFP local procurements in Latin America was 2 percent higher than that of U.S. food aid, and the number of WFP's transactions with a lower cost than U.S. food aid was close to the number of transactions with a higher cost. This difference is due in part to the fact that shipping from the United States to Latin America is usually less costly than shipping to Africa. 
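The percentage comparisons above reduce to simple arithmetic on delivered cost per metric ton. A minimal sketch using the Ethiopia wheat figures; the prices come from the comparison above, and the function name is illustrative:

```python
# Percent savings of a local procurement relative to U.S. in-kind food aid,
# expressed as the reduction from the U.S. delivered cost per metric ton.
def pct_savings(local_cost, us_cost):
    return 100.0 * (us_cost - local_cost) / us_cost

# Ethiopia, fourth quarter of 2002 (dollars per metric ton, from the report):
local_wheat = 194
us_wheat = 312

print(f"local wheat was {pct_savings(local_wheat, us_wheat):.0f}% cheaper")  # 38% cheaper
```

Note that the baseline matters: measured against the U.S. delivered cost, the local purchase is about 38 percent cheaper, even though the U.S. price is roughly 61 percent above the local one.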
For example, to meet the needs of Uganda's large internally displaced population in the north, WFP has been purchasing maize and beans from the surplus-producing areas that are in close proximity to the regions in need. In 2007, Uganda was the largest source for WFP procurements in terms of tonnage. From 2001 to 2008, WFP purchased over 600,000 metric tons of maize and beans locally to meet needs in Uganda. Similarly, to meet needs in Zimbabwe in 2008, WFP purchased a large amount of food aid that year from nearby countries including Malawi, Zambia, Mozambique, and South Africa, a surplus-producing country. South Africa was the largest source for WFP procurement in 2008, amounting to more than $163 million, and food from South Africa fed people in both nearby countries and internationally (see table 2). To ensure cost-effectiveness, donors can use the import parity price to guide purchase decisions. For example, WFP compares the lowest price potential sellers submit through its tender process with the import parity price, which includes the cost of the commodity plus shipping and handling, from various potential procurement sites. WFP procures locally or regionally if the costs of doing so are below the import parity price. Recently, for an LRP funded by USAID, Save the Children compared the cost of locally or regionally procured wheat flour, vegetable oil, and lentils to the cost of in-kind food aid from the United States. It found that although locally procured wheat flour in Tajikistan had a higher price than U.S. wheat, the cost of the commodity plus shipping was lower. According to WFP, LRP's cost-effectiveness depends on many factors, such as the commodity, season, and exchange rates. For example, WFP often procures peas from Canada because of the availability and competitive pricing of these commodities in this market. In addition, a strong currency can hurt a country's competitiveness. 
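The import parity price rule described above can be sketched as a simple decision function. All dollar figures below are illustrative placeholders, not WFP data, and the function names are hypothetical:

```python
# Sketch of the import-parity-price decision rule: procure locally or
# regionally only when the best local tender price beats the full cost
# of importing (commodity plus shipping and handling).

def import_parity_price(commodity_cost, shipping, handling):
    """Delivered cost per ton if the food were imported instead."""
    return commodity_cost + shipping + handling

def procure_locally(best_local_tender, ipp):
    """True if the local purchase is the cost-effective choice."""
    return best_local_tender < ipp

# Illustrative numbers (dollars per metric ton):
ipp = import_parity_price(commodity_cost=260, shipping=60, handling=15)  # 335
print(procure_locally(300, ipp))  # True: a $300 local tender beats $335 imported
```

In practice, as the report notes, the comparison shifts with the commodity, the season, and exchange rates, so the same rule can favor local purchase one year and international purchase the next.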
According to WFP officials, increases in the value of the South African currency partly contributed to WFP’s decision to decrease its purchases from South Africa in 2007 and then increase them in 2008 when the currency depreciated. (Fig. 5 shows WFP procurement from South Africa from 2001 to 2008.) According to WFP data, LRPs in sub-Saharan Africa generally have a shorter delivery time than food aid procured internationally. We compared the median delivery time for LRP to the median delivery time for food aid either procured or donated internationally for 10 sub-Saharan countries. We selected these countries because they had received both LRP and international food aid. We found that international in-kind donations took the longest, with a median of 147 days (see fig. 6). Local and regional procurements took a median of 35 and 41 days, shortening the delivery time relative to international donations by 112 days and 106 days, respectively. For example, in Malawi, in-kind international donations took more than 5 months (167 days) while locally procured food aid took about 1 month (32 days). Similarly, the median delivery time for regionally procured food going to Zimbabwe was 48 days versus 114 days for internationally procured food aid. (For the delivery times of the 10 selected sub-Saharan African countries, see app. III.) Similarly, in a USAID-funded grant completed in April 2009, Save the Children was able to obtain wheat flour from Russia and Kazakhstan and transport it to Tajikistan within 2 months, while wheat flour from the United States took over 5 months to arrive in Tajikistan (see illustrative example in app. IV, which also compares the cost of U.S. in-kind food aid with the cost of LRP). USAID sent the other two commodities—yellow peas and fortified soybean oil—from its prepositioning site in Jacintoport (Texas) and was able to shorten the delivery time. It took around the same number of days for the yellow peas from the U.S.
prepositioning site to arrive as for the lentils procured within the region. According to DOT, prepositioning offers significant time savings. DOT’s analysis shows that sending U.S. prepositioned food could have reduced transit time in comparison to a regional purchase from South Africa that was delivered to Somalia. Locally and regionally procured food can take less time for delivery because it travels a shorter distance than internationally procured food and does not risk delays when crossing borders. Local procurement has the benefit of avoiding import processes, such as meeting recipient countries’ sanitary and phytosanitary requirements, which can delay delivery. For example, if imported maize does not meet a country’s moisture content requirement, delivery can be delayed. Some governments require imported food aid to go through additional testing and certification for genetically modified organisms (GMO). According to WFP officials in South Africa, these requirements can take an additional 2 to 3 weeks. Despite potential benefits, factors such as a lack of reliable suppliers, limited logistical capacity, weak legal systems, and donor funding restrictions have limited the efficiency of LRP, as explained below: Lack of reliable suppliers. Of the 11 WFP procurement officers we interviewed, 9 identified finding reliable suppliers and preventing supplier default as a challenge to implementing LRP. A World Vision representative in South Africa stated that the organization was involved in a local procurement in Mozambique that took 5 months because the supplier did not have food in stock and had to find alternative sources to purchase enough to fulfill the contract. When the food was finally delivered, World Vision found that many bags were short of the quantity specified in the contract. Poor infrastructure and logistical capacity. Weak infrastructure and limited logistical capacity can delay delivery.
For example, according to some WFP officials and private traders we met with, South Africa’s rail system and ports are underinvested and have limited capacity to handle food aid during peak seasons. Food aid could wait up to 2 months for warehouse space at the port of Durban. According to DOT, increasing regional procurements from South Africa could lead to more congestion at the port of Durban. DOT believes that in-kind food aid from the United States or prepositioning sites could avoid the port congestion in South Africa by going directly to the port of entry nearest the destination. In addition, trade barriers in developing countries could also delay delivery of food procured regionally. Weak legal systems. A weak legal system could limit buyers’ ability to enforce contracts. WFP generally requires suppliers to purchase bonds, which they will lose if they do not fulfill their obligations under the contracts. However, this requirement is not always feasible to implement, especially when procuring from small suppliers. For example, WFP usually eliminates its bond requirements for its purchases from smallholder farmers. Experts pointed out that it is critical to build in the time and cost of adequate quality testing and control, particularly in an environment where legal requirements for producers or exporting countries are weak. For example, WFP’s procurement officer in Uganda told us that many of the smallholder farmers WFP purchases from had never seen a contract before, and WFP had to take additional steps to ensure that these purchases were delivered on time and met the quality specifications in the applicable contracts. Timing and restrictions on donor funding. Timing and other restrictions on donor funding limit the flexibility of implementing partners to decide when, where, and how to purchase food, according to WFP procurement officers.
If donor funding is not available when there is surplus in the market and prices are low, WFP cannot take advantage of market opportunities. A procurement officer in Sudan, for example, stated that, in January 2009, he was expecting 100,000 to 200,000 metric tons of high-quality commodities to be available on the market, but that he would only be able to purchase 20,000 metric tons due to the timing of donor funding. A WFP procurement officer in South Africa stated that, although he may be able to convince headquarters staff to let him use WFP’s advance financing facility to make a purchase, he may encounter problems if the anticipated donor funding does not come through. With donor support, WFP has begun to test flexible financing mechanisms that are expected to facilitate LRP. These include the advance financing facility, a mechanism with which WFP finances a specific project to mobilize food based on forecasts of donor contributions to the project, and a forward purchase facility, a mechanism that allows WFP to take a market position at an optimal time without specific knowledge of where the purchased food will go or which donor’s funding will underwrite the specific procurement action. Some officers also noted that some donors’ preference for LRP may result in procuring locally or regionally when importing might be less expensive. Local and regional procurement can provide food that is more acceptable to the dietary needs and preferences of beneficiaries in recipient countries. People tend to be more familiar with food grown in neighboring regions than food from different continents. For example, people in many African countries prefer white maize, and Ethiopians who receive yellow maize as food aid from the United States might sell it in the cattle market as feed, according to a WFP procurement officer in Ethiopia.
Experts and practitioners have mixed views on how LRP affects donors’ ability to adhere to product specifications and quality standards—such as moisture content and the level of broken and foreign matter—which ensure food safety and nutritional content. However, donors have yet to systematically collect evidence that demonstrates whether food procured in different locations varies significantly in meeting product specifications and quality. Some experts contend that because locally and regionally procured food travels shorter distances and takes on average less time to arrive at its destination than internationally procured food aid, certain quality standards, such as moisture content, may be less critical. The longer grain has to travel, the more critical it is to control moisture content so that it does not become moldy and infested with insects. We have previously reported on quality problems that U.S. food aid has experienced during long transit times. Regarding LRP food aid, 9 of the 11 WFP procurement officers we interviewed for this review confirmed that quality was a challenge. They also noted, however, that some quality standards, which may often be difficult for suppliers in developing countries to meet, may not be crucial to individual recipients. For example, due to the lack of modern processing facilities, rice from some developing countries may have a higher level of broken kernels, but some recipients may actually prefer such rice because it is better suited to cooking porridge, a common method of consumption. However, concerns persist about the quality of food procured in developing countries. U.S. Wheat Associates noted that the ability to ensure food quality and safety could be jeopardized when purchases occur in markets where standards are less rigorous than those applied to U.S. suppliers of food aid. We learned of a few examples of locally or regionally procured food not meeting quality standards.
For example, representatives from WFP and NGOs told us that they had received food that turned out to be of lower quality or quantity than what was specified in the contract. A WFP procurement officer in South Africa reported that WFP requires the plants that manufacture maize meal or corn soy blend (CSB) to meet internationally accepted production standards, such as Hazard Analysis and Critical Control Point (HACCP), and hires surveyors to take samples for testing and assess whether the facility meets those standards. However, these surveyors recently found that 13 out of 15 maize meal plants were not in compliance with the standards and provided a list of activities the plants should undertake in order to improve. In addition, some factors that affect the efficiency of LRP also affect the ability to meet quality standards and product specifications. For example, a weak legal system limits buyers’ ability to enforce contracts, including imposing penalties when commodities delivered do not meet the specifications outlined in the contract. However, no evidence has been systematically collected on how LRP affects donors’ ability to adhere to quality standards and product specifications. A WFP official told us he does not believe there is any significant difference among different procurement types in the level of post-delivery loss, which is one measure of quality issues. However, WFP has not analyzed whether the quality issues are more severe for food procured locally or regionally versus food procured internationally. Local and regional procurement (LRP) has the potential to make food more costly to consumers in areas from which food is procured by increasing demand and driving up prices. While WFP has taken actions to help mitigate these impacts, such as coordinating with other implementing partners to gather market information, in some cases local purchases have adversely affected markets where the purchases were made. 
In particular, lack of reliable market intelligence—such as market prices, production levels, and trade patterns—makes it difficult to determine the extent to which LRP can be increased without causing adverse market impacts. Poorly functioning and unintegrated markets pose an additional challenge to avoiding adverse market impacts and expanding the use of LRP. Other challenges include lack of access to inputs and extension services, weak transportation infrastructure, and host government policies that inhibit food production. LRP can make food more costly to consumers by increasing demand and driving up prices. Although most of the WFP procurement officers we interviewed stated that local procurements of food aid generally do not affect market prices, our review of the literature and interviews during fieldwork show that there have been instances where LRP contributed to price hikes and price volatility in markets from which food is procured. However, the size of each of WFP’s local procurements tends to be small—on average about 298 metric tons, as compared with 671 metric tons for its international procurements. Additionally, WFP’s local procurements do not make up a large portion of the market for a food commodity in many developing countries, which reduces the risk of disrupting local markets. WFP’s local procurements of about 20,000 metric tons of maize in Burkina Faso in 2008, for example, amounted to less than 1 percent of a total market capacity of 700,000 metric tons. However, local procurements have also contributed to price hikes. In 2003, for example, when food aid donors tried to take advantage of low prices following 2 years of good harvests in Ethiopia, their purchases contributed to a rise in prices. Additionally, in 2003, WFP’s Uganda country office procured a large amount of locally grown maize from large traders based in Kampala in support of its operations in northern Uganda and in the Great Lakes region, particularly in Burundi. 
Due in part to this activity, maize prices in Kampala during this period were double those in Iganga (119 kilometers away). However, because maize is not a staple food in Uganda, consumers’ access to food may not have been adversely affected. WFP’s large local procurements in Uganda from a small number of large traders may also have contributed to an increase in the market power of those large traders. While local procurements of food aid have adversely affected markets in several developing countries, particularly in sub-Saharan Africa, almost all of the WFP procurement officers we interviewed stated that they supported the idea of the United States increasing its funding for LRP. However, WFP procurement officers we spoke to, NGO officials in countries we visited, and other experts we met with agreed that increased use of LRP should proceed incrementally and that significant challenges remain to expanding market capacity in many countries, particularly in sub-Saharan Africa. WFP studies, such as Food Procurement in Developing Countries (World Food Program, Executive Board First Regular Session, Rome: February 2006), have also noted that market information for many countries is very difficult for WFP or NGOs to collect and rely on when making purchasing decisions. For example, a 2005 study of local procurement in Ethiopia, commissioned by the United Kingdom’s Department for International Development (DFID), noted that market information at the smallholder farmer level was nonexistent and that there was no formal system for determining the domestic price of grain. To reduce the risk of contributing to price hikes and long-term food price inflation, WFP uses import parity pricing, solicits tenders for small amounts of food early in the harvest season, and works with other parties involved in international food assistance to plan food aid interventions.
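The import parity pricing mentioned above amounts to a simple decision rule. The following is an illustrative sketch, not WFP's actual procurement system; the function names and all price figures are hypothetical.

```python
def import_parity_price(commodity_cost, shipping, handling):
    """Cost per metric ton of importing: commodity plus shipping and handling."""
    return commodity_cost + shipping + handling

def procure_locally(lowest_local_tender, parity_price):
    """Buy locally only if the lowest local tender beats the import parity price.

    This captures both roles the comparison plays in the report:
    cost-efficiency (the local purchase costs less than importing) and
    "do no harm" (no local purchase when local prices exceed import prices).
    """
    return lowest_local_tender < parity_price

# Hypothetical figures, in dollars per metric ton.
parity = import_parity_price(commodity_cost=200, shipping=95, handling=15)  # 310
print(procure_locally(lowest_local_tender=250, parity_price=parity))  # True: 250 < 310
print(procure_locally(lowest_local_tender=340, parity_price=parity))  # False
```

In practice, as the report notes, this comparison is only one input among several; tender size, timing within the harvest season, and shared market intelligence also shape the decision.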
In addition to serving as a measure of cost-efficiency, comparing local prices with import parity prices helps those involved in local procurement determine whether a purchase will “do no harm” to local markets and consumers: local procurements are not made when local prices are higher than international prices. However, as a USDA study on LRP has noted, this standard may be a weak safeguard in cases where local prices for commodities are so much lower than import parity prices that substantial price increases could occur before prices reached the import parity threshold. WFP also tries to mitigate potential adverse market impacts by issuing tenders for small amounts of food early in the harvest season. Then, combined with available market intelligence, WFP determines whether its purchases have contributed to price hikes before putting out larger tenders. In addition to these tactics, WFP country offices work with other parties involved in food aid, such as donors, host government agencies, and NGOs, to coordinate efforts and share market information. For example, several WFP country offices in eastern and southern Africa created a country-by-country spreadsheet in the summer of 2008 to stay current on developments related to rapidly escalating food prices, such as government-imposed export bans. Even when market information is adequate, poorly functioning and unintegrated markets in sub-Saharan Africa and other developing countries still present challenges to expanding LRP while avoiding its potential adverse market impacts, according to food aid evaluations, experts convened for our roundtable, and our fieldwork. Unintegrated markets are characterized by a lack of price transmission among markets; informal cross-border trade that is difficult to track; and a lack of functioning commodity exchanges.
When markets are not well- integrated, either within countries or regionally, large purchases of food by WFP, other food aid organizations, or donors can cause localized price hikes. For example, WFP officials in Burkina Faso noted that the government’s purchases for its strategic food reserve have correlated with price spikes. Because the markets for agricultural commodities in sub- Saharan Africa, in particular, are not always clearly defined and do not always account for natural geographic and ethnic boundaries, significant informal cross-border trading that does not heed international and regional trade agreements can occur. For example, approximately 30 to 50 percent of Uganda’s marketable surplus for maize is traded informally, often on bicycles across the borders to Kenya or Rwanda, according to WFP, USAID, and foreign government officials, and others we interviewed during fieldwork in Uganda. Additionally, WFP’s Uganda country office staff stated that it is difficult to effectively plan food aid interventions involving LRP in the neighboring Democratic Republic of the Congo due to lack of information about informal cross-border trading and volatile market conditions. The market effects of such trading can be difficult to track and create additional constraints to understanding and avoiding adverse price impacts when conducting LRP. Finally, in all of sub-Saharan Africa there is only one well-functioning agricultural commodity exchange, the South African Futures Exchange (SAFEX). Several countries are developing warehouse receipt systems that would allow farmers access to credit, but the countries face challenges such as farmers’ lack of awareness about marketing structures and banks’ reluctance to provide credit to farmers. Many of the factors that affect persistent food insecurity in sub-Saharan Africa and other developing countries are also challenges to the implementation and potential expansion of LRP. 
These factors include farmers’ lack of access to inputs and extension services, weak transportation infrastructure, and weak or conflicting host government policies. As we reported in 2008, smallholder farmers in developing countries, particularly in sub-Saharan Africa, have limited access to modern inputs, such as enhanced seeds, fertilizer, and tractors, and to agricultural extension services. During our fieldwork, representatives from several farmer groups and associations told us they had experienced similar problems. In Burkina Faso, one farmer group in a food-deficit area had stopped growing maize for lack of fertilizer and seed and had started planting more cotton because it could receive government subsidies for that crop. Weak transportation infrastructure in many developing countries makes it difficult for smallholder farmers to move their crops to market and for local markets to integrate regionally and nationally. The World Bank has reported that less than half of the rural population in sub-Saharan Africa lives near an all-season road. Policies of host governments are not always favorable to supporting agricultural development, although the Comprehensive Africa Agriculture Development Program (CAADP) aims to address the lack of agricultural development in sub-Saharan Africa by focusing on budget prioritization and policy restructuring. USAID’s Initiative to End Hunger in Africa (IEHA) supports CAADP’s efforts by coordinating with other donors to provide technical and policy support for agricultural and market development. These factors, combined with unreliable market intelligence and poorly functioning and unintegrated markets, continue to represent significant challenges to increasing LRP in many developing countries, particularly in sub-Saharan Africa.
While the primary purpose of LRP is to provide food assistance in humanitarian emergencies in a timely and efficient manner, a potential secondary benefit is contributing to the development of the local economies from which food is purchased. This can be accomplished by increasing the demand for agricultural commodities, thereby increasing support for all levels of the commodity value chain: the individuals, businesses, and organizations involved in agricultural production and marketing, such as smallholder farmers, input suppliers, intermediate traders or middlemen, large traders, and processors. Figure 7 illustrates the agricultural commodity value chain supported by LRP. The development benefits to local economies are secondary because in almost all cases WFP and NGO purchases are not large enough or reliable enough to sustain increased demand over time. Only recently has WFP acknowledged that LRP can contribute to local development. In several of the countries we visited, we observed WFP LRP initiatives under way that might support local economies in the long term and connect LRP to other food security initiatives. However, many of them are new and limited in scale. For example, in February 2009, WFP began a cash voucher program in Burkina Faso that targets beneficiaries in two major cities, Ouagadougou and Bobo Dioulasso, by providing them with vouchers that are redeemable for food commodities. In September 2008, WFP launched its P4P program, with the goal of benefiting smallholder farmers directly by purchasing food from them. However, WFP officials recognize that these procurements will only amount to a small percentage of its total local procurements. With initial funding to manage and administer P4P from the Bill & Melinda Gates and Howard G. Buffett Foundations, pilot programs have been approved in 21 countries.
Certain legal requirements to procure U.S.-grown agricultural commodities for food aid and to transport at least 75 percent of them on U.S.-flag vessels may constrain agencies’ use of local and regional procurement (LRP). First, the Food for Peace Act supports in-kind food aid by specifying that funding under the Act can be used only to purchase U.S.-grown rather than foreign-grown agricultural commodities and thus cannot be used for LRP. Since 2002, appropriations for Title II of the Food for Peace Act have averaged $2 billion annually, none of which can be used to purchase foreign-grown food. However, from 2001 to 2008, through programs funded under a different authority, the Foreign Assistance Act, the U.S. government provided approximately $220 million in total cash contributions to WFP that were used to purchase foreign-grown commodities. In addition, since July 2008, Congress has appropriated $50 million to USAID that can be used for LRP, the Administration has allocated another $75 million for LRP in International Disaster Assistance funding, and the 2009 Omnibus Appropriations Act provided $75 million in development assistance funding to USAID for global food security, including LRP and distribution of food. Second, the Cargo Preference Act of 1954, as amended, which is enforced by DOT, requires at least 75 percent of the gross tonnage of all U.S.-funded food aid to be transported on U.S.-flag vessels. There is disagreement among USAID, USDA, and DOT on how to interpret and implement certain requirements of cargo preference, such as which agency is responsible for determining the availability of U.S.-flag vessels. If these requirements remain ambiguous, U.S. agencies’ use of LRP could be constrained. While most funding for U.S. food aid cannot be used to purchase foreign-grown food, a limited amount of funding has been used to support LRP. Programs under the Food for Peace Act have been the main vehicles of U.S. international food aid.
However, funding under the Act is restricted to the purchase of U.S.-grown agricultural commodities. Title II of the Food for Peace Act, administered by USAID, is the largest U.S. international food aid program, providing humanitarian donations to respond to emergency food needs or to be used in development projects. Since 2002, appropriations for Title II have averaged $2 billion annually, none of which can be used to purchase foreign-grown food as LRP would require. However, a limited amount of U.S. funding has been authorized through the 2008 Farm Bill, the Foreign Assistance Act, the 2008/2009 bridge supplemental, and the 2009 Omnibus Appropriations Act. First, the 2008 Farm Bill established a 5-year, $60 million LRP pilot program, administered by USDA, to respond to emergencies and chronic food aid needs around the world. The pilot requires a study of LRP experiences, field-based projects, evaluations of field-based projects by independent parties, and a USDA report submitted to Congress by 2012. USDA is currently establishing guidelines for proposals to conduct field-based LRP projects and estimates completion of the guidelines by the end of summer 2009. Second, the Foreign Assistance Act authorizes USAID and State to provide cash contributions to WFP and implementing partners to purchase foreign-grown commodities for specific program goals. From 2001 to 2008, the U.S. government, through programs operating under the Foreign Assistance Act, provided approximately $220 million in total cash contributions to WFP for 1,265 LRP transactions. WFP received contributions from State’s Bureau of Population, Refugees, and Migration (PRM); USAID’s Office of U.S. Foreign Disaster Assistance (OFDA); and USAID country missions, among other programs.
State officials stated that LRP can be used to fill gaps in refugee and internally displaced persons (IDP) feeding operations caused by lack of donor support; inflows of new refugees and IDPs; inability of donors to deliver food to an area quickly; or, more recently, rising costs of commodities and transportation. Similarly, officials from USAID agreed that LRP offers an opportunity to respond to food security crises and increase the total amount of food aid the United States can provide by filling gaps in-country before food shipped from the United States arrives. Third, since July 2008, Congress has appropriated $125 million to USAID that can be used for LRP. USAID received $50 million in fiscal year 2008 supplemental appropriations to respond to the global food price crisis with LRP, among other activities. Another $75 million in development assistance funding was made available to USAID through the 2009 Omnibus Appropriations Act for global food security, including LRP and distribution of food. For fiscal year 2009, the Administration made available for LRP $75 million in International Disaster Assistance funding. To implement LRP programs with this increased authority, USAID/OFDA issued guidelines for LRP proposals in October 2008 specifying that organizations applying for funding must (1) demonstrate an urgent need for food aid; (2) relate the factors associated with the emergency to the global food price crisis or to a declared disaster; or (3) provide compelling evidence that the use of local procurement will save lives, reduce suffering, and/or serve more people than the use of internationally procured Title II food aid. By April 2009, USAID/OFDA had programmed $63 million in direct cash contributions to WFP and implementing partners to purchase foreign-grown commodities for vulnerable populations in Ethiopia, Kenya, Kyrgyzstan, Nepal, Pakistan, Somalia, Tajikistan, and Zimbabwe.
Monitoring and evaluation plans for tracking program implementation, results, and outcomes are required with all awards. Because the leading U.S. food assistance agencies and DOT disagree on how to implement the Cargo Preference Act, their use of LRP could be constrained. The Cargo Preference Act, as amended, requires that at least 75 percent of the gross tonnage of agricultural foreign assistance cargo be transported on U.S.-flag vessels. DOT issues and administers regulations necessary to enforce cargo preference. Among other things, the department has the authority to require the transportation on U.S.-flag vessels of cargo shipments not otherwise subject to cargo preference (hereafter referred to as “make-up requirements”) when it determines that an agency has failed to sufficiently utilize U.S.-flag vessels. In some cases, however, USAID and USDA officials disagree with DOT on interpretations of cargo preference requirements, including (1) the agency responsible for determining availability of U.S.-flag vessels; (2) make-up requirements when U.S.-flag vessels are unavailable or when an agency waives cargo preference requirements during emergencies, also referred to as “notwithstanding authority”; (3) applicability of cargo preference requirements to public international organizations; and (4) the methodology used for cost reimbursements. Table 3 summarizes differences in agency officials’ interpretations of cargo preference requirements. The differences in agency interpretations of cargo preference are discussed below. 1. Agency responsible for determining availability of U.S.-flag vessels: Officials from USAID, USDA, and DOT stated that their respective agencies have independent authority to determine that U.S.-flag vessels are not available. According to USAID officials, the agency determines U.S.-flag nonavailability based on its program needs but seeks prior concurrence from DOT’s Maritime Administration (MARAD).
According to USDA officials, USDA determines the availability of U.S.-flag vessels based on programmatic needs, and DOT determines what constitutes a fair and reasonable shipping rate. Agency officials and industry experts noted that the availability of U.S.-flag vessels in areas such as Africa’s eastern coast is limited. DOT noted that a U.S.-flag vessel could ship food from one African port to another if the ship happened to be in the region conducting military operations or other business. However, most carriers do not currently provide regular regional service. U.S. officials in Kenya and South Africa confirmed this lack of regular service along Africa’s eastern coast. A shipping agent in South Africa stated that she was aware of two U.S.-flag vessels that frequent the port of Durban. Representatives of a coalition of U.S.-flag carriers indicated that U.S.-flag vessels could provide additional service in the region in the future but that their decision to relocate vessels depends on the regularity of regional shipments. According to a 2008 report regarding efforts to improve procurement planning, USAID and USDA compete with DOD and other exporters for space aboard the relatively few U.S.-flag vessels, some of which are ill-suited for the carriage of food-grade commodities. Moreover, of the three participating liner service container carriers utilizing U.S.-flag vessels, only one services Africa, where 54 percent of international food aid was delivered in 2007, according to INTERFAIS data. 2. Make-up requirements when U.S.-flag vessels are unavailable or an agency uses “notwithstanding” authority: Agencies disagree as to whether shipments made on foreign vessels, because U.S.-flag vessels were not available or because an agency waives cargo preference requirements utilizing authority to conduct a program notwithstanding any other provision of law, should count toward the maximum tonnage allowed on foreign-flag vessels.
DOT has stated that it should, and any tonnage shipped on foreign-flag vessels that exceeds the 25 percent maximum tonnage should be made up the following year. However, USAID has the authority to implement emergency programs, including international disaster assistance, notwithstanding any other provision of law. With this authority, USAID has waived cargo preference requirements to ensure food aid delivery during emergencies. In those cases, it believes the tonnage shipped on foreign-flag vessels should not be counted toward the maximum foreign-flag tonnage allowed under cargo preference. DOT officials believe otherwise. Since 2005, USAID has used notwithstanding authority to override cargo preference four times, two of which were in 2005 when there were extreme price disparities between U.S.-flag and foreign-flag offers to transport emergency food aid to Kenya and Somalia. 3. Applicability of cargo preference requirements to public international organizations: Agencies also disagree on whether grants made to international organizations, such as WFP, must incorporate cargo preference requirements. According to DOT officials, if public international organizations use U.S. funding to purchase food and that food requires ocean shipping, U.S.-flag vessels should be given cargo preference. For example, in 2006, DOT notified the USAID West-Bank/Gaza mission that it had not conformed to the legal mandate in a U.S.-funded grant with WFP to purchase 16,000 metric tons of wheat flour for shipment to Tel Aviv. However, according to the USAID policy manual, public international organizations are allowed to abide by their own procurement rules. Therefore, international organizations that receive cash contributions for regional procurement of food are not required to ship on U.S.-flag vessels. 4. 
Reimbursement methodology: DOT is required to reimburse food aid agencies for a portion of the ocean freight and transportation costs that exceed 20 percent of their total program costs. However, agencies disagree on whether reimbursement levels are sufficient to cover the additional costs incurred by transporting the food on U.S.-flag vessels. According to USDA officials, DOT has been reluctant to reimburse USDA for any excess costs beyond 20 percent freight costs and has not gone on the record about reimbursement for USDA's LRP pilot field-based projects. According to USAID, areas of ambiguity regarding reimbursements include: costs of ocean freight and transportation on U.S.-flag vessels that exceed 20 percent of program costs, transportation from overseas food warehouses to final destinations, foreign inland transport costs, and costs of ocean freight and transportation on U.S.-flag vessels when there is no foreign-flag vessel available for cost comparison. Without clarity on how to interpret cargo preference regulations, agencies' ability to utilize LRP to respond to emergencies may be constrained. For example, as of October 2008, DOT has the authority to require the transportation on U.S.-flag vessels of cargo shipments not otherwise subject to cargo preference when it determines that an agency has failed to sufficiently utilize U.S.-flag vessels. DOT has not yet issued regulations governing how it will implement this new authority, and USAID faces uncertainty regarding whether increased use of LRP will trigger the imposition of make-up requirements. Cargo preference could also constrain USAID's and USDA's LRP pilot programs if U.S.-flag vessels are unavailable. USAID officials indicated that, given the limited volume of regional shipments relative to regular Title II shipments, the agency would probably not be able to meet the U.S.-flag compliance threshold if even one shipment could not be transported on a U.S.-flag vessel.
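The tonnage and reimbursement questions at issue reduce to simple arithmetic. The sketch below is a minimal illustration, not any agency's actual method: it assumes the 75 percent tonnage threshold applies and deliberately leaves open the disputed question of whether shipments made under waivers or vessel nonavailability count toward the total.

```python
def makeup_tonnage(total_tons, us_flag_tons, us_share=0.75):
    """Shortfall in U.S.-flag tonnage against the Cargo Preference Act
    requirement -- the amount DOT could require an agency to 'make up'
    in a later year. Whether tonnage shipped under notwithstanding
    authority counts toward total_tons is exactly the disputed point."""
    return max(0.0, us_share * total_tons - us_flag_tons)

def dot_reimbursement(ocean_freight_cost, total_program_cost, threshold=0.20):
    """Portion of ocean freight and transportation costs exceeding
    20 percent of total program costs, which DOT must reimburse."""
    return max(0.0, ocean_freight_cost - threshold * total_program_cost)

# 100,000 tons shipped, 70,000 on U.S.-flag vessels: a 5,000-ton shortfall
# against the 75 percent requirement.
print(makeup_tonnage(100_000, 70_000))        # 5000.0
# $30 million of freight on a $100 million program exceeds the
# 20 percent threshold by roughly $10 million.
print(round(dot_reimbursement(30e6, 100e6)))  # 10000000
```

All figures are invented for illustration; only the 75 percent and 20 percent thresholds come from the statute and reimbursement rule discussed above.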
According to a USDA official, countries chosen for its LRP pilot field-based projects will likely receive food shipments only once in a fiscal year. If U.S.-flag vessels are unavailable for service at that time, it is unclear how USDA will make up tonnage by country and program the following year since, according to officials, the pilot is of limited duration. In addition, USDA will not cut other country program budgets in order to make up tonnage by country for its LRP program. Finally, in anticipation of potential sanctions by DOT, the lack of clarity over what happens when USAID waives cargo preference through notwithstanding authority could constrain its ability to fully utilize that authority when responding to emergencies that require regional shipment of food. To date, USAID has used notwithstanding authority to waive cargo preference requirements on only four occasions, in part due to the uncertainty of a regulatory response from DOT. The $200 million that USAID has for LRP is available to be expended notwithstanding any other provision of law. According to USAID officials, the agency has not used its authority to waive cargo preference requirements for any of the LRP transactions funded through May 2009. The MOU that outlines the manner in which USAID, USDA, and DOT coordinate the administration of cargo preference requirements was last updated in 1987 and does not reflect modern transportation practices or the areas of ambiguity related to LRP. In our 2007 review of U.S. food aid, we found that cargo preference can increase delivery costs and time frames, with program impacts dependent on the sufficiency of DOT reimbursements. Therefore, we recommended that USAID, USDA, and DOT seek to minimize the cost impact of cargo preference regulations by updating the implementation and reimbursement methodologies of cargo preference as it applies to U.S. food aid. Since 2007, USAID and USDA have proposed a working group with DOT to renegotiate the MOU.
To date, however, there have been few meetings and no agreement has been reached between the agencies. The timely provision of food aid is of critical importance in responding to humanitarian emergencies and food crises. In 2007 and 2008, the number of chronically hungry people in the world grew by 115 million, despite an international commitment to halve the number of hungry people by 2015. While the United States has primarily provided in-kind food aid for over 50 years, it has been exploring expanded use of LRP. This tool has the potential to better meet the needs of hungry people by providing food aid in both a more timely and less costly manner. To fully realize this potential, however, challenges to its effective implementation must be addressed. Concerns about the quality of LRP food aid persist, but aid organizations still do not systematically collect evidence on LRP’s adherence to quality standards and product specifications that would ensure food safety and nutritional content. Furthermore, experts and practitioners caution that scaling up LRP in recipient countries should be done gradually to ensure that the potential benefits of LRP are maximized while any potential adverse impacts are minimized or avoided. While accurate and reliable market data would help ensure that U.S. agencies and implementing partners make optimal decisions with regard to when, where, and how to procure food locally or regionally, such data are not yet available. Finally, the implementation of LRP may be constrained by U.S. agencies’ disagreement on a number of requirements associated with cargo preference, thus elevating the importance of an updated interagency MOU that resolves existing ambiguities. To enhance the impact that LRP can have on the efficiency of food aid delivery and the economies of countries where food is purchased, we recommend that the Administrator of the U.S. 
Agency for International Development and the Secretary of Agriculture take the following three actions: systematically collect evidence on LRP’s adherence to quality standards and product specifications to ensure food safety and nutritional content; work with implementing partners to improve the reliability and utility of market intelligence in areas where the U.S.-funded LRP occurs, thereby ensuring that U.S.-funded LRP practices minimize adverse impacts and maximize potential benefits; and work with the Secretary of Transportation and relevant parties to expedite updating the MOU between U.S. food assistance agencies and the Department of Transportation, consistent with our 2007 recommendation, to minimize the cost impact of cargo preference regulations on food aid transportation expenditures and to resolve uncertainties associated with the application of cargo preference to regional procurement. DOT, USAID, USDA, and WFP provided written comments on a draft of this report. We have reprinted these agencies’ comments in appendixes VII, VIII, IX, and X, respectively, along with our responses. Additionally, USAID, DOT, State, and WFP provided technical comments on a draft of our report, which we have addressed or incorporated as appropriate. Treasury and MCC did not provide comments. USAID generally concurred with our recommendations. With regard to the first recommendation, however, USAID noted that it may be more efficient for us to recommend that all food aid organizations collaborate in the development and implementation of systems to monitor quality assurance and product specification issues in all food purchases, including LRP. The recommendation does not preclude such coordination among the agencies. We recognize USAID’s and USDA’s efforts to date to implement our 2007 recommendation to develop a coordinated interagency mechanism to update food aid specifications and products to improve food quality and nutritional standards. 
Including actions to systematically collect evidence on LRP's adherence to quality will make these efforts more efficient. With regard to the third recommendation, USAID commented that MARAD's position on the applicability of the 75 percent threshold to USAID-funded LRP, rather than the 50 percent threshold, is devoid of legal merit. In providing information on agencies' interpretations of cargo preference requirements as they pertain to LRP, we sought to identify areas where agencies disagree on the applicability and interpretation of these requirements. We did not attempt to adjudicate the differences in interpretation among the agencies involved. However, in technical comments to a draft of this report, DOT changed its position regarding thresholds and now concurs with USAID's interpretation, thus eliminating this issue as an area of ambiguity. USDA generally agreed with our report, noting that our comparisons of costs and delivery times were insightful. However, USDA observed that aggregating some commodities, such as vegetable oil and beans, could cause a loss of precision in our methodology. To obtain an overall picture of costs, we worked to ensure that we had the largest number of observations over the longest possible time period, so some aggregation was required. USDA also stated that our report does not specify how differences in quality or specifications were handled. We recognize that the price of different commodities in the same category may vary depending on quality or specifications. However, we noted WFP's assertion that its commodities meet both the importing and exporting countries' standards, and there is no systematic evidence that U.S. commodities differ in quality compared to LRP commodities. Nonetheless, we recognize that there may be differences in the quality of certain commodities, and we note such differences in our illustrative example of LRP for Tajikistan.
In addition, both USDA and DOT noted that we did not compare delivery times for LRP and in-kind food aid from prepositioning sites. Although we did not differentiate prepositioned commodities in our cost comparison, we included them in our data analysis and note that prepositioned commodities were a very small part of U.S. food aid during the time period we examined. DOT stated that additional analysis may be warranted before concluding that LRP offers a tool to reduce costs and shorten delivery time. Although further analysis of LRP practices would be useful, our analysis demonstrated consistent results across 8 years of data. For example, local procurement in sub-Saharan Africa cost about 34 percent less than USAID commodities procured at around the same time and delivered to the same country. DOT also stated that it implements the cargo preference statute through regulation, not through an interagency MOU. While this is true, the regulations contain ambiguities that have previously required resolution through an MOU. Our report describes new ambiguities that could arise in applying cargo preference in the context of regional procurement. We believe that these ambiguities can be resolved by updating the MOU. Further, there is no requirement that establishing regulation precede an MOU, nor does an MOU preclude the issuance of new regulation. The updated MOU, establishing consensus among the relevant agencies, could be reflected in any future regulation that DOT may draft and finalize through the rule-making process. WFP welcomed our timely examination of LRP as one of numerous tools to deliver effective and efficient food assistance to those in greatest need. However, WFP stated it was perplexed that concerns persist about the quality of food procured in developing countries, given the lack of evidence showing that LRP introduces quality challenges that are not already challenges to internationally procured and donor-provided food aid.
We note that quality is one issue that many WFP procurement officers and several other officials we interviewed identified as a challenge for LRP. However, the lack of systematically collected data makes it difficult to objectively analyze how LRP adheres to quality standards and product specifications. Our first recommendation addresses this issue. In addition, WFP offered some qualifications to our discussion of the impact of LRP on economies where food is procured, noting the lack of systematic evidence to suggest that current LRP practices adversely impact host markets. In this report, we explain several efforts that WFP and others have taken to significantly improve the availability and reliability of market intelligence in developing countries. Nonetheless, WFP, NGOs, U.S. agencies, host governments, and experts convened for our roundtable stated that the most significant challenge to avoiding potential adverse market impacts when conducting LRP is unreliable market intelligence. Therefore, we are recommending improving the reliability and utility of market intelligence. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We are sending copies of this report to interested Members of Congress, the Administrator of USAID, and the Secretaries of Agriculture, State, Transportation, and the Treasury. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XI.
Our objectives were to determine (1) the impact of local and regional procurement (LRP) on the efficiency of food aid delivery, (2) the impact of LRP on economies where food is procured, and (3) U.S. legal requirements that could affect U.S. agencies' use of LRP. We selected four countries for fieldwork based on geographic region, WFP procurement data, and the presence of WFP procurement officers in-country. We selected countries in sub-Saharan Africa, excluding countries with current conflict, because these regions within Africa have high prevalence rates of undernourishment. While this selection is not representative in any statistical sense, it ensured that we had variation in the key factors we considered. We do not generalize the results of our fieldwork beyond that selection, using fieldwork primarily to provide illustrative examples. To understand the experiences of other donors with local and regional food procurement and corroborate information gathered in our literature review, we conducted semi-structured interviews with 11 principal WFP procurement officers in Africa and Asia. We focused on Africa and Asia because that is where the majority of food procurement abroad takes place. The 11 we interviewed represented all the principal WFP procurement officers that were in place in Asia and Africa at the time we conducted our fieldwork. We asked each procurement officer a series of open-ended questions on the factors impacting, and actions that could be taken to improve, cost, delivery time, quality, market impact, and development. To ensure that the questions were clear and unambiguous and did not place an undue burden on respondents, and that respondents had the necessary information and time to answer the questions, we conducted pre-tests with WFP procurement officers in Sudan and Thailand. To determine which factors and actions were mentioned most frequently, we coded the officers' responses to the questions.
One analyst developed and applied the codes to the interviews and another analyst reviewed both the codes and their application. Based on that coding, we report data on the number of officers that mentioned each factor and action. The views we report are limited to WFP procurement officers in Africa and Asia and may not represent WFP procurement officers in other regions. In addition, we reviewed economic literature on LRP practices and recent reports, studies, and papers issued by U.S. agencies, multilateral organizations, and bilateral donors. These sources were chosen because they represent a wide cross section of the discussion on LRP and are written by the leading authorities and institutions working in the field. In the four African countries that we selected for fieldwork—Kenya and Uganda in East Africa, South Africa in southern Africa, and Burkina Faso in West Africa—we met with U.S. Agency for International Development (USAID) and other U.S. officials; World Food Program (WFP) country office staff; and representatives of nongovernmental organizations (NGO), smallholder farmer groups, and commodity exchanges. We also visited several sites where food aid may be locally purchased and where food aid is delivered. In Washington, D.C., we interviewed officials from U.S. agencies, including USAID; USDA; the Departments of State, Transportation (DOT), and the Treasury; and the Millennium Challenge Corporation (MCC). We also met with the International Food Policy Research Institute (IFPRI) and the World Bank. In New York, we met with the Rockefeller Foundation, the Alliance for a Green Revolution in Africa (AGRA), and Columbia University. In Rome, we met with FAO, WFP, and the International Fund for Agricultural Development (IFAD). We also met with the U.S. Mission to the UN (USUN) in Rome and several bilateral donors’ permanent representatives to the Rome-based UN food and agriculture agencies. 
In addition, in Washington, D.C., we convened a roundtable of 10 experts and practitioners—including representatives from academia, research organizations, multilateral organizations, NGOs, and others—to further delineate, based on our initial work, some of the key issues and challenges to the implementation of LRP. To examine the impact of LRP on the efficiency of food aid delivery, we focused on cost, delivery time, and quality. To evaluate LRP cost efficiency, we compared WFP's costs with USAID's. WFP's costs are based on WFP's procurement data from 2001 to 2008, and USAID's costs are based on USAID's Line 17 reports from fiscal year 2001 to 2008. We did not evaluate the impact of prepositioning on U.S. food aid costs, although we did not exclude commodities shipped from prepositioning sites, which were small in value relative to overall U.S. food aid for the time period we examined. WFP's procurement data include information on the commodities purchased, the date of the purchase, the origin of the commodities, the recipient of the food aid, the contract terms, and the purchase prices. To assess the reliability of the data, we (1) reviewed existing documentation related to the data sources and (2) interviewed WFP and USAID officials familiar with the data sources. Accordingly, we determined that the data were sufficiently reliable for the purposes of this report. Since WFP's procurements are under different contract terms, the purchase prices include different costs. For example, most of WFP's international procurements are under the term free on board (FOB), which normally does not include ocean shipping and handling. USAID's data include the costs for commodities and ocean shipping and inland transportation, storage, and handling (ITSH). To make the costs comparable, we included different USAID cost components depending on the contract terms of the corresponding WFP purchase.
See table 4 for details of the corresponding WFP contract terms and USAID cost components. For each WFP purchase, we searched for a "match" in USAID's data. A match is defined as a purchase transaction of a similar commodity, in the same quarter of the same year, for the same recipient country. The commodity groups we selected are beans, corn soy blend (CSB), maize, maize meal, rice, sorghum/millet, vegetable oil, and wheat, which represent the majority of food aid for both WFP and USAID. We aggregated the more detailed commodities in USAID's data. For example, we aggregated many types of beans (red beans, kidney beans, black beans, pinto beans, and other beans) into the bean commodity group. We compared WFP's per metric ton cost with its matched USAID cost. See table 5 for the number of matches in our analysis, which occurred for 8 commodities out of approximately 37,000 transactions from 2001 to 2008. We compared the costs by region (sub-Saharan Africa, Asia, and Latin America) and by procurement type (local, regional, and international). To account for DOT cargo preference reimbursements, we reduced USAID ocean freight costs by 25 to 35 percent and found that the choice did not change our results significantly. Based on previous GAO work, we consider 25 percent to be a reasonable value to account for cargo reimbursements over the 8-year period. We analyzed the percentage of WFP transactions that had lower costs than USAID's, as well as the cost differential. See figure 8 below for a histogram of the cost differentials. The cost differences we identified between U.S. food aid and LRP of similar food products, around the same time frame, and for the same countries represent potential cost-saving opportunities. However, many factors can reduce or even eliminate the amount of savings, including whether food is available in the local and regional markets and how much additional purchases in these markets will drive up prices.
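The matching procedure described above amounts to a key-based join on commodity group, quarter, year, and recipient country. The sketch below is an illustrative reconstruction, not our actual analysis code; the commodity, dates, and per-ton prices are hypothetical.

```python
from collections import defaultdict
from statistics import mean

def match_costs(wfp_purchases, usaid_purchases):
    """For each WFP purchase, find USAID purchases of a similar commodity
    in the same quarter of the same year for the same recipient country,
    and return the fractional cost differential for each match."""
    # Index USAID transactions by the matching key.
    usaid_index = defaultdict(list)
    for commodity, year, quarter, country, cost in usaid_purchases:
        usaid_index[(commodity, year, quarter, country)].append(cost)

    differentials = []
    for commodity, year, quarter, country, wfp_cost in wfp_purchases:
        matches = usaid_index.get((commodity, year, quarter, country))
        if matches:  # unmatched WFP purchases drop out of the comparison
            differentials.append((wfp_cost - mean(matches)) / mean(matches))
    return differentials

# Hypothetical per-metric-ton costs: one WFP local purchase vs. two matched
# USAID transactions (same commodity group, quarter, year, and country).
wfp = [("maize", 2007, 2, "Kenya", 200.0)]
usaid = [("maize", 2007, 2, "Kenya", 300.0),
         ("maize", 2007, 2, "Kenya", 310.0)]
print(match_costs(wfp, usaid))  # about -0.34, i.e., 34 percent cheaper
```

In practice, the aggregation of detailed commodities into groups and the 25 to 35 percent reduction of USAID ocean freight costs would be applied before this join.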
We discussed this methodology at the expert roundtable we conducted, and the experts indicated that our methodology was sufficient in controlling for various factors that may influence costs, making the costs comparable. To evaluate the impact of LRP on delivery time, we relied on interviews with WFP officials and representatives from various organizations we met with during fieldwork in the four countries we visited. In addition, WFP generated delivery times, by procurement type, for 10 countries in sub-Saharan Africa that we selected. The countries that we selected had received food aid purchased or donated internationally, as well as through LRP. Our analysis of the aggregate delivery time consisted of the average of the median delivery times for each of the 10 countries across the four procurement types. To evaluate the impact of LRP on quality, we interviewed U.S. agency officials, WFP officials, and NGO representatives. We reviewed assessments of WFP local and regional procurement. We discussed with WFP the methodology it used to generate the delivery times and the limitations of that methodology. We determined the data are sufficiently reliable for our purposes. We chose to use WFP data because they included a substantial amount of both international and local and regional procurements. We did not compare WFP's delivery time to U.S. in-kind delivery time. We also did not evaluate the impact of prepositioning on U.S. food aid delivery time. To examine the impact of LRP on the economies of countries where food is procured, we relied on the responses of WFP procurement officers to our semi-structured interview questions; our economic literature review of LRP practices, reports, studies, and papers; and our interviews with WFP, U.S. government, NGO, World Bank, and private-sector officials in Washington, D.C.; Rome; and the countries we visited for fieldwork in sub-Saharan Africa.
We also discussed our preliminary findings on the potential market risks, market intelligence, and development benefits associated with LRP at our expert roundtable and received validation and further input. To examine U.S. legal requirements that could affect U.S. agencies' use of LRP, we reviewed U.S. programs authorized in the 2008 Farm Bill, the Food for Peace Act of 1961, the Foreign Assistance Act, and the 1954 Cargo Preference Act, as amended, and appropriations for fiscal years 2002 to 2008. To better understand agency interpretations of the applicability of cargo preference, we collected information from USAID, USDA, and DOT officials with regard to U.S.-flag vessel availability, compliance thresholds, notwithstanding authority, and application to international organizations. The information on foreign law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. We conducted this performance audit from June 2008 to May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following is a summary of some of the key donor food security initiatives in recent years, many of which support LRP. (See figure 9.) To evaluate the impact of local and regional procurement on delivery time, we relied on lead time data provided by WFP for 10 countries in sub-Saharan Africa that we selected, all of which had received locally and regionally procured food aid and food aid donated internationally.
The delivery time (also referred to as "lead time") reflects the number of days elapsed between the date of the purchase order and the date WFP took possession of the food in the recipient country. The data cover the period from 2004 to 2008. As shown in figure 10, international in-kind donations took the longest time, averaging 147 days. Local and regional purchases took on average 35 and 41 days, shortening the lead time from international donations by 112 days and 106 days, respectively. In our illustrative April 2009 comparison for Tajikistan, commodities may not be identical. For example, the protein level for U.S. wheat flour may be different from the wheat flour from Kazakhstan. Soybean vegetable oil from the United States is fortified, while cotton seed oil from Russia is not fortified. Yellow peas were provided from the United States, and lentils were provided from Russia. U.S. in-kind food aid was a USAID Single-Year Assistance Program funded through Title II of the Food for Peace Act. LRP was funded through USAID's 2008 supplemental appropriations. To identify factors that could limit the efficiency of LRP, steps WFP has taken to improve the efficiency of LRP, and factors that limit or strengthen the positive development impacts of LRP, we conducted semi-structured interviews with 11 WFP procurement officers in Africa and Asia. Figure 12 lists the factors that WFP procurement officers reported limit the efficiency of LRP and steps they identified that could improve and ensure the efficiency of LRP. Strengthening agricultural infrastructure includes actions such as provision of raw materials or establishing bagging and processing facilities. Figure 13 lists factors that limit the positive development impacts LRP could have and actions to improve or strengthen such impacts.
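The aggregate lead-time statistic used above, described in the methodology as the average of per-country median lead times computed separately for each procurement type, can be sketched as follows. The country labels and day counts are invented for illustration and are not the actual WFP data.

```python
from statistics import mean, median

def aggregate_lead_time(lead_times_by_country):
    """Average of each country's median lead time (in days), for one
    procurement type, as in the methodology's 10-country analysis."""
    return mean(median(days) for days in lead_times_by_country.values())

# Hypothetical days from purchase order to in-country receipt.
local = {"Country A": [30, 35, 40], "Country B": [28, 34, 50]}
intl  = {"Country A": [140, 150, 160], "Country B": [130, 145, 155]}
print(aggregate_lead_time(local))  # 34.5
print(aggregate_lead_time(intl))   # 147.5
```

Using the median within each country limits the influence of a few unusually slow shipments before averaging across countries.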
The WFP procurement officers discussed topics, some of which we had identified as factors affecting food security in a previous report, namely, agricultural productivity, rural development, and governance, as shown below. Officers also discussed specific characteristics of WFP business practices. Agricultural productivity. Several officers reported that small farmers' lack of access to inputs and markets, or the underdeveloped nature of agricultural markets more generally, limits their ability to create positive development impacts with LRP. Up to seven officers suggested actions to improve agricultural productivity. For example, the Thailand officer suggested actions to support small farmers in Laos by providing training on the corn soy production process. The Pakistan officer suggested strengthening agricultural markets by establishing seed nurseries. Rural development. Two officers indicated that poor rural development, such as inadequate land holdings or inadequate access to information in remote areas, limits their ability to create positive development impacts with LRP. However, eight officers suggested actions to strengthen rural development through, for example, providing equipment to dry grain or educating communities on food fortification. Governance. Several officers noted that due to the relatively small size of LRPs, particularly those conducted through WFP's P4P program, their ability to result in positive development impacts is limited and ultimately depends on whether local governments also have sound agricultural policies in place that support LRP. WFP business practices. Four officers mentioned that imperfect market impact information is a challenge to creating positive development impacts with LRP. Four officers discussed the importance of market impact monitoring. Two officers also suggested changes to WFP business practices, such as merging LRP with the P4P program.
This appendix summarizes selected studies on LRP, including several analytical studies conducted by WFP to assess its use of LRP in sub-Saharan Africa (see table 6). The studies describe the types of markets and the trading environment in which LRP is conducted, as well as the impact of LRP on local markets and the extent to which those markets are integrated; the studies also provide an estimate of savings achieved through LRP. The first study in the table presents the types of questions that should be addressed when undertaking LRP. The remaining studies can be reviewed with these questions in mind. The report proposes a number of questions that should be asked when making decisions about LRP. What kind of supply response will the LRP market have? Can traders supply the demand without price increases on the local market? Is the market integrated with other supply markets so traders have an incentive to import additional food into the market? What is the local price relative to the import parity price (IPP)? Should both of these prices include all costs? Do local traders behave competitively? Can traders exercise market power by raising prices so as to extract most of the gains from transfers? What is the likelihood of supply disruptions or delays due to breach of contract, insufficient storage capacity, supplier inability to deliver on contract terms, government interference (such as export bans and currency controls), and logistical bottlenecks? Study: Food Procurement in Developing Countries, World Food Program, Executive Board, First Regular Session, Feb. 2006, Rome. The report summarizes other WFP studies on LRP in Bolivia, Burkina Faso, Ethiopia, Nepal, South Africa, Uganda, and Congo. The vast majority of WFP operations are in response to emergencies and have wide fluctuations in needs. WFP's food purchasing tends to be irregular and unpredictable, which seriously limits its ability to contribute to market development.
WFP has not had success in procuring directly from farmers and farmer groups. The case studies indicate that supporting farmers and farmer groups has mixed results and may lead to higher prices paid, higher administrative costs, more contracts, and greater risk of default. In many low-income countries, national market intelligence systems are weak, and reliable and timely data are not available. Often, it is not the cost of the food that is prohibitive but the management costs associated with local procurement in surplus-producing regions where there is little or no market infrastructure. These management costs include monitoring and supporting the completion of contracts, the costs and risks of contract default, and the risk of inadequate food quality. Study: World Food Program Local and Regional Food Procurement: An Analytical Review (Ethiopian Case Study), Final Report. Addis Ababa: June 2005. Year: 2001-2004 Country: Ethiopia Commodity: Corn and wheat In 2003 there would have been cost savings of $78 per ton on locally purchased wheat and corn. Based on analysis of 1996-2004 data, 60-70 percent of the markets are integrated. Producers receive 75 percent of the retail price in Addis Ababa, leaving a 25 percent retail margin. Ethiopia's grain marketing system is constrained by lack of access to financial resources, inadequate infrastructure, poor roads, inadequate access to market information and storage facilities, lack of standards and grades, high transfer costs, and nonfulfillment of delivery options. Traders deal in small annual volumes and do not hold grain in storage for seasonal arbitrage. Delivery of the product is a real challenge, particularly ensuring the quality of the product delivered. Rejection of delivery for failure to meet quality standards is frequent. Because traders do not keep stocks on hand, contract default is a problem when traders are unable to procure the proper amount or quality at the expected price.
Default occurs if traders get a better offer; there are problems with traders not honoring their commitments. Transport shortages and tariff increases hinder timely delivery. Study: Local and Regional Food Procurement in Uganda: An Analytical Review, a study report prepared for the Economic Analysis and Development Policy Unit in the Strategy, Policy and Program Support Division of the World Food Program, Serunkuuma and Associates Consult, June 2005. Year: 2001-2004 Country: Uganda Commodity: Corn and beans In 2003, WFP spent $12 million less on corn and beans purchased from Uganda than if it had imported these commodities. While imports from the United States and South Africa may cost less at port, added inland transportation costs made them more expensive than LRP. LRP delivered food to beneficiaries in 3 months, while international procurement delivery took an average of 6 months. Locally, in 2003, 7 of 8 markets in Uganda appeared integrated. Regionally, Uganda markets were integrated with Tanzania markets but not with Kenya markets. WFP contracts require higher quality than locally traded corn, which has high moisture content and is subject to rot. Poor post-harvest practices, storage facilities, and equipment such as dryers and shellers affect the quality of the final product, leading to high post-harvest losses and increased costs to clean the grain. Intensification of local purchase contributed to a reduction in corn exports. Supplies still come from a small number of companies or farmer groups, suggesting high concentration and potential for monopolistic behavior. Lack of sufficient storage capacity and lack of access to bank loans without WFP contracts are constraints for smaller traders. The high cost of borrowing and the unavailability of long-term finance are additional constraints. Many traders enter into contracts with WFP before they have stock, putting them at higher risk of contract default.
This also adds pressure on markets because large quantities are purchased in a short period of time, which may lead to drastic price changes. Study: Food Aid Procurement in South Africa: An Analytical Review of WFP Activities; Nick Vink, Thulasizwe Mkhabela, Ferdie Meyer, and Johann Kirsten; April 2005. Year: 2001-2004 Country: South Africa Commodity: Corn In June 2003, farmers received 53 percent of the retail value of corn meal. During the period of analysis, WFP's unit price for corn was above South Africa's average prices. This difference in price may be due to the transportation differential, contract delivery terms, and the exchange rate. Traders charge a $5-$10 risk premium to account for the time that elapses from submitting a tender to receiving an award. While WFP has been active in buying corn in the South African market, WFP purchases represent a very small portion of the market: 1/5 to 3/4 of a percentage point of the gross value of South African agricultural production. South Africa has a functioning futures commodity market, the South African Futures Exchange (SAFEX), which was established after deregulation when the corn board was abolished. Purchase prices are determined by comparing SAFEX prices to the IPP, which is the representative price for purchases on the world market. Study: Democratic Republic of Congo Food Procurement Assessment Mission: Equateur, Katanga, Orientale, North Kivu and South Kivu Provinces; World Food Program; May 2007. Years: 2001-2006 Country: Democratic Republic of Congo Commodity: Corn and pulses There is no continuity in WFP purchases; quantities vary significantly from year to year. There are zones of food insecurity alongside zones considered food-secure.
Factors hampering production and purchases include the following: poor road and rail infrastructure; excessive official and unofficial (illegal) duties and taxes; disruptions to the trading system, which are often political; lack of permanent market buyers; lack of storage, drying, cleaning, milling, and bagging facilities; lack of access to seeds and fertilizers; lack of substantial storage or stocks available; limited facilities for cleaning, drying, and milling; and quality problems, including moisture content, infestation, and losses due to poor storage. Study: Impact of WFP's Local and Regional Food Purchases (A Case Study on Burkina Faso), Final Report, submitted by Institut du Sahel, Comité Permanent Inter-Etats de Lutte Contre la Sécheresse dans le Sahel, Mali. Year: 2002-2005 Country: Burkina Faso Commodity: Corn, corn meal, sorghum, and cowpea During the period, an average of 34 days elapsed between the invitation for tender and the signing and implementation of the contract. In 2004, prices paid for corn by WFP were lower than the IPP in 6 of 7 LRP operations. The price differential ranged from 43 to 72 percent of prices paid by WFP. Suppliers stated that there were delays in WFP payments. WFP purchases did not change the level of integration between markets. Market participants indicated that WFP purchases resulted in price increases of 5 to 10 percent. Many organizations intervene in local markets unexpectedly and without prior consultation, simultaneously purchasing large quantities. Such activity contributes to price increases and should be harmonized. WFP contracts were concentrated among a limited number of suppliers: there were 15 suppliers, and 3 of them received more than half the payments made. Other organizations enter the market with food aid purchases, contributing to price increases; donors should coordinate.
Study: Local and Regional Food Aid Procurement: An Assessment of Experience in Africa and Elements of Good Donor Practice, David Tschirley and Anne Marie Del Castillo, Michigan State University International Development Working Paper No. 91, 2007. Year: 2001-2005 Country: Kenya, Uganda, Zambia, and Mozambique Commodity: Corn and corn-soy blend The report cites an analysis by Clay, Riley, and Urey comparing estimated costs of food aid from the United States with LRP in three countries. LRP was 66 percent less expensive than in-kind donations for all commodities; LRP cost 61 percent less for corn and 52 percent less for corn-soy blend. Comparing local prices to import parity prices for 2001-2005, local purchase saved the United States nearly $68 million. These savings would allow 75 percent more food aid to be provided. The results on prices paid were mixed: WFP paid a 10 percent premium in Kenya from 2001 to 2005, an 18 percent premium in Uganda from 2001 to 2004, and the local market price in Mozambique from 2000 to 2005. In Zambia, WFP paid the local price over the period. Some evidence shows that LRP contributed to price surges in Uganda in 2003 and in Niger and Ethiopia in 2005 to 2006. Contract default is a major risk of LRP. There is a limited pool of qualified traders with certified financial capacity, access to physical infrastructure, and trading experience. Most sales remain concentrated in a very small number of trading companies and larger farmers. WFP instituted a program of direct procurement from small farmers; assessments suggest that this approach is expensive, time-consuming, and unreliable, and has little developmental impact. Food quality is a risk of LRP: in Kenya, at least two documented cases of aflatoxin poisoning from infected corn resulted in dozens of deaths. Study: The United States' International Food Assistance Programs: Issues and Options for the 2007 Farm Bill, Christopher B. Barrett, February 2007.
Year: 2007 Country: United States Commodity: Not applicable; general discussion of U.S. food aid On average, LRP is 66 percent cheaper across all commodities than direct purchase. Thirty-six percent of U.S. food aid shipments to Ethiopia, Kenya, and Tanzania from 1998 to 2002 cost less than comparable local market purchases. The study also discusses the timeliness of LRP versus direct shipment. Local and regional purchases are not always simple, available, or effective everywhere. Some markets are too thin to absorb a significant increase in commercial food demand without driving up prices. Quality control, transport capacity, and trader market power limit donors' procurement options. Even taking freight and administrative costs into account, it is sometimes cheaper to import food aid from the United States. Legislative restrictions on food aid programs result in added costs, delayed deliveries, and reduced cultural appropriateness of commodities. These costs are attributable to restrictions placed on food aid with respect to shipping, bagging, and processing. These restrictions include the tying of food aid to domestic procurement of commodities, minimum volumes, minimum nonemergency volumes, value-added minimums, bagging minimums, cargo preference, restrictions on use of ports, monetization requirements, and overhead reimbursement for operational agencies. Study: The Development Effectiveness of Food Aid: Does Tying Matter? Organization for Economic Cooperation and Development, 2006. Year: 2002-2003 Country: Various donating and recipient countries Commodity: Wheat, corn, corn-soy blend, vegetable oil, and rice Analysis of food aid transactions by a representative group of 16 donors and 15 selected recipient countries. The study looked at resource transfer efficiency (RTE) by comparing the cost of direct aid transfers with the hypothetical cost of an alternative commercial transaction (ACT).
The actual cost of direct transfers was on average 50 percent more than local food purchases and 33 percent more than food procured in third countries. The range of difference in costs varies widely among donors, commodities, modes of transport, and destinations, from 10 percent below to 55 percent higher than the cost of alternative commercial imports. While LRP generally cost the least, its cost-effectiveness varied widely. LRP in Africa (Ethiopia, Malawi, Zambia, and Kenya) appeared to cost the least. LRP in India, Jordan, and Mauritania cost more than LRP in Africa. The highest costs for LRP were in Haiti. The comparison includes international transport costs to the same destination and overland transport costs to the point of border entry for landlocked countries. The comparison does not include internal transport from ports or borders to the point of distribution, handling, and/or internal storage. Calculations do not account for the transaction costs of organizing and importing food products. The ACT equates to the import parity price (IPP); therefore, local purchase would not be efficient if its overall cost exceeded the IPP, and LRP costs would be expected to be less than the IPP, making LRP the least-cost alternative. For purposes of the study, all direct transfers of food aid were treated as de facto "tied." The following are GAO's comments on the U.S. Agency for International Development letter dated May 15, 2009. 1. Our recommendation to systematically collect evidence on LRP's adherence to quality standards and product specifications does not preclude such collaboration as part of efforts, consistent with our 2007 recommendation, to develop a coordinated interagency mechanism to update food aid specifications and products to improve food quality and nutritional standards. We agree with USAID that including actions to collect evidence on LRP's adherence to quality will make ongoing efforts to improve food quality more efficient. 2.
In providing information on agencies' interpretations of cargo preference requirements as they pertain to LRP, we sought to identify areas of ambiguity where agencies disagree on the applicability of these requirements. We did not attempt to adjudicate the differences in interpretation among the agencies involved. However, in technical comments on a draft of this report, DOT changed its position regarding thresholds and now concurs with USAID's interpretation, thus eliminating this issue as an area of ambiguity. This is reflected in the final report. 3. See comment 2. 4. We modified text to reflect USAID's agreement with DOT's definition of vessel type. The following are GAO's comments on the U.S. Department of Agriculture's letter dated May 15, 2009. 1. To obtain an overall picture of costs, we worked to ensure that we had the largest number of procurement transactions over the longest possible time period for which we had data, so some aggregation was required. We acknowledge the variations in the cost differentials in figure 4, which provides the range of differences between USAID and WFP local procurement in sub-Saharan Africa. Our analysis demonstrated consistent results across 8 years of data. For example, 95 percent of local purchases in sub-Saharan Africa cost less than USAID commodities procured for the same country at around the same time. We did not differentiate the prepositioned commodities in the cost comparison, but they were included in our data. However, they represented a small part of U.S. food aid during the period of time that we examined. 2. The issue of quality is one that many WFP procurement officers and others we interviewed identified as a challenge for LRP. However, the lack of systematically collected data makes it difficult to objectively analyze how LRPs adhere to quality standards and product specifications and whether LRP differs in quality from U.S. commodities.
Our first recommendation addresses the issue of quality, which would also include improving nutritional standards. 3. We added information to clarify MARAD's role as the agency that determines "fair and reasonable rates" but note that DOT interprets its role as the sole agency responsible for determining U.S.-flag availability. 4. While we recognize that there is no widespread evidence of LRP causing adverse impacts in markets, we believe that there is a preponderance of information showing that many developing countries lack reliable market information. Widespread evidence of any impacts, adverse or otherwise, will not become available in many countries until market intelligence systems are made more reliable and widely used. Therefore, it is important to focus on the potential risk of adverse impacts on markets in areas where LRP is practiced. The following are GAO's comments on the U.S. Department of Transportation's letter received May 22, 2009. 1. Although further analysis of LRP practices would be useful, we believe that the results of our analysis demonstrate consistent results over 8 years of data, with 95 percent of local purchases in sub-Saharan Africa costing less than USAID commodities to the same country at around the same time. Although we did not differentiate prepositioned commodities in our cost comparison, they were included in our data analysis. However, it is important to note that prepositioned commodities were a very small part of U.S. food aid during this time period. Nonetheless, we recognize that prepositioning can affect delivery time, as in the Tajikistan example, where prepositioned food from Jacintoport shortened delivery time. Additionally, DOT questioned an illustrative example we used in this report on potential cost savings in purchasing wheat in Ethiopia because it believed the country had a severe shortage in 2002.
Although there may be limited capacity for local procurement, disasters are often localized, and there may be surplus regions within the country or in nearby countries. This is precisely a rationale for LRP. In fact, WFP purchased 74,000 metric tons of wheat locally in Ethiopia in the last quarter of 2002, and the average price was lower than that of wheat procured from the United States. 2. Ocean shipping is one of the many stages in food aid procurement and delivery. While DOT found that the ocean transit time from a prepositioning site averaged only 24.5 days, trans-Atlantic shipping, which accounts for the majority of U.S. food aid to sub-Saharan Africa, takes longer. Therefore, the ocean transit time DOT provided in its letter does not represent the typical U.S. food aid delivery time. In addition, other stages of food procurement and delivery add time to the entire process. In order to make a fair comparison of delivery time among various procurement types and to ensure comparability in the procurement and delivery stages, we identified countries that had received significant amounts of LRP and international food aid from WFP. Although a breakdown of the different elements of delivery time might be useful (which we could not produce from the data provided to us by WFP), it does not change our finding that LRP to these countries took less time than international food aid. 3. Although DOT does implement cargo preference statutes through regulation, the regulations often contain ambiguities that have required resolution through an MOU. Our report describes new ambiguities that could arise in applying cargo preference in the context of regional procurement. We believe that these ambiguities need to be resolved, and can be resolved, by updating the MOU. Further, there is no requirement that establishing regulation precede an MOU, nor does an MOU preclude the issuance of new regulation.
The updated MOU, establishing consensus among the relevant agencies, could be reflected in any future regulation that DOT may draft and finalize through the rulemaking process. The following are GAO's comments on the World Food Program's letter dated May 15, 2009. 1. The issue of quality is one area that many WFP procurement officers we spoke with mentioned as a challenge in local and regional procurement. In addition, quality is an area of concern expressed by organizations such as U.S. Wheat Associates. However, the lack of systematically collected data makes it difficult to objectively analyze how LRPs adhere to quality standards and product specifications. Our first recommendation addresses this issue. 2. In our report, we explain several of the efforts that WFP and others have undertaken to significantly improve the availability and reliability of market intelligence in developing countries. Yet, as WFP's own documents state, in many low-income countries national market intelligence systems are weak and unreliable, and timely data are not always available, which may limit the effectiveness of WFP's market intelligence efforts. 3. We modified the text, adding language to explain that the use of import parity prices to determine when to switch from local procurement to regional or international procurement may be constrained. Specifically, in some countries, commodity prices may be so much lower than import parity prices that it would take substantial price increases to reach the import parity price threshold. 4. We recognize that WFP's market position in many countries is very small (less than 1 percent in Burkina Faso, for example), and we state in the report that this limits the effects that LRP can have on prices.
Also, recognizing that it is difficult to demonstrate an absolute causal relationship between a discrete WFP local purchase and a discrete price increase, we note that LRP, when combined with other market interventions, unreliable market intelligence, poorly functioning and unintegrated markets, and other factors, has the potential to cause price hikes and reduce consumers' access to food. Therefore, we recommend improving the reliability and utility of market intelligence in order to guard against the risks associated with a lack of reliable market information. In addition to the person named above, Phillip Thomas (Assistant Director), Sada Aksartova, Kathryn Bernet, Carol Bray, Ming Chen, Debbie Chung, Lynn Cothern, Martin De Alteriis, Mark Dowling, Etana Finkler, Katrina Greaves, Kendall Helm, Joy Labez, Andrea Miller, Julia A. Roberts, Jerry Sandau, and David Schneider made key contributions to this report. International Food Security: Insufficient Efforts by Host Governments and Donors Threaten Progress to Halve Hunger in Sub-Saharan Africa by 2015. GAO-08-680. Washington, D.C.: May 29, 2008. Foreign Assistance: Various Challenges Limit the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-905T. Washington, D.C.: May 24, 2007. Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: Apr. 13, 2007. Foreign Assistance: U.S. Agencies Face Challenges to Improving the Efficiency and Effectiveness of Food Aid. GAO-07-616T. Washington, D.C.: Mar. 21, 2007. Darfur Crisis: Progress in Aid and Peace Monitoring Threatened by Ongoing Violence and Operational Challenges. GAO-07-9. Washington, D.C.: Nov. 9, 2006. Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan's Stability. GAO-03-607. Washington, D.C.: June 30, 2003. Foreign Assistance: Sustained Efforts Needed to Help Southern Africa Recover from Food Crisis. GAO-03-644.
Washington, D.C.: June 25, 2003. Food Aid: Experience of U.S. Programs Suggest Opportunities for Improvement. GAO-02-801T. Washington, D.C.: June 4, 2002. Foreign Assistance: Global Food for Education Initiative Faces Challenges for Successful Implementation. GAO-02-328. Washington, D.C.: Feb. 28, 2002. Foreign Assistance: U.S. Food Aid Program to Russia Had Weak Internal Controls. GAO/NSIAD/AIMD-00-329. Washington, D.C.: Sept. 29, 2000. Foreign Assistance: U.S. Bilateral Food Assistance to North Korea Had Mixed Results. GAO/NSIAD-00-175. Washington, D.C.: June 15, 2000. Foreign Assistance: Donation of U.S. Planting Seed to Russia in 1999 Had Weaknesses. GAO/NSIAD-00-91. Washington, D.C.: Mar. 9, 2000. Foreign Assistance: North Korea Restricts Food Aid Monitoring. GAO/NSIAD-00-35. Washington, D.C.: Oct. 8, 1999. Food Security: Factors That Could Affect Progress toward Meeting World Food Summit Goals. GAO/NSIAD-99-15. Washington, D.C.: Mar. 22, 1999. Food Security: Preparations for the 1996 World Food Summit. GAO/NSIAD-97-44. Washington, D.C.: Nov. 7, 1996.

While the U.S. approach of providing in-kind food aid has assisted millions of hungry people for more than 50 years, in 2007 GAO reported limitations to its efficiency and effectiveness. To improve U.S. food assistance, Congress has authorized some funding for local and regional procurement (LRP)--donors' purchase of food aid in countries affected by food crises or in a country within the same region. Through analysis of agency documents, interviews with agency officials, experts, and practitioners, and fieldwork in four African countries, this requested report examines (1) LRP's impact on the efficiency of food aid delivery; (2) its impact on economies where food is procured; and (3) U.S. legal requirements that could affect agencies' use of LRP. LRP offers donors a tool to reduce food aid costs and delivery time, but multiple challenges to ensuring cost-savings and timely delivery exist.
GAO found that local procurement in sub-Saharan Africa cost about 34 percent less than similar in-kind food aid purchased and shipped from the United States to the same countries between 2001 and 2008. However, LRP does not always offer cost-savings potential. GAO found that LRP in Latin America is comparable in cost to U.S. in-kind food aid. According to World Food Program (WFP) data, from 2004 to 2008, in-kind international food aid delivery to 10 sub-Saharan African countries took an average of 147 days, while local procurement only took about 35 days and regional about 41 days. Donors face challenges with LRP, including (1) insufficient logistics capacity that can contribute to delays in delivery, (2) donor funding restrictions, and (3) weak legal systems that can limit buyers' ability to enforce contracts. Although LRP may have the added benefit of providing food that may be more culturally appropriate to recipients, evidence has yet to be systematically collected on LRP's adherence to quality standards and product specifications, which ensure food safety and nutritional content. LRP has the potential to make food more costly to consumers in areas where food is procured by increasing demand and driving up prices, but steps can be taken to reduce these risks. As GAO's review of WFP market analyses and interviews with WFP procurement officers confirmed, a lack of accurate market intelligence, such as production levels, makes it difficult to determine the extent to which LRP can be scaled up without causing adverse market impacts. Although LRP does have the potential to support local economies, for example by raising farmers' incomes, data to demonstrate that these benefits are sustainable in the long term are lacking. U.S. legal requirements to procure U.S.-grown agricultural commodities for food aid and to transport up to 75 percent of those commodities on U.S.-flag vessels may constrain agencies' use of LRP. 
Although Congress has appropriated funding for some LRP, agencies disagree on the applicability of certain cargo preference provisions to LRP food aid that may require ocean shipping. The 1987 interagency MOU that governs the administration of cargo preference requirements and could clarify areas of disagreement among the agencies is outdated and does not address the issues arising from LRP.
The demands on judges' time are largely a function of both the number and complexity of the cases on their dockets. To measure the case-related workload of district court judges, the Judicial Conference has adopted weighted case filings. The purpose of the district court case weights was to create a measure of the average judge time that a specific number and mix of cases filed in a district court would require. Importantly, the weights were designed to be descriptive, not prescriptive; that is, they were designed to measure the national average amount of time that judges actually spent on specific cases, not how much time judges should spend on various types of cases. Moreover, the weights were designed to measure only case-related workload. Judges have noncase-related duties and responsibilities, such as administrative tasks, that are not reflected in the case weights. With few exceptions, such as cases that are remanded to a district court from the court of appeals, each civil or criminal case filed in a district court is assigned a case weight. For example, in the 2004 case weights, which are still in use, drug possession cases are weighted at 0.86, while civil copyright and trademark cases are weighted at 2.12. The total annual weighted filings for a district are determined by summing the case weights associated with all the cases filed in the district during the year. Weighted case filings per authorized judgeship are the total annual weighted filings divided by the total number of authorized judgeships. The Judicial Conference uses weighted filings of 430 or more per authorized judgeship as an indication that a district may need additional judgeships. Thus, for example, a district with 460 weighted filings per authorized judgeship, including newly requested judgeships, could be considered for an additional judgeship.
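The weighted filings arithmetic described above can be sketched in a few lines of code. This is an illustrative calculation only, not the judiciary's actual software: the two case weights come from the 2004 weights cited in the text, while the district, the filing counts, and the function names are hypothetical.

```python
# Illustrative sketch of weighted case filings per authorized judgeship.
# The two weights below are the 2004 district court weights cited in the
# text; all filing counts are hypothetical examples.

CASE_WEIGHTS = {
    "drug_possession": 0.86,        # 2004 weight cited in the text
    "copyright_trademark": 2.12,    # 2004 weight cited in the text
}

THRESHOLD = 430  # weighted filings per judgeship indicating possible need


def weighted_filings(filings_by_type):
    """Sum the weight of every case filed in the district during the year."""
    return sum(CASE_WEIGHTS[case_type] * count
               for case_type, count in filings_by_type.items())


def filings_per_judgeship(filings_by_type, authorized_judgeships):
    """Total annual weighted filings divided by authorized judgeships."""
    return weighted_filings(filings_by_type) / authorized_judgeships


# Hypothetical district: 3,000 drug possession cases, 1,000 civil
# copyright/trademark cases, and 10 authorized judgeships.
per_judge = filings_per_judgeship(
    {"drug_possession": 3000, "copyright_trademark": 1000}, 10)
print(round(per_judge, 1))        # 470.0 weighted filings per judgeship
print(per_judge >= THRESHOLD)     # True: district may be considered
```

Because the weights are descriptive averages of judge time, the resulting figure is a workload indicator, not a prescription for how judges should allocate their time.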
However, the Judicial Conference does not consider a district for additional judgeships, regardless of its weighted case filings, if the district does not request any additional judgeships. In our 2003 report, we found the district court case weights approved in 1993 to be a reasonably accurate measure of the average time demands that a specific number and mix of cases filed in a district court could be expected to place on the district judges in that court. The methodology used to develop the weights employed a valid sampling procedure, developed weights based on actual case-related time recorded by judges from case filing to disposition, and included a measure (standard errors) of the statistical confidence in the final weight for each weighted case type. Without such a measure, it is not possible to objectively assess the accuracy of the final case weights. At the time of our 2003 report, the Subcommittee on Judicial Statistics of the Judicial Conference's Judicial Resources Committee had approved the research design for revising the 1993 case weights, with a goal of having new weights submitted to the Judicial Resources Committee for review in the summer of 2004. The design for the new case weights relied on three sources of data for specific types of cases: (1) data from automated databases identifying the docketed events associated with the cases; (2) data from automated sources on the time associated with courtroom events for cases, such as trials or hearings; and (3) consensus time estimates from structured, guided discussions among experienced judges on the time associated with noncourtroom events for cases, such as reading briefs or writing opinions. As we reported in 2009, according to FJC, the subcommittee wanted a study that could produce case weights in a relatively short period of time without imposing a substantial record-keeping burden on district judges.
FJC staff provided the subcommittee with information about various approaches to case weighting, and the subcommittee chose an event-based method, that is, a method that used data on the number and types of events, such as trials and other evidentiary hearings, in a case. The design did not involve the type of time study that was used to develop the 1993 case weights. Although the proposed methodology appeared to offer the benefits of reduced judicial burden (no time study data collection), potential cost savings, and reduced calendar time to develop the new weights, we had two areas of concern: the challenge of obtaining reliable, comparable data from two different data systems for the analysis, and the limited collection of actual data on the time judges spend on cases. First, the design assumed that judicial time spent on a given case could be accurately estimated by viewing the case as a set of individual tasks or events. Information about event frequencies and, where available, time spent on the events would be extracted from existing administrative databases and reports and used to develop estimates of the judge time spent on different types of cases. For event data, the research design proposed using data from two databases (one of which was new in 2003 and had not been implemented in all district courts) that would have to be integrated to obtain and analyze the event data. FJC proposed creating a technical advisory group to address this issue. In August 2013, FJC officials told us that the process of integrating the two data systems, though labor-intensive, was successful and resulted in accurate data. However, we have not reviewed the integration process for the two data systems, so we cannot determine the effectiveness of this process or whether accurate data resulted. Second, we reported that the research design did not require judges to record time spent on individual cases, as was done for the 1993 case weights.
Actual time data would be limited to that available from existing databases and reports on the time associated with certain courtroom events and proceedings for different types of cases. However, a majority of district judges’ time is spent on case-related work outside the courtroom. The time required for noncourtroom events—and some courtroom events that did not have actual time data available—would be derived from structured, guided discussion of groups of 8 to 13 experienced district court judges in each of the 12 regional circuits (about 100 judges in all). The judges would develop estimates of the time required for different events in different types of cases within each circuit using FJC-developed “default values” as the reference point for developing their estimates. These default values would be based in part on the existing case weights and, in part, on other types of analyses. Following the meetings of the judges in each circuit, a national group of 24 judges (2 from each circuit) would consider the data from the 12 circuit groups and develop the new weights. The accuracy of judges’ time estimates is dependent upon the experience and knowledge of the participating judges and the accuracy and reliability of the judges’ recall about the average time required for different events in different types of cases—about 150 case types, if all those in the 1993 case weights were used. In 2003, we found that these consensus data could not be used to calculate statistical measures of the accuracy of the resulting case weights. Thus, we concluded that the planned methodology did not make it possible to objectively, statistically assess how accurate the new case weights are—weights whose accuracy the Judicial Conference relies upon in assessing judgeship needs.
In August 2013, AOUSC officials stated that, since 2005, for purposes of determining the need for an additional authorized judgeship, a district’s weighted case filings per authorized judgeship is calculated by including the potential additional judgeship. For example, if a district had total weighted filings of 4,600 and 9 authorized judgeships, and it planned to request 1 additional judgeship, its weighted filings per authorized judgeship, for purposes of the judgeship request process, would be 460. Without including the potential additional judgeship in the calculation, the weighted case filings would be about 511. AOUSC officials stated in August 2013 that the judiciary adopted the proposed methodology in 2004 and does not have plans to update the 2004 district court case weights. In 2003, we found that the principal quantitative measure the Judicial Conference used to assess the need for additional courts of appeals judgeships was adjusted case filings. The measure is based on data available from standard statistical reports for the courts of appeals. The adjusted filings workload measure is not based on any empirical data regarding the time that different types of cases required of appellate judges. The Judicial Conference’s policy is that courts of appeals with adjusted case filings of 500 or more per 3-judge panel may be considered for 1 or more additional judgeships. Courts of appeals generally decide cases using constantly rotating 3-judge panels. Thus, if a court had 12 authorized judgeships, those judges could be assigned to four panels of 3 judges each. In assessing judgeship needs for the courts of appeals, the conference may also consider factors other than adjusted filings, such as the geography of the circuit or the median time from case filings to disposition. Essentially, the adjusted case filings workload measure counts all case filings equally, with two exceptions. 
First, cases refiled and approved for reinstatement are excluded from total case filings. Second, pro se cases are weighted at 0.33, or one-third as much as other cases, which are weighted at 1.0. For example, a court with 600 total pro se case filings in a year would be credited with 198 adjusted pro se case filings (600 x 0.33). Thus, a court of appeals with 1,600 filings (excluding reinstatements)—600 pro se cases and 1,000 non-pro se cases—would be credited with 1,198 adjusted case filings (198 discounted pro se cases plus 1,000 non-pro se cases). If this court had 6 judges (allowing two panels of 3 judges each), it would have 599 adjusted case filings per 3-judge panel, and, thus, under Judicial Conference policy, could be considered for an additional judgeship. The current court of appeals workload measure, which, AOUSC officials stated, was adopted in 1996, represents an effort to improve the previous measure. In our 1993 report on judgeship needs assessment, we found that the restraint of individual courts of appeals, not the workload standards, seemed to have determined the actual number of appellate judgeships the Judicial Conference requested. At the time the current measure was developed and approved, using the new benchmark of 500 adjusted case filings resulted in judgeship numbers that closely approximated the judgeship needs of the majority of the courts of appeals, as the judges of each court perceived them. The current courts of appeals case-related workload measure principally reflects a policy decision using historical data on filings and terminations. It is not based on empirical data regarding the judge time that different types of cases may require. On the basis of the documentation we reviewed for our 2003 report, we determined that there was no empirical basis for assessing the potential accuracy of adjusted case filings as a measure of case-related judge workload.
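The two workload computations described above can be sketched in a few lines, using the example figures from the text; the function names are ours, not the judiciary's:

```python
# Sketch of the two workload measures, using the report's example figures.

# District courts: weighted case filings per authorized judgeship.
# Since 2005, any requested additional judgeship is included in the divisor.
def weighted_per_judgeship(total_weighted, authorized, requested=0):
    return total_weighted / (authorized + requested)

print(weighted_per_judgeship(4600, 9, requested=1))  # 460.0, with the request
print(round(weighted_per_judgeship(4600, 9)))        # 511, without it

# Courts of appeals: adjusted case filings per 3-judge panel.
# Reinstated cases are excluded from the totals passed in; pro se cases
# count at one-third weight, all other cases at full weight.
PRO_SE_WEIGHT = 0.33

def adjusted_per_panel(pro_se, non_pro_se, judgeships):
    adjusted = pro_se * PRO_SE_WEIGHT + non_pro_se
    panels = judgeships / 3
    return adjusted / panels

per_panel = adjusted_per_panel(600, 1000, judgeships=6)
print(per_panel, per_panel >= 500)  # 599.0, meets the 500-filing benchmark
```

The 430 (district) and 500 (appeals) thresholds are indicators only; as the text notes, the Judicial Conference also weighs other factors and acts only on districts that request judgeships.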
In our 2003 report, we recommended that the Judicial Conference of the United States update the district court case weights using a methodology that supports an objective, statistically reliable means of calculating the accuracy of the resulting weights, and develop a methodology for measuring the case-related workload of courts of appeals judges that supports an objective, statistically reliable means of calculating the accuracy of the resulting workload measures and that addresses the special case characteristics of the Court of Appeals for the D.C. Circuit. Neither of these recommendations has been implemented, and in August 2013, AOUSC officials stated that the judiciary does not have plans to update the 2004 district court case weights or the 1996 court of appeals adjusted filings weights. With regard to our 2003 recommendation for updating the district court case weights, we reported that FJC agreed that the method used to develop the new case weights would not permit the calculation of standard errors, but that other methods could be used to assess the integrity of the resulting case weight system. In response, we noted that the Delphi technique to be used for developing out-of-court time estimates was most appropriate when more precise analytical techniques were not feasible and the issue could benefit from subjective judgments on a collective basis. More precise techniques were available for developing the new case weights and were to be used for developing new bankruptcy court case weights. In our 2003 report, we also concluded that the methodology the Judicial Conference decided to begin in June 2002 for the revision of the bankruptcy case weights offered an approach that could be usefully adopted for the revision of the district court case weights. The bankruptcy court methodology used a two-phased approach. 
First, new case weights were to be developed based on time data recorded by bankruptcy judges for a period of weeks—a methodology very similar to that used to develop the bankruptcy case weights that existed in 2003 at the time of our report. The accuracy of the new case weights could be assessed using standard errors. The second phase was experimental research to determine whether it would be possible to make future revisions of the weights without conducting a time study. The data from the time study could be used to validate the feasibility of this approach. If the research determined that this was possible, the case weights could be updated more frequently and at less cost than a time study would require. We concluded in 2003 that this approach could provide (1) more accurate weighted case filings than the design developed and used for the development of the 2004 district court case weights, and (2) a sounder method of developing and testing the accuracy of case weights that were developed without a time study. However, we have not reviewed the effectiveness of this methodology or confirmed whether the judiciary implemented this approach. With regard to our recommendation on improving the case-related workload measure for the courts of appeals, the Chair of the Committee on Judicial Resources commented that the workload of the courts of appeals entails important factors that have defied measurement, including significant differences in case-processing techniques. We recognized that there were significant methodological challenges in developing a more precise workload measure for the courts of appeals. However, we stated that using the data available, neither we nor the Judicial Conference could have assessed the accuracy of adjusted case filings as a measure of the case-related workload of courts of appeals judges.
The Ranking Member of the Subcommittee on Bankruptcy and the Courts has requested that we conduct a full review of the case-related workload measures for district court and courts of appeals judges, including a follow-up on our 2003 recommendations. Such a review will allow us to evaluate the judiciary’s methodology and efforts over the last 10 years. Mr. Chairman, this concludes my statement for the record. For further information about this statement, please contact David C. Maurer, Director, Homeland Security and Justice Issues, on (202) 512-9627 or [email protected]. In addition to the contact named above, the following individuals also made major contributions to this testimony: Chris Currie, Acting Director; David P. Alexander, Assistant Director; Brendan Kretzschmar; Jean M. Orland; Rebecca Kuhlmann Taylor; and Janet G. Temko. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A statutory license, also called a compulsory license, permits the use of copyright-protected material without the express permission of the copyright owner under specific circumstances and provided the licensee meets certain requirements. Three statutory licenses pertaining to the retransmission of broadcast programming are codified in U.S. copyright law. These statutory licenses apply to cable operators (the section 111 license) and satellite carriers (the section 119 and 122 licenses). These licenses, as described in figure 1 below, allow cable or satellite operators to retransmit broadcast programming without obtaining permission from the copyright owners of that material. Certain telecommunication companies, such as Verizon and AT&T—which in the past provided telephone service but now offer video services as well—were determined by the U.S. Copyright Office to function as cable operators and were allowed to use the statutory licenses. Given this report’s focus on the statutory licenses, we use the term “cable and satellite operators” to refer to all those entities that use the statutory licenses. This term includes telecommunications companies, such as Verizon and AT&T. According to the U.S. Copyright Office, the statutory licenses supported the growth of the cable and satellite industries and facilitated the delivery of local broadcast television stations’ programming on these platforms. The section 111 license was enacted, in part, to reduce the transaction costs that a then-nascent cable industry would have faced if cable operators were required to negotiate with every copyright owner whose work was embedded in a local broadcast television station’s signal. The section 119 and 122 licenses were extended to satellite operators to reduce transaction costs and provide the same general efficiencies offered to cable operators under the section 111 license.
Transaction costs are a concern because each television program may contain material from multiple copyright owners. Since a typical day of programming on a local broadcast television station would likely include 20 or more programs, hundreds of copyright owners may have royalty claims on a single day’s worth of programming. Under the statutory licenses, the U.S. Copyright Office collects copyright royalty fees and invests them in government securities until fees are allocated and distributed to copyright owners. Under the Copyright Act, the Copyright Royalty Judges are responsible for determining the distribution of royalties and adjudicating royalty claim disputes. In June 2015, the Copyright Royalty Judges granted a partial distribution of the 2013 cable and satellite royalty funds to the claimant groups listed in table 1. Copyright owners have historically submitted copyright claims through the claimant groups shown in table 1. These groups then allocate their share of the distribution to their group members. Must-Carry: Each cable operator’s obligation to carry all stations within its designated market area is dependent upon its total capacity; however, the capacity of most modern cable systems has rendered these distinctions largely meaningless. Cable operators may not require payment from local stations for must-carry carriage, except for costs associated with delivering a good quality signal for transmission and increased costs relating to distant signal copyright indemnification (see 47 C.F.R. § 76.60). Retransmission Consent: Commercial local broadcast television stations hold a property right in their signals. This property right is distinct from the right to perform copyright-protected material embedded in the broadcast signal. Retransmission consent applies only to commercial local broadcast television stations and allows them to grant permission to cable and satellite operators to retransmit their signals, usually in return for a negotiated payment. By opting for retransmission consent, commercial local broadcast television stations give up the guarantee of carriage in exchange for the right to negotiate compensation for carriage of their signal.
SNL Kagan, a media research firm, estimates these fees at $6.3 billion in 2015 and projects that they may reach $10.3 billion by 2021. Broadcast Exclusivity Rules: The broadcast exclusivity rules—the syndicated exclusivity rule and the network non-duplication rule—are an administrative mechanism for local broadcast television stations to enforce their exclusive rights obtained through contracts with broadcast networks and syndicators. The syndicated exclusivity rule protects a local broadcast television station’s right to be the exclusive provider of syndicated programming in its market. Similarly, the network non-duplication rule protects a local broadcast television station’s right to be the exclusive provider of network programming in its market. The exclusivity rules—when invoked by local broadcast television stations—require cable and satellite operators to block, in some manner, duplicative content carried by a local broadcast television station in another market (a distant signal) imported into a station’s local market. For example, these rules allow WJZ, the CBS-affiliated local broadcast station in Baltimore, to prohibit cable operators from showing duplicative network content on a CBS-affiliated station from another market (e.g., WUSA, the CBS-affiliated local broadcast station in Washington, D.C.) in the event WUSA was imported into Baltimore. Similarly, the rules would allow WJZ to prohibit cable operators from showing any duplicated syndicated content from any other market’s station that is imported into Baltimore. Video programming, or video content, is the television programs watched by viewers including primetime shows, news, and movies. The flow of this content from development to distribution involves many entities, including those that create the video content, those that aggregate the content into a schedule, and those that distribute the content to viewers (see fig. 2).
A single entity in the video marketplace may produce content, aggregate it, and distribute the content. For example, CBS is primarily known as a broadcast network; however, CBS also produces its own video content, through CBS Television Studios. It also owns several CBS-affiliated local broadcast television stations. CBS also functions as an online video distributor, through its subscription online streaming service, CBS All Access—which offers live streams of local CBS-affiliated broadcast television stations as well as video content on an on-demand basis. Before video content can be watched by viewers, financial and contractual arrangements between those that produce, aggregate, and distribute video content must be made—through market-based negotiations or, under certain circumstances, through the statutory licenses. Market-based negotiations involve private negotiations between copyright owners or licensees and those that want to use their copyrighted content. These negotiations can include a variety of contracted terms covering such items as when the content will be aired, how the content will be promoted, and the price that will be paid for the right to license that content. The price ultimately agreed to for the public performance rights of video content in market-based negotiations depends on a variety of factors, including the expected ratings and associated advertising revenues for the content. Content producers, such as Paramount Studios and the National Football League, create the video content viewers ultimately will watch. Content producers may license their public performance rights over a variety of distribution platforms that include: Online distribution—video content is streamed online to any Internet-connected device. Requires the licensing of online rights. Primary transmission—the over-the-air broadcast of content by a local broadcast television station.
Requires the licensing of primary rights and only applies to content shown by local broadcast television stations. Through-to-viewer distribution—the content on a cable network is distributed by a cable or satellite operator to its subscribers’ television sets. Requires the licensing of through-to-viewer rights. Video-on-demand distribution—cable and satellite operators offer individual programs to their subscribers for free or for a nominal fee to watch at a time of the viewer’s choosing. Requires the licensing of video-on-demand rights. Content producers do not explicitly license secondary transmission rights for broadcast content that is retransmitted by cable and satellite operators to viewers’ television sets. As discussed above, the statutory licenses allow these operators to carry copyrighted programming without licensing those rights from the individual copyright owners of that content. As reported by the U.S. Copyright Office, most types of public performance rights are licensed by private parties through market-based negotiations. Content producers are paid for the public performance rights of their content in several ways. They receive licensing fees from broadcast networks and other “content aggregators” or “content distributors” like online video distributors that license the right to use their content on online platforms. Content producers can also receive licensing fees for selling syndicated programming to local broadcast television stations. Additionally, content producers may receive royalty fees paid by cable and satellite operators to the U.S. Copyright Office through the statutory licenses. Content aggregators—typically broadcast networks, local broadcast television stations, and cable networks, as described in more detail below—are those that purchase the rights to a variety of copyrighted content, which they arrange into a schedule for viewers.
Content aggregators license the public performance rights for content by paying a fee to the producers of the content. Broadcast networks range from major commercial networks, such as ABC, CBS, FOX, and NBC, to other commercial networks, such as ION Television and Univision. Broadcast networks purchase the rights for programs, arrange programs into a schedule, and then through network-affiliate agreements convey these rights to their affiliates. Broadcast networks receive payments through affiliate fees, as discussed below, and advertising revenues. Commercial local broadcast television stations—both network-affiliated and independent stations (e.g., WJLA, an ABC affiliate in Washington, DC, and KUBE, an independent station in Houston, TX)—may own the rights to transmit local content (e.g., local news) or license from content producers the rights to transmit syndicated content (e.g., Seinfeld reruns) in their local markets. They receive revenue from selling advertising spots and from cable and satellite operators via retransmission consent fees, if elected. Network-affiliated local broadcast television stations pay affiliate fees to broadcast networks in exchange for the right to air broadcast network content. Cable networks (e.g., ESPN and HBO), similar to broadcast networks, can obtain video content for their networks from content producers through market-based negotiations. However, instead of licensing their network content to an affiliate, cable networks license their signal to a cable or satellite operator for transmission. They receive revenue through license fees, advertising, and, in some cases, subscription fees directly from viewers. Content distributors are those entities that distribute video content to households. Local broadcast television stations, cable and satellite operators, and online video distributors all distribute video content.
Commercial local broadcast television stations—like KDFW, the FOX affiliate in Dallas, TX—transmit their signals over-the-air for free and are accessible by most households via antenna. Stations are not paid by viewers for the broadcast of their signals over-the-air. As discussed above, however, commercial local broadcast television stations can receive retransmission consent fees from cable and satellite operators for the secondary transmission of their signals as well as advertising revenue. Cable and satellite operators, such as Time Warner Cable and DISH Network, distribute video content, including the signals of local broadcast television stations and cable networks, to viewers for a subscription fee. Cable and satellite operators obtain the right to retransmit local broadcast television stations’ signals either through a local station’s assertion of must-carry or carry-one, carry-all, or through retransmission consent negotiations. As discussed above, cable and satellite operators that rely on the statutory licenses do not have to obtain the rights to retransmit the content embedded in a local broadcast television station’s signal. Cable operators carry an estimated average of 14 local broadcast television stations in their channel lineups, around 10 of which, on average, are carried under the must-carry requirement. Cable and satellite operators receive revenue through subscription fees. They also receive revenue for advertising spots. Online video distributors (OVDs) provide video content to consumers through several business models, including on a subscription (e.g., Netflix) or an advertising-supported basis (e.g., Go90) through internet connections that can be provided by cable and satellite operators.
The video content includes programs available on-demand and, in some cases, as a live stream of a local broadcast television station or cable network with the same schedule of shows and aired at the same time as is offered over-the-air or through a cable or satellite operator. They receive payments through subscription fees or advertising revenue—depending on their business model. Consumers have several options through which to access video content. Consumers may watch local broadcast television stations on their television set using an antenna or they may watch local broadcast television stations and cable networks on their television set through a cable or satellite service subscription. Using an internet connection, consumers can watch video content on, for example, tablets, smartphones, and other mobile devices, through an existing subscription with a cable or satellite operator or using an OVD service. The video marketplace also includes noncommercial educational broadcast television stations, such as public television stations. Similar to commercial local broadcast television stations, these stations are available to viewers over-the-air and through secondary transmission by cable and satellite operators. As discussed earlier, unlike commercial local broadcast television stations, these stations typically cannot request a fee from cable and satellite operators for the retransmission of their signal; instead they request carriage by cable and satellite providers through the must-carry or carry-one, carry-all requirements. In addition, noncommercial educational stations are prohibited from accepting on-air advertisements. How viewers access and consume video content is changing. Over-the-air: According to FCC, the percentage of television households relying on over-the-air reception to watch local broadcast television stations has remained relatively steady. 
While the number of households that use over-the-air service increased from 11.2 million households in 2013 to 11.4 million households in 2014, reliance on over-the-air service by all U.S. television households remained the same at 9.8 percent. Cable and satellite video services: The percentage of U.S. television households that receive video content through cable and satellite services has declined even as the number of subscribers to these services has increased. Our analysis of industry data indicates that from 2010 to 2014, the total number of U.S. video service subscribers to cable, satellite, and telecommunication companies increased from 99.2 million to 99.6 million. Over that same period of time, cable operators lost subscribers, satellite operators maintained about the same number of subscribers, and telecommunication companies increased their number of subscribers. However, the percentage of U.S. television households that use cable and satellite subscriptions to receive video services declined from 85.4 percent to 83.9 percent over the 5-year period. Online video services: As of 2013, more than 53 million U.S. households watched video content online with at least one Internet-connected device. Thirty-nine of the 42 industry stakeholders we interviewed identified the growth of OVDs and increasing usage of online video services as a trend in the video marketplace over the last 5 years. SNL Kagan estimates that 4.9 percent of occupied U.S. households watched television programs or movies through OVDs without receiving cable and satellite service in 2013, compared to 3.9 percent in 2012.
Our analysis of FCC reports, industry reports, and stakeholder opinions identified several factors influencing the changes in the video marketplace and the consumption of video content, including: Programming costs: The rising cost of video content impacts how much each entity involved in the flow of video content—from content aggregators (e.g., CBS and ABC) to content distributors (e.g., Time Warner Cable and Netflix) to viewers—must pay to obtain the rights to distribute or view video content. SNL Kagan data show that cable and satellite operators’ programming expenses are increasing at a greater rate than their revenues. For example, cable and satellite operators’ programming expenses as a percent of video service revenue increased from 34.6 percent in 2006 to 44.6 percent in 2013. Twenty-one of the 42 stakeholders we interviewed identified rising programming costs as a trend in the video marketplace over the last 5 years. Affordability and flexibility: Industry reports we reviewed suggest that the emergence of low-cost alternatives to cable and satellite video services could accelerate viewers’ use of online services. Factors influencing this trend include pressures on disposable income for low-income cohorts of all ages, the desire to pay only for the content consumers want to watch, and viewers’ preference to watch content when they want, where they want, and on the device of their choosing. According to FCC and consistent with our analysis, the average monthly per-subscriber cost for an expanded basic service cable subscription increased from $54.44 in 2010 to $66.61 in 2014. This suggests that emerging options may provide consumers with greater flexibility to choose a service that more closely meets their desired price and content preferences. Generational viewing habits: According to several industry reports, younger consumers are more likely to be “cord-nevers” or “cord-cutters” than those in other age groups.
Subscriptions to cable and satellite services are lowest among households headed by younger adult consumers (e.g., 18 to 29 years old). However, it is unclear whether younger consumers who currently do not subscribe to cable and satellite video services will eventually do so. As reported by FCC, cable and satellite operators have developed a variety of competitive strategies to adapt to the changing marketplace. In response to the video marketplace changes mentioned above—changes in distribution platforms and viewing habits—cable and satellite operators, broadcast networks, and cable networks are increasingly offering video content online, including: TV Everywhere: Video services that allow subscribers to cable and satellite video services to access both linear and video-on-demand content on a variety of in-home and mobile Internet-connected devices. While not yet widespread, according to industry reports, the availability and use of TV Everywhere appear to be growing. For example, Comcast’s Xfinity TV Go allows viewers to access video content online, live and on-demand, through their existing cable subscription. Online linear programming: Some cable and satellite operators are also offering subscription-based online video services that allow subscribers to stream television channels over the Internet in real time. For example, DISH’s SlingTV is a subscription service that allows viewers to access certain television channels live or watch video-on-demand content via the Internet. Broadcast and cable network online video services: Broadcast networks are increasingly offering their content online, with video-on-demand offerings and streaming of their affiliated local broadcast television stations, through network websites and applications that are accessible via Internet-connected devices (i.e., computers, tablets, and smartphones).
For example, the CBS network offers CBS All Access, an online video service that provides access to live linear streams of local CBS affiliate broadcast programming over the Internet and video-on-demand content to consumers for a monthly subscription fee. Cable networks also offer similar online services. For example, HBO offers a standalone subscription service, HBO Now, which allows viewers to watch episodes of shows in real time as well as on-demand. Our analysis of recent changes in the video marketplace, as discussed above, indicates that the types of rights licensed have increased as online and video-on-demand distribution platforms have multiplied. Much of this video content—including content aired by local broadcast television stations—is licensed through private market-based negotiations. Figure 3, below, diagrams the parties and types of rights that can be licensed for different distribution platforms (e.g., over-the-air, video-on-demand, and online) for commercial video content. As shown in figure 3, it may be feasible to license the rights for the secondary transmission of broadcast content in a manner similar to the market-based negotiations used for other distribution platforms. The only rights in the video marketplace not licensed through market-based negotiations are the secondary transmission rights for broadcast content, as shown in the first panel of figure 3. All other rights depicted in figure 3 are licensed through market-based negotiations. The growth in online and video-on-demand content provides evidence that suggests the video marketplace can develop a market-based approach to license secondary transmission rights for broadcast content in the absence of the statutory licenses. 
Of the 42 stakeholders we interviewed, 21 play a role in licensing rights for the over-the-air (primary) transmission of broadcast content or secondary transmission of broadcast signals (e.g., negotiations with content producers for primary transmission rights, network-affiliate agreements, and retransmission consent negotiations). Twenty of these 21 stakeholders told us they also participate in the licensing of online and video-on-demand rights for broadcast content. This suggests that the same parties (e.g., broadcast networks, local broadcast television stations, and cable and satellite operators) that currently rely on the statutory licenses to facilitate the retransmission of broadcast content by cable and satellite operators already use market-based negotiations to license other public performance rights for broadcast content. In addition, 11 of the 14 stakeholders that participate in negotiations for primary transmission rights thought it was feasible to license secondary transmission rights when primary transmission rights are licensed. The emergence of live linear online video services that allow viewers to stream, via Internet-connected devices, the live signal of their local broadcast television station also illustrates the potential feasibility of licensing all public performance rights related to broadcast content using market-based negotiations. For example, CBS offers a direct-to-consumer online video service, CBS All Access. In another example, Sony’s PlayStation Vue allows viewers to stream live television channels, including local broadcast television station affiliates of ABC, CBS, FOX, and NBC in select cities.
Online video distributors (e.g., Sony PlayStation Vue or CBS All Access) must license the online rights for all the content on the local broadcast television station’s signal—including the broadcast network’s content (e.g., primetime dramas and comedies) and the local broadcast television station’s content (e.g., local news programs). The method by which these online rights are licensed varies by broadcast network and by online video distributor; however, the content they must license—without the benefit of the statutory licenses—is similar to the content covered by the statutory licenses. Therefore, it seems feasible that just as broadcast content is licensed for online viewing using market-based negotiations, it can also be licensed for viewing through cable and satellite operators using market-based negotiations and without the statutory licenses. Moreover, over the past 25 years, FCC and the U.S. Copyright Office have reported that the transaction costs that the statutory licenses were created to address may have become more manageable and that licensing secondary transmission rights in the absence of the statutory licenses could be feasible. Specifically, in its 1989 statutory licensing study, FCC reported that in the absence of the section 111 license, television stations would be able to acquire cable retransmission rights to “packages” of the programming they broadcast. Cable operators could then negotiate with a single entity—the broadcast station—for carriage rights to each package (e.g., sublicensing). Thus, cable and satellite operators would not have to license the rights to transmit each program with each copyright owner, minimizing the number of negotiations and subsequently the transaction costs.
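The transaction-cost logic behind FCC's sublicensing discussion above can be made concrete with a toy count of negotiations. All figures below are hypothetical, chosen purely for illustration, and are not marketplace data:

```python
# Toy comparison of how many negotiations each licensing model requires.
# All counts are hypothetical illustrations, not actual marketplace data.

operators = 50               # cable/satellite operators seeking carriage
stations = 20                # broadcast stations whose signals are carried
programs_per_station = 100   # distinct copyrighted programs per signal

# Direct licensing: every operator clears every program on every station's
# signal with that program's copyright owner.
direct = operators * stations * programs_per_station

# Sublicensing: each station aggregates retransmission rights for its own
# programming once, and each operator then negotiates once per station for
# the whole "package."
sublicense = stations * programs_per_station + operators * stations

print(f"Direct licensing: {direct:,} negotiations")
print(f"Sublicensing:     {sublicense:,} negotiations")
```

Under these assumed counts, sublicensing collapses 100,000 separate clearances into 3,000, which is the sense in which the "networking mechanism" minimizes the number of negotiations and, subsequently, the transaction costs.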
FCC reported that the existence of cable networks provided “convincing” evidence that the transaction costs associated with full copyright liability are manageable and concluded that the “networking mechanism” appeared well-suited to the acquisition of cable retransmission rights for broadcast signals as well. Moreover, the U.S. Copyright Office reported in 2008 and 2011 that sublicensing is a reasonable alternative to statutory licenses. Cable and satellite operators could negotiate for the secondary transmission rights for broadcast content at the same time, and potentially with the same entities, with which they negotiate retransmission consent for the entire broadcast signal. Although a phaseout appears feasible for most market participants, some may face negative effects and may have difficulty licensing secondary transmission rights to ensure distribution of their video content in the event of a phaseout. Specifically: Small Cable Operators: Based on our analysis of stakeholder interviews and the U.S. Copyright Office’s 2011 report, cable operators with small customer bases may face financial and logistical challenges in the event of a phaseout of the Section 111 license. As the U.S. Copyright Office has reported, small cable operators are particularly vulnerable to increases in the costs of doing business. Similarly, some stakeholders told us they believe that small cable operators do not have the financial and legal resources to adapt to a change in the statutory licenses. These stakeholders were concerned that any increase in transaction costs (due to additional negotiations) or an increase in the actual cost of content may make it difficult for small cable operators to stay in business. Public Television: Public television, in particular, may be negatively affected by a phaseout of the statutory licenses. Public television stations and program suppliers, such as the Public Broadcasting Service (PBS), tend to license a smaller bundle of rights at a relatively low cost.
In addition, producers of public television content rely on the distribution of statutory license copyright royalty fees to supplement the payments received from public television. According to public television stakeholders, given the terms by which content is currently licensed for public television, neither direct licensing nor sublicensing—marketplace alternatives to the statutory licenses—would work. These licensing mechanisms would likely result in cable and satellite operators or public television stations and program suppliers facing significant transaction costs to obtain the necessary secondary transmission rights. As we, along with the U.S. Copyright Office, reported in 2011, there are also concerns that some local public television stations may not have the financial resources to deal with the transaction costs associated with obtaining these rights for all of their content. A phaseout of the statutory licenses could have implications for the must-carry and carry-one, carry-all requirements, as currently implemented. Must-carry: As we have previously reported, if Congress phases out the Section 111 statutory license, cable operators may have difficulty complying with the must-carry requirement. Eliminating this statutory license would remove the current mechanism used by cable operators to license broadcast programming and, unless the must-carry provision was at least revised, would leave cable operators in a seemingly paradoxical situation. Cable operators would be required to transmit, without modification, local broadcast station signals containing copyrighted content for which they might not be able to license the needed public performance rights, or only be able to do so at a potentially significant burden and cost. Carry-one, carry-all: If Congress phases out the Section 122 statutory license, according to FCC and the U.S. Copyright Office, the carry-one, carry-all provision would no longer apply to satellite operators. According to the U.S.
Copyright Office, a repeal of the Section 122 license would render carry-one, carry-all effectively “null and void,” which could have a detrimental effect on television stations that do not or cannot elect retransmission consent due to a poor bargaining position. Local broadcast television stations that currently gain carriage by satellite operators through the carry-one, carry-all provision may no longer be carried in the event of a phaseout of the Section 122 license. In this scenario, cable operators would have a mandatory obligation to carry any local broadcast television station that requested carriage, while satellite operators would face no such obligation. As we have reported in the past, a phaseout of the statutory licenses would not necessarily require modification to the other carriage requirements, specifically retransmission consent and the broadcast exclusivity rules, as discussed below. Retransmission consent: Commercial local broadcast television stations can pursue this option, discussed earlier, when they do not invoke must-carry or carry-one, carry-all. A phaseout of the statutory licenses would not necessarily change how cable and satellite operators negotiate for the rights to transmit commercial local broadcast stations’ signals. However, without the statutory licenses, cable and satellite operators would be required to obtain the rights to retransmit the content embedded on a broadcast station’s signal—the potential mechanisms for obtaining these rights (direct licensing, sublicensing, and collective licensing) are discussed further below. Broadcast exclusivity rules: These include the network non-duplication and syndicated exclusivity rules that, as mentioned earlier, were designed to protect local broadcast stations from competition with broadcast stations imported by cable or satellite carriers from outside the local market being served.
A phaseout of the statutory licenses would not, on its face, require a change to the exclusivity rules. We previously reported that eliminating the exclusivity rules may have varying effects, but these would depend on other federal actions and industry response. Stakeholder support for a phaseout of the statutory licenses varied. Of the 42 selected stakeholders we interviewed, all of whom provided a response to our questions asking their position on a phaseout, 15 supported either a full or partial phaseout of the statutory licenses, and in some cases cited contingent factors: Six of the 42 stakeholders supported a full phaseout of all three statutory licenses. Reasons given for supporting a full phaseout included that direct negotiations and sublicensing to license the rights to distribute broadcast content on online and on-demand platforms are already taking place in the video marketplace. In addition, cable networks aggregate all rights needed for cable and satellite operators to transmit their content to viewers. According to some of these stakeholders, they see no reason the same system could not be employed to address the retransmission of broadcast content by cable and satellite operators. Five of the 42 stakeholders said their support of a full phaseout was contingent on other factors, such as the phaseout or reform of some or all of the carriage requirements. Four of these would support a phaseout of the licenses but only if all carriage requirements were eliminated. One stakeholder overall supported a phaseout of the statutory licenses, but thought the licenses could be retained to help smaller copyright holders. One reason given for wanting either the elimination or amendment of the carriage requirements at the same time as a phaseout was concern that if the requirements remain in effect they could undermine any benefits of a system without the statutory licenses. 
For example, if a sublicensing system was the primary replacement for the current statutory licenses and cable and satellite operators paid for the right to license the underlying broadcast content rights in addition to current fees for retransmission consent, there were concerns that programming costs would increase significantly. Four of the 42 stakeholders said they supported a partial phaseout of the statutory licenses, and all four specified the licenses related to distant signal transmission—section 119 and the relevant portions of section 111—as those that should be phased out. These stakeholders said the statutory licenses were functioning as intended; however, they thought the distant signal portions of the licenses had a negative impact on the marketplace. In contrast, 14 of the 42 selected stakeholders said they did not support a phaseout. Five of the 14 specified that they did not support a phaseout of the statutory licenses for stations that elect must-carry or carry-one, carry-all, but did not take a position on what should happen to the statutory licenses for commercial local broadcast television stations that participate in retransmission consent negotiations. This position is in part due to the issues discussed above, such as financial concerns if noncommercial participants had to license a larger bundle of rights through market-based negotiations. Half of these stakeholders (7 of 14 stakeholders) said that the current system is operating as intended. These stakeholders were concerned that changes to the system might unfairly benefit one industry segment over another as well as possibly have unintended consequences that could damage the video marketplace. For example, stakeholders had concerns that sublicensing or direct licensing could add transaction costs, reduce efficiencies gained through the statutory license system, and hurt the video marketplace.
Thirteen of the 42 selected stakeholders told us they had no position on a possible phaseout. One of these stakeholders told us he or she did not have a position on a phaseout because of other, higher-priority regulatory and legislative concerns requiring the stakeholder’s attention. Additionally, the five stakeholders representing OVDs are not a part of the statutory licensing system, so this issue was not relevant to their role in the video marketplace. Selected stakeholder views on how a phaseout might affect the video marketplace were varied and appeared to be influenced by uncertainty related to the carriage requirements, marketplace alternatives, and competition. Given this uncertainty, generally only about half or fewer of the 42 selected stakeholders interviewed had a position on these issues. Carriage Requirements: Of the 42 selected stakeholders we interviewed, on any given requirement, only about half (19 to 22 stakeholders) had a position on what should happen with the carriage requirements if the statutory licenses were phased out. Those without a position who offered a reason cited not wanting to speculate on the issue, in some cases due to uncertainty about what, if any, federal actions would be taken and what the impacts of those actions on the video marketplace might be. As discussed above, if the statutory licenses are fully or partially phased out, it may be necessary to adjust the must-carry requirement. For example, several stakeholders representing cable operators told us that the must-carry requirement would put cable operators in a difficult position if the statutory licenses were phased out. Specifically, must-carry would require carriage of a station, but cable operators would have no guarantee that the rights for the content on a station’s signal were licensed for secondary transmission.
Similarly, public television stakeholders told us that without the statutory licenses, the party responsible for obtaining the secondary transmission rights—either public television stations or program suppliers—would face difficulties due to high transaction costs. Some selected stakeholders also raised concerns about how, if the licenses were phased out, retransmission consent negotiations for the commercial local broadcast television station signal would co-exist with market-based negotiations to license the rights for the broadcast station’s content. Marketplace Alternatives: Similarly, of the 30 selected stakeholders asked, about half (16 of 30) had no position on which marketplace alternative could replace the current statutory licensing system. Among these stakeholders, some cited uncertainty about whether or when a phaseout would occur, as well as not wanting to comment because they do not support a phaseout. The U.S. Copyright Office outlined in 2011 three possible marketplace alternatives—sublicensing, direct licensing, or collective licensing—that could replace the current system of statutory licenses. Additionally, the office outlined three approaches—statutory sunset, distant signal first, and station-by-station—for conducting a phaseout. The marketplace alternative used to license the secondary transmission rights for broadcast content after a phaseout would affect video marketplace negotiations and associated transaction costs. Of the 30 selected stakeholders we asked, about half had a position on a preferred marketplace alternative or how a phaseout of statutory licenses should be conducted. Some stakeholders (5) noted that sublicensing, collective licensing, and direct licensing already occur in the current video marketplace to license video content on cable channels and on most distribution platforms (e.g., over-the-air transmission, online, and on-demand).
As discussed above, with market-based negotiations already occurring, a phaseout of the licenses may be feasible. However, the approach used to conduct a phaseout, and the timing of its implementation, could also affect the marketplace. Of the stakeholders with a position, 9 of 10 said a statutory sunset may be the best option, as it would allow video marketplace participants time to renegotiate any existing broadcast programming and signal carriage contracts. For example, one broadcast network noted that contracts related to broadcasting of live events, such as professional sports, often cover a multi-year period, and a statutory sunset might help avoid a disruption in carriage. Competition: Of the 30 selected stakeholders asked, about half (16) had a position on the potential effects of a phaseout on competition in the video marketplace. Of those, only one thought there would be no impact on competition. Five of these stakeholders found it difficult to comment on how competition would change, in part due to not knowing how, if at all, carriage requirements might change and which marketplace alternatives might take the place of the statutory licenses. Of the 10 that indicated a possible effect on competition, 5 stakeholders cited a potential increase in competition in the marketplace as a possible result of phasing out the statutory licenses. According to one selected stakeholder, an increase in competition could mean a decrease in programming costs, as new market entrants might spur the creation of more original content. However, another stakeholder said that an increase in competition as more participants enter the marketplace could result in higher programming costs. Essentially, more entities would be competing over the same content.
Given the uncertainty about the implications of a phaseout on the carriage requirements and the video marketplace discussed above, most stakeholders did not have a position regarding the effect of a phaseout on consumer access to programming and prices paid for cable and satellite television. Of the 42 selected stakeholders we interviewed, about half or fewer provided responses on a range of possible impacts on consumers’ access and prices. As we have previously reported, a phaseout of the statutory licenses has the potential to result in disruptions of local broadcast television stations’ signals being aired by cable and satellite providers. However, the overall impact on the nature of content available and consumer access is unclear. Consumers might see their access to local broadcast television stations disrupted due to disagreements over the price and terms to retransmit a broadcast station’s content, and it is unclear what impact a phaseout might have on the nature and availability of this content. Blackouts: Programming disruptions, often termed blackouts, occur when a cable or satellite operator and a local broadcast television station owner are unable to reach agreement on the carriage of the station’s signal, usually during retransmission consent negotiations. This disagreement results in a broadcast station’s signal not being retransmitted to viewers via the cable or satellite operator, disrupting their access to the content on a local broadcast television station’s signal. Blackouts have increased in recent years, as shown in figure 4. Of the 37 selected stakeholders asked, 17 provided responses to our questions about blackouts. These stakeholders were unsure whether blackouts in a post-phaseout marketplace would change because of uncertainty around the status of the carriage requirements or other marketplace changes. Six of 17 stakeholders thought blackouts would definitely increase in the event of a phaseout of the statutory licenses.
These stakeholders said this would occur because of the potential for additional negotiations, both in number and in the number of parties involved, which could create more opportunity for holdouts and lack of agreement. The remaining 11 stakeholders were split between those (6) who expected no change and those (5) who thought the impacts on blackouts would vary depending on the marketplace alternative selected and the video marketplace response to it, and were therefore difficult to predict. None of the selected stakeholders thought blackouts would decrease if the statutory licenses were phased out. Programming Diversity: Of the 42 selected stakeholders interviewed, 17 commented on how the diversity of programming, such as the nature and availability of content, might be affected if the statutory licenses were phased out. One of these stakeholders thought there could be an increase in the diversity of programming, as the phaseout of the licenses could spur the development of new content in the marketplace. Three selected stakeholders thought there would be no change to the availability and diversity of programming. Another five stakeholders were unsure of what impact might occur because there were unknown factors related to a phaseout that could affect either the availability or diversity of programming. For example, depending on the marketplace alternative selected to replace the statutory licenses, or if any changes were made to the relevant carriage requirements discussed above, the diversity of programming could be affected or stay the same. Seven of the 17 responding stakeholders thought the availability and diversity of content would decrease. Of those that provided additional explanation of their position (6), concerns about less diverse content being available stemmed, in part, from uncertainty over how potential changes to carriage requirements may affect niche programming. For example, if a system has greater transaction costs, availability of programming could decrease.
Representatives from one selected industry stakeholder organization said that smaller content producers with less popular programming would have limited leverage in a direct licensing system, in part due to increased transaction costs. Additionally, as discussed above, if in addition to the statutory licenses being phased out, the must-carry requirements were also eliminated, public television and other stations that currently elect must-carry could have difficulty licensing the secondary transmission rights and thus might have to alter or diminish their programming. In our interviews with satellite operators, some cable operators, and public television stakeholders, these stakeholders also raised concerns that consumers might face decreased access to programs in the event of a phaseout of the distant signal licenses (portions of the Section 111 license and the Section 119 license), although the extent of the effect on consumers is not clear due to changes in the marketplace. Satellite operators and one cable operator raised the concern that without the distant signal licenses, no marketplace alternative for distant signals would develop. One satellite provider we spoke with said that without the distant signal licenses, no marketplace alternative to provide distant signals may emerge, as commercial broadcast networks do not have any financial incentive to allow their affiliates’ signals to be transmitted outside of the intended local market. These stakeholders noted that in some local television markets, known as “short markets,” there is no local broadcast affiliate station for at least one of the major broadcast networks. Without distant signal importation, these markets may not have access to one or more local broadcast television stations affiliated with a major network. However, according to U.S. Copyright Office data, the most widely viewed distant signal by satellite service subscribers in the United States is that of WGN, a superstation.
The effects of a partial phaseout of the statutory licenses on the viewers of distant signals may be mitigated by the recent conversion of WGN into a cable network. In Statement of Account filings with the U.S. Copyright Office, the two satellite service providers no longer reported carrying WGN as a superstation during the latter part of the 2015 accounting period and subsequently saw a decrease in the number of their subscribers receiving distant signals. Public television stakeholders raised the concern that a phaseout of the distant signal licenses may actually lead to some local markets not being served by a public television station. These stakeholders noted that in the event a local public television station can no longer provide service in a community, it may be necessary to import a public television station from another market as a distant signal. However, without the distant signal licenses, these public television stakeholders said it would be very difficult to license the needed public performance rights for all the content embedded in the signal of a distant public television station. Of the 37 selected stakeholders asked, 29 responded to our questions about the impacts of a phaseout on consumer prices. Among these stakeholders, there was a general recognition that uncertainty about how the video marketplace would react to a phaseout of the statutory licenses and about what, if any, changes would be made in the regulatory environment made it difficult to speculate on the effects on consumer prices. Thirteen of the 29 who commented on this issue thought consumer prices would increase, in part due to anticipated increases in transactions costs under a new licensing system. In general, an increase in costs to providers of a service may lead to higher consumer prices for the service. 
If, in the event of a phaseout, the marketplace alternative selected to replace the current statutory license structure entails greater transaction costs, then it is possible that some or all of those costs would be passed on to the consumer. Conversely, if there were no change to stakeholder transaction costs, then there should be no change to consumer prices. This outcome is consistent with our prior work: the effect of a phaseout on consumer prices for cable and satellite television is unclear. However, another 15 selected stakeholders were unsure of the effects or thought there would be no change in consumer prices. Only one cable and satellite provider thought changes to the system would create a downward pricing pressure to lower consumer prices. We provided a draft of this report to the FCC and the U.S. Copyright Office for review and comment. FCC provided technical comments, which we incorporated as appropriate. The U.S. Copyright Office stated that it agreed with our finding that a phaseout of the statutory licenses may be feasible for most market participants. We are sending copies of this report to the appropriate congressional committees, the Chairman of the FCC, and the Register of Copyrights. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The Satellite Television Extension and Localism Act Reauthorization Act of 2014 included a provision for us to study and evaluate the possible effects of phasing out statutory licensing of the secondary transmission of television broadcast programming.
This report examines (1) what is known about the potential feasibility of a phaseout of the statutory licenses and (2) selected stakeholders’ views on the implications of a phaseout of the statutory licenses. To address both objectives, we reviewed relevant statutes and regulations, including sections 111, 119, and 122 of title 17 of the United States Code, as well as U.S. Copyright Office, GAO, Congressional Research Service, and Federal Communications Commission (FCC) reports to, among other things, determine how the licenses work, identify alternatives to statutory licenses, and understand how the marketplace might function if the statutory licenses were phased out. We conducted semi-structured interviews with or obtained written comments from 42 stakeholders—including experts, industry associations, and industry participants (broadcast networks, broadcast station owners, cable and satellite operators, online video distributors, and content producers/copyright owners). We selected these individuals and organizations based on published literature, including U.S. Copyright Office filings and reports, our previous work, the stakeholders’ recognition and affiliation with a segment of the video marketplace, and recommendations from other stakeholders. We conducted a content analysis of these interviews to determine how the video marketplace has changed in the last 5 years; the potential feasibility of a phaseout of the statutory licenses based on the types of rights licensed in the video marketplace and the negotiations stakeholders participate in; and the potential implications of a phaseout for the carriage requirements, for the video marketplace, and for consumers’ access to programming and the prices consumers pay for cable and satellite services. We spoke with six experts, including analysts with Pivotal Research, Huber Research, MoffettNathanson, and BTIG. We also spoke with Preston Padden, a telecommunications expert, and Gregory Crawford, former Chief Economist at FCC.
We also spoke with eight industry associations and one public interest group:
American Cable Association (ACA)
Association of Public Television Stations (APTS)
Digital Media Association (DiMA)
Independent Film and Television Alliance (IFTA)
Motion Picture Association of America (MPAA)
National Association of Broadcasters (NAB)
National Cable and Telecommunications Association (NCTA)
NTCA – The Rural Broadband Association
We also interviewed 27 entities that participate in the video marketplace—either producing content (or holding the copyrights to content), aggregating content, or distributing content. Table 2 contains those industry participants we interviewed along with their role(s) in the marketplace. To understand how the video marketplace is changing and the factors influencing changes in the consumption and distribution of video content, we reviewed FCC, U.S. Copyright Office, Congressional Research Service, GAO, and industry reports. In addition, to understand the potential feasibility of a phaseout of the statutory licenses, we analyzed FCC’s Cable Service Price survey data from 2010 through 2014, and computer-processed data from Bloomberg Analytics on nationwide use of cable and satellite video services and trends in cable and satellite subscription rates from 2010 through 2014, the most recent available data. Our data analysis provided context on how the video marketplace operates, such as the number of local broadcast television stations carried by cable operators, and how changes in the video marketplace make a phaseout of the statutory licenses appear feasible. FCC’s cable price survey was based on a stratified random sample design with selection probabilities proportional to the size of the community and a response rate of around 97 percent.
Following FCC’s survey methodology, we analyzed FCC’s 2010 through 2014 cable price survey data using complex survey software, accounting for the sample design and weights to produce estimates generalizable to the population of communities on a per-subscriber basis. Because FCC’s cable price survey followed a probability procedure based on random selections, the selected sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of the particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 5 percent). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Within this report, all numerical estimates based on FCC data have margins of error of plus or minus 3.2 percent or less of the value of those numerical estimates. To determine how a phaseout of the statutory licenses could affect consumer access to cable and satellite television service and television programming, we used U.S. Copyright Office calendar year 2014 Statement of Account data to understand the number of satellite subscribers receiving distant signals. Statement of Account filings covering 2014 were the most recent full year of data available. To understand how consumer access might be affected by a phaseout, we reviewed a summary of SNL Kagan LC data, provided by FCC, on local broadcast television station signal blackouts from 2011 through 2015. We assessed the reliability of the data used in this report by reviewing existing information about the data and the systems that produced them and by interviewing officials from FCC and the U.S. Copyright Office about measures taken to ensure the reliability of the data. We determined the data were sufficiently reliable for our purposes.
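The confidence-interval logic described above can be illustrated with a short sketch. This is not FCC’s or GAO’s actual analysis code: the prices and weights below are hypothetical, and a real analysis would use complex survey software that accounts for stratification and the survey design.

```python
# Illustrative sketch, not FCC's or GAO's actual analysis code: how a
# design-weighted estimate and an approximate 95 percent confidence
# interval are formed from survey responses. Prices and weights below
# are hypothetical.
import math

# (price per subscriber, sampling weight) pairs -- hypothetical values
responses = [(64.41, 120.0), (71.25, 80.0), (58.99, 150.0), (69.10, 95.0)]

def weighted_mean(data):
    total_w = sum(w for _, w in data)
    return sum(p * w for p, w in data) / total_w

def ci_half_width(data, z=1.96):
    """Approximate 95 percent confidence-interval half-width.

    Treats the weights as frequency weights; a production analysis
    would account for stratification and the full sample design,
    as the report describes.
    """
    mean = weighted_mean(data)
    total_w = sum(w for _, w in data)
    variance = sum(w * (p - mean) ** 2 for p, w in data) / total_w
    standard_error = math.sqrt(variance / len(data))
    return z * standard_error

estimate = weighted_mean(responses)
half_width = ci_half_width(responses)
print(f"estimated price: ${estimate:.2f} plus or minus ${half_width:.2f}")
```

The half-width is what a report quotes as the margin of error; a wider interval (e.g., 99 percent, z = 2.58) trades precision for confidence.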
We conducted this performance audit from June 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Derrick Collins (Assistant Director), Amy Abramowitz, Sarah Arnett, Michael Clements, Juan Garcia, Samuel Hinojosa, David Hooper, Sara Ann Moessbauer, Malika Rice, Amy Rosewarne, Jerome Sandau, Sonya Vartivarian, and Betsey Ward-Jenks made key contributions to this report.
Most U.S. households rely on cable or satellite operators to watch television broadcast programming. These operators are able to provide their subscribers with broadcast programming—including local news—by retransmitting local broadcast television stations' over-the-air signals. Three statutory licenses permit operators to offer copyrighted broadcast programming in return for paying a government-set royalty fee. For 2014, these fees totaled about $320 million. Congress created statutory licenses as a cost-effective way for operators to air broadcast programming without obtaining permission to do so from those that own the copyrights for this programming. However, changes in the video marketplace have led some industry stakeholders to question the need for the licenses. The Satellite Television Extension and Localism Act Reauthorization Act of 2014 included a provision for GAO to review possible effects of phasing out the statutory licenses. This report addresses (1) what is known about the feasibility of phasing out the statutory licenses and (2) views of selected stakeholders on the implications of such a phaseout. GAO analyzed FCC's cable price data from 2010 to 2014 and the U.S.
Copyright Office's royalty data from 2014, the most recently available; reviewed relevant laws and reports; and interviewed 42 industry stakeholders, selected for their role in the video marketplace and expertise on the issue. A phaseout of the statutory licenses for broadcast programming may be feasible for most participants in the video marketplace, although there may be statutory implications for the “carriage requirements” governing which local broadcast television stations are carried by cable and satellite operators. These licenses allow cable and satellite operators to carry copyrighted content, such as television shows and movies, embedded in local broadcast stations' signals to their subscribers' television sets without negotiating with individual copyright owners. At the same time, these cable and satellite operators also engage in market-based negotiations to make some or all of this content available in other contexts, such as online. Of the 42 selected stakeholders GAO interviewed, 21 either use the statutory licenses or have their content provided through the statutory licenses. Twenty of these 21 stakeholders—including content producers, broadcast networks, and cable and satellite operators—also engage in market-based negotiations to license broadcast content for video-on-demand or online viewing. Therefore, for stakeholders representing these business interests, a market-based approach to licensing secondary transmission rights may be feasible. However, some participants in the video marketplace—most notably, public television and small cable operators—may face logistical challenges and financial constraints in the event of a phaseout of the statutory licenses. Phasing out the statutory licenses could have implications for the “must-carry” and “carry-one, carry-all” requirements, which require cable and satellite operators, respectively, to carry the signals of local broadcast television stations upon request.
As GAO has previously reported, the must-carry requirement could become impractical if Congress phased out the statutory license that applies to cable operators, as these operators could find themselves in the paradoxical position of being required to transmit copyrighted content on a local broadcast television station's signal that they may not have the legal right to air. In addition, according to the Federal Communications Commission (FCC) and the U.S. Copyright Office, the carry-one, carry-all requirement would no longer apply to satellite operators if the applicable statutory license were phased out because the requirement is premised on the use of the license. The 42 selected stakeholders GAO interviewed varied in their support for a phaseout of the statutory licenses, and many stakeholders were uncertain about the potential effects on the marketplace and consumers. For example:
15 supported a full or partial phaseout, 13 did not have a position, and 14 did not support a phaseout, largely because they believe the current system works.
About half were uncertain how a phaseout would affect the video marketplace; this uncertainty stems from not knowing how the carriage requirements might change and how the video marketplace would respond.
10 thought a phaseout would affect competition in the market but differed on whether this would increase or decrease programming costs.
6 thought consumers' access to programming would be negatively affected, 7 thought the diversity of programs offered would decrease, and 13 thought consumer prices would rise.
The LDA requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and to file quarterly reports disclosing their lobbying activity. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point. Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. No specific statutory requirements exist for lobbyists to generate or maintain documentation in support of the information disclosed in the reports they file. However, guidance issued by the Secretary of the Senate and the Clerk of the House recommends that lobbyists retain copies of their filings and documentation supporting reported income and expenses for at least 6 years after they file their reports. The LDA requires that the Secretary of the Senate and the Clerk of the House provide guidance and assistance on the registration and reporting requirements and develop common standards, rules, and procedures for LDA compliance. The Secretary of the Senate and the Clerk of the House review the guidance semiannually. It was last reviewed on December 15, 2014. The last revision occurred on February 15, 2013, to, among other issues, update the reporting thresholds for inflation. The guidance provides definitions of terms in the LDA, elaborates on the registration and reporting requirements, includes specific examples of different scenarios, and provides explanations of why certain scenarios prompt or do not prompt disclosure under the LDA. The offices of the Secretary of the Senate and the Clerk of the House told us they continue to consider information we report on lobbying disclosure compliance when they periodically update the guidance. In addition, they told us they e-mail registered lobbyists quarterly on common compliance issues and reminders to file reports by the due dates.
The LDA defines a lobbyist as an individual who is employed or retained by a client for compensation, who has made more than one lobbying contact (written or oral communication to covered officials, such as a high ranking agency official or a Member of Congress, made on behalf of a client), and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who lobby on behalf of a client other than that person or entity. Figure 1 provides an overview of the registration and filing process. Lobbying firms are required to register with the Secretary of the Senate and the Clerk of the House for each client if the firms receive or expect to receive over $3,000 in income from that client for lobbying activities. Lobbyists are also required to submit an LD-2 quarterly report for each registration filed. The LD-2s contain information that includes:
the name of the lobbyist reporting on quarterly lobbying activities;
the name of the client for whom the lobbyist lobbied;
a list of individuals who acted as lobbyists on behalf of the client during the reporting period;
whether any lobbyists served in covered positions in the executive or legislative branch, such as high ranking agency officials or congressional staff positions, in the previous 20 years;
codes describing general issue areas, such as agriculture and education;
a description of the specific lobbying issues;
houses of Congress and federal agencies lobbied during the reporting period; and
reported income (or expenses for organizations with in-house lobbyists) related to lobbying activities during the quarter (rounded to the nearest $10,000).
The LDA also requires lobbyists to report certain political contributions semiannually in the LD-203 report.
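The registration and lobbyist-definition thresholds described above can be expressed as two simple checks. A minimal sketch follows; the function and parameter names are our own illustration, not an official API, and the sketch ignores statutory details such as exemptions.

```python
# Minimal sketch of the LDA registration tests described above. The
# thresholds (more than one lobbying contact, at least 20 percent of
# time for the client, over $3,000 in income from the client) come from
# the report; the function and parameter names are illustrative.

def meets_lobbyist_definition(contacts: int, share_of_time: float,
                              compensated: bool) -> bool:
    """True if an individual meets the LDA definition of a lobbyist:
    compensated, more than one lobbying contact, and at least 20 percent
    of time spent on behalf of the client during the quarter."""
    return compensated and contacts > 1 and share_of_time >= 0.20

def firm_must_register(expected_income: float) -> bool:
    """True if a lobbying firm must register for a client: it receives
    or expects to receive over $3,000 in income from that client."""
    return expected_income > 3_000

print(meets_lobbyist_definition(2, 0.25, True))  # True: all three tests met
print(firm_must_register(2_500))                 # False: not over $3,000
```

All three conditions of the definition must hold together, which is why a single failed test (for example, only one contact) means an individual is not a lobbyist under the LDA regardless of the other two.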
These reports must be filed 30 days after the end of a semiannual period by each lobbying firm registered to lobby and by each individual listed as a lobbyist on a firm’s lobbying report. The lobbyists or lobbying firms must:
list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which he or she contributed at least $200 in the aggregate during the semiannual period;
report contributions made to presidential library foundations and presidential inaugural committees;
report funds contributed to pay the cost of an event to honor or recognize a covered official, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official, or to pay the costs of a meeting or other event held by or in the name of a covered official; and
certify that they have read and are familiar with the gift and travel rules of the Senate and House and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules.
The Secretary of the Senate and the Clerk of the House, along with the U.S. Attorney’s Office (USAO), are responsible for ensuring LDA compliance. The Secretary of the Senate and the Clerk of the House notify lobbyists or lobbying firms in writing when they are not complying with LDA reporting requirements. Subsequently, they refer those lobbyists who fail to provide an appropriate response to USAO. USAO researches these referrals and sends additional noncompliance notices to the lobbyists or lobbying firms, requesting that they file reports or terminate their registration. If USAO does not receive a response after 60 days, it decides whether to pursue a civil or criminal case against each noncompliant lobbyist.
A civil case could lead to penalties of up to $200,000 for each violation, while a criminal case—usually pursued if a lobbyist’s noncompliance is found to be knowing and corrupt—could lead to a maximum of 5 years in prison. Generally, under the LDA, within 45 days of being employed or retained to make lobbying contacts on behalf of a client, the lobbyist must register by filing an LD-1 form with the Clerk of the House and the Secretary of the Senate. Thereafter, the lobbyist must file quarterly disclosure (LD-2) reports detailing the lobbying activities. Of the 3,112 new registrations we identified for the third and fourth quarters of 2014 and the first and second quarters of 2015, we matched 2,743 of them (88.1 percent) to corresponding LD-2 reports filed within the same quarter as the registration. These results are consistent with the findings we have reported in prior reviews. We used the House lobbyists’ disclosure database as the source of the reports. We also used an electronic matching algorithm that allows for misspellings and other minor inconsistencies between the registrations and reports. Figure 2 shows lobbyists filed disclosure reports as required for most new lobbying registrations from 2010 through 2015. The Clerk of the House and the Secretary of the Senate will follow up on newly filed registrations where quarterly reports were not filed as part of their regular enforcement procedures. If the Clerk of the House and the Secretary of the Senate are unsuccessful in bringing the lobbyist into compliance, they may refer those cases to USAO, as described earlier in figure 1. For selected elements of lobbyists’ LD-2 reports that can be generalized to the population of lobbying reports, our findings have been consistent from year to year. Most lobbyists reporting $5,000 or more in income or expenses provided written documentation to varying degrees for the reporting elements in their disclosure reports.
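The report does not describe the matching algorithm in detail, but a tolerant name comparison of the kind described above can be sketched with Python's standard-library difflib. The firm names and the 0.9 similarity threshold below are hypothetical, chosen only to illustrate how minor misspellings can still match.

```python
# Toy illustration of name matching that tolerates misspellings, using
# the standard-library difflib. This is not GAO's actual algorithm;
# the firm names and the 0.9 similarity threshold are hypothetical.
from difflib import SequenceMatcher

def normalized(name: str) -> str:
    # Case-fold and collapse runs of whitespace before comparing.
    return " ".join(name.lower().split())

def is_match(registration_name: str, report_name: str,
             threshold: float = 0.9) -> bool:
    ratio = SequenceMatcher(None, normalized(registration_name),
                            normalized(report_name)).ratio()
    return ratio >= threshold

# A one-character misspelling still matches; a different firm does not.
print(is_match("Acme Lobbying Group LLC", "Acme  Lobying Group LLC"))  # True
print(is_match("Acme Lobbying Group LLC", "Zenith Advocacy Partners")) # False
```

The threshold controls the tradeoff: set too low, unrelated filings are paired; set too high, legitimate filings with typos go unmatched.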
For this year’s review, lobbyists for an estimated 93 percent of LD-2 reports provided written documentation for the income and expenses reported for the third and fourth quarters of 2014 and the first and second quarters of 2015. Figure 3 shows that for most LD-2 reports, lobbyists provided documentation for income and expenses for sampled reports from 2010 through 2015. Figure 4 shows that for some LD-2 reports, lobbyists did not round their income or expenses as the guidance requires. In 2015, we identified 31 percent of reports that did not round reported income or expenses according to the guidance. We have found that rounding difficulties have been a recurring issue on LD-2 reports from 2010 through 2015. As we previously reported, several lobbyists who listed expenses told us that based on their reading of the LD-2 form they believed they were required to report the exact amount. While this is not consistent with the LDA or the guidance, this may be a source of some of the confusion regarding rounding errors. In 2015, 7 percent of lobbyists reported the exact amount of income or expenses. The LDA requires lobbyists to disclose lobbying contacts made to federal agencies on behalf of the client for the reporting period. This year, of the 80 LD-2 reports in our sample, 37 reports disclosed lobbying activities at federal agencies. Of those, lobbyists provided documentation for all lobbying activities at executive branch agencies for 21 LD-2 reports. Figures 5 through 8 show that lobbyists for most LD-2 reports provided documentation for selected elements of their LD-2 reports from 2010 through 2015. Lobbyists for an estimated 85 percent of LD-2 reports in our 2015 sample filed year-end 2014 LD-203 reports for all lobbyists listed on the report as required. All but four firms with reports selected in our sample filed the year-end 2014 LD-203s for the firm. Of those four firms, three filed after we contacted them. 
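Estimated percentages like those above come from a sample and therefore carry sampling margins of error. As a rough illustration only (the standard simple-random-sample formula with hypothetical sample sizes, not GAO's design-based calculation), the margin of error for an estimated proportion can be computed as:

```python
# Simple-random-sample margin-of-error formula for an estimated
# proportion. This is a hedged illustration with hypothetical sample
# sizes; GAO's published margins reflect its specific sample design,
# which this simplified formula does not capture.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95 percent margin of error, in percentage points, for a
    proportion p estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# The margin is largest at p = 0.5, which is why a single "maximum"
# margin of error can be quoted for a whole table of estimates.
print(round(margin_of_error(0.50, 100), 1))  # 9.8
print(round(margin_of_error(0.04, 100), 1))  # 3.8
```

Because the margin shrinks with the square root of the sample size, quadrupling the sample only halves the margin of error.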
Figure 9 shows that lobbyists for most lobbying firms filed contribution reports as required in our sample from 2010 through 2015. All individual lobbyists and lobbying firms reporting lobbying activity are required to file LD-203 reports semiannually, even if they have no contributions to report, because they must certify compliance with the gift and travel rules. The LDA requires a lobbyist to disclose previously held covered positions in the executive or legislative branch, such as high ranking agency officials and congressional staff, when first registering as a lobbyist for a new client. This can be done either on the LD-1 or on the LD-2 quarterly filing when added as a new lobbyist. This year, we estimate that 21 percent of all LD-2 reports may not have properly disclosed one or more previously held covered positions as required. As in our other reports, some lobbyists were still unclear about the need to disclose certain covered positions, such as paid congressional internships or certain executive agency positions. Figure 10 shows the extent to which lobbyists may not have properly disclosed one or more covered positions as required from 2010 through 2015. Lobbyists amended 7 of the 80 LD-2 disclosure reports in our original sample to make changes to previously reported information after we contacted them. Of the 7 reports, 5 were amended after we notified the lobbyists of our review, but before we met with them. The other 2 of the 7 reports were amended after we met with the lobbyists to review their documentation. We consistently find a notable number of amended LD-2 reports in our sample each year following notification of our review. This suggests that sometimes our contact spurs lobbyists to more closely scrutinize their reports than they would have without our review. Table 1 lists reasons lobbying firms in our sample amended their LD-1 or LD-2 reports.
As part of our review, we compared contributions listed on lobbyists’ and lobbying firms’ LD-203 reports against those political contributions reported in the Federal Election Commission (FEC) database to identify whether political contributions were omitted on LD-203 reports in our sample. The sample of LD-203 reports we reviewed contained 80 reports with contributions and 80 reports without contributions. We estimate that overall for 2015, lobbyists failed to disclose one or more reportable contributions on 4 percent of reports. For this element in prior reports, we reported an estimated minimum percentage of reports based on a one-sided 95 percent confidence interval rather than the estimated proportion as shown here. Estimates in the table have a maximum margin of error of 9.6 percentage points. The year to year differences are not statistically significant. Table 2 illustrates that from 2010 through 2015 most lobbyists disclosed FEC reportable contributions on their LD-203 reports as required. In 2015, 10 LD-203 reports were amended in response to our review. As part of our review, 77 different lobbying firms were included in our 2015 sample of LD-2 disclosure reports. Consistent with prior reviews, most lobbying firms reported that they found it “very easy” or “somewhat easy” to comply with reporting requirements. Of the 77 different lobbying firms in our sample, 23 reported that the disclosure requirements were “very easy,” 42 reported them “somewhat easy,” and 10 reported them “somewhat difficult” or “very difficult”. (See figure 11). Most lobbying firms we surveyed rated the definitions of terms used in LD-2 reporting as “very easy” or “somewhat easy” to understand with regard to meeting their reporting requirements. This is consistent with prior reviews. Figures 12 through 16 show what lobbyists reported as their ease of understanding the terms associated with LD-2 reporting requirements from 2012 through 2015. U.S. 
Attorney’s Office (USAO) officials stated that they continue to have sufficient personnel resources and authority under the LDA to enforce reporting requirements. This includes imposing civil or criminal penalties for noncompliance. Noncompliance refers to a lobbyist’s or lobbying firm’s failure to comply with the LDA. According to USAO officials, they have one contract paralegal specialist who primarily handles LDA compliance work. Additionally, there are five civil attorneys and one criminal attorney whose responsibilities include LDA compliance work. In addition, USAO officials stated that the USAO participates in a program that provides Special Assistant United States Attorneys (SAUSA) to the USAO. Some of the SAUSAs assist with LDA compliance by working with the Assistant United States Attorneys and the contract paralegal specialist to contact referred lobbyists or lobbying firms who do not comply with the LDA. USAO officials stated that lobbyists resolve their noncompliance issues by filing LD-2s, LD-203s, or LD-2 amendments or by terminating their registration, depending on the issue. Resolving referrals can take anywhere from a few days to years, depending on the circumstances. During this time, USAO uses summary reports from its database to track the overall number of referrals that are pending or become compliant as a result of the lobbyist receiving an e-mail, phone call, or noncompliance letter. Referrals remain in the pending category until they are resolved. The category is divided into the following areas: “initial research for referral,” “responded but not compliant,” “no response/waiting for a response,” “bad address,” and “unable to locate.” USAO focuses its enforcement efforts primarily on the responded-but-not-compliant group. Officials said USAO attempts to review pending cases every 6 months.
Officials told us that after four unsuccessful attempts have been made, USAO confers with both the Secretary of the Senate and the Clerk of the House to determine whether further action should be taken. In some cases where the lobbying firm is repeatedly referred for not filing disclosure reports but does not appear to be actively lobbying, USAO suspends enforcement actions. USAO monitors these firms, including checking the lobbying disclosure databases maintained by the Secretary of the Senate and the Clerk of the House. If the lobbyist begins to lobby again, USAO will resume enforcement actions. USAO received 2,417 referrals from both the Secretary of the Senate and the Clerk of the House for failure to comply with LD-2 reporting requirements cumulatively for filing years 2009 through 2014. Table 3 shows the number and status of the referrals received and the number of enforcement actions taken by USAO in its effort to bring lobbying firms into compliance. Enforcement actions include USAO attempts to bring lobbyists into compliance through letters, e-mails, and calls. About 52 percent (1,256 of 2,417) of the total referrals received are now compliant because lobbying firms either filed their reports or terminated their registrations. In addition, some of the referrals were found to be compliant when USAO received the referral. Therefore, no action was taken. This may occur when lobbying firms respond to the contact letters from the Secretary of the Senate and the Clerk of the House after USAO received the referrals. About 48 percent (1,150 of 2,417) of referrals are pending further action because USAO could not locate the lobbying firm, did not receive a response from the firm after an enforcement action, or plans to conduct additional research to determine if it can locate the lobbying firm. The remaining 11 referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased.
LD-203 referrals consist of two types: LD-203(R) referrals represent lobbying firms that have failed to file LD-203 reports for their lobbying firm, and LD-203 referrals represent the lobbyists at the lobbying firm who have failed to file their individual LD-203 reports as required. USAO received 1,672 LD-203(R) referrals (cumulatively from 2009 through 2014) and 2,979 LD-203 referrals (cumulatively from 2009 through 2013) from the Secretary of the Senate and the Clerk of the House for lobbying firms and lobbyists for noncompliance with reporting requirements. LD-203 referrals may be more complicated than LD-2 referrals because both the lobbying firm and the individual lobbyists within the firm are each required to file an LD-203. However, according to USAO officials, lobbyists employed by a lobbying firm typically use the firm’s contact information and not the lobbyists’ personal contact information. This makes it difficult to locate a lobbyist who may have left the firm. USAO officials reported that, while many firms have assisted USAO by providing contact information for lobbyists, they are not required to do so. According to officials, USAO has difficulty pursuing LD-203 referrals for lobbyists who have departed a firm without leaving forwarding contact information with the firm. While USAO uses web searches and online databases, including LinkedIn, Lexis/Nexis, Glass Door, Facebook, and the Sunlight Foundation websites, to find these missing lobbyists, it is not always successful. When USAO cannot locate a lobbyist who has left a firm without forwarding contact information, it has no recourse to pursue enforcement action, according to officials. Table 4 shows the status of LD-203(R) referrals received and the number of enforcement actions taken by USAO in its effort to bring lobbying firms into compliance.
About 53 percent (888 of 1,672) of the lobbying firms referred by the Secretary of the Senate and Clerk of the House for noncompliance from the 2009 through 2014 reporting periods are now considered compliant because the firms either filed their reports or terminated their registrations. About 47 percent (783 of 1,672) of the referrals are pending further action. Table 5 shows that USAO received 2,979 LD-203 referrals from the Secretary of the Senate and Clerk of the House for lobbyists who failed to comply with LD-203 reporting requirements for calendar years 2009 through 2013. It also shows the status of the referrals received and the number of enforcement actions taken by USAO in its effort to bring lobbyists into compliance. In addition, table 5 shows that 46 percent (1,366 of 2,979) of the lobbyists had come into compliance by filing their reports or were no longer registered as lobbyists. About 54 percent (1,604 of 2,979) of the referrals are pending further action because USAO could not locate the lobbyist, did not receive a response from the lobbyist, or plans to conduct additional research to determine whether it can locate the lobbyist. Table 6 shows that USAO received LD-203 referrals from the Secretary of the Senate and Clerk of the House for 4,131 lobbyists who failed to comply with LD-203 reporting requirements for any filing year from 2009 through 2013. It also shows the status of compliance for individual lobbyists listed on referrals to USAO. About 50 percent (2,070 of 4,131) of the lobbyists had come into compliance by filing their reports or were no longer registered as lobbyists. About 50 percent (2,061 of 4,131) of the referrals are pending action because USAO could not locate the lobbyists, did not receive a response from the lobbyists, or plans to conduct additional research to determine whether it can locate the lobbyists. 
USAO officials said that many of the pending LD-203 referrals represent lobbyists who no longer lobby for the lobbying firms affiliated with the referrals, even though these firms may be listed on the lobbyist’s LD-203 report. According to USAO officials, lobbyists who repeatedly fail to file reports are labeled chronic offenders and referred to one of the assigned attorneys for follow-up. According to officials, USAO monitors and reviews chronic offenders to determine appropriate enforcement actions. This may lead to settlements or other successful civil actions. However, instead of pursuing a civil penalty, USAO may decide to pursue other actions, such as closing out referrals if the lobbyist appears to be inactive. According to USAO, in these cases, there would be no benefit in pursuing enforcement actions. In August 2015, USAO finalized a $125,000 settlement with the Carmen Group to address its failure to file for several years. This is the largest civil penalty assessed under the LDA to date. USAO reports that it is currently collecting payments on two cases that will be closed soon and has three cases that should result in further action in the next 6 months. We provided a draft of this report to the Attorney General for review and comment. The Department of Justice provided updated data, which we incorporated into the report. We are sending copies of this report to the Attorney General, the Secretary of the Senate, the Clerk of the House of Representatives, and interested congressional committees and members. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2717 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Our objectives were to determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA), by providing documentation to support information contained on registrations and reports filed under the LDA; to identify challenges and potential improvements to compliance, if any; and to describe the resources and authorities available to the U.S. Attorney’s Office for the District of Columbia (USAO), its role in enforcing LDA compliance, and the efforts it has made to improve LDA enforcement. We used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives (Clerk of the House). To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and consulted with knowledgeable officials. Although registrations and reports are filed through a single web portal, each chamber subsequently receives copies of the data and follows different data-cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases caused by the differences in data processing. For example, Senate staff told us during previous reviews that they set aside a greater proportion of registration and report submissions than the House does for manual review before entering the information into the database. As a result, the Senate database would be slightly less current than the House database on any given day, pending review and clearance. House staff told us during previous reviews that they rely heavily on automated processing. 
In addition, while they manually review reports that do not perfectly match information on file for a given lobbyist or client, staff members will approve and upload such reports as originally filed by each lobbyist, even if the reports contain errors or discrepancies (such as a variant spelling of a name). Nevertheless, we have no reason to believe that the content of the Senate and House systems would vary substantially. Based on interviews with knowledgeable officials and a review of documentation, we determined that House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure (LD-2) reports and for assessing whether newly registered lobbyists also filed required reports. We used the House database for sampling LD-2 reports from the third and fourth quarters of 2014 and the first and second quarters of 2015, as well as for sampling year-end 2014 and midyear 2015 political contributions (LD-203) reports. We also used the database for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process. However, we did consult with officials from each office, who provided us with general background information at our request. To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 80 LD-2 reports from the third and fourth quarters of 2014 and the first and second quarters of 2015. We excluded reports with no lobbying activity or with income or expenses of less than $5,000 from our sampling frame. We drew our sample from the 45,565 activity reports filed for the third and fourth quarters of 2014 and the first and second quarters of 2015 available in the public House database, as of our final download date for each quarter. 
Our sample of LD-2 reports was not designed to detect differences over time. However, we conducted tests of significance for changes from 2010 to 2015 for the generalizable elements of our review and found that results were generally consistent from year to year; there were few statistically significant changes after using a Bonferroni adjustment to account for multiple comparisons. These changes are identified in the report. While the results provide some confidence that apparent fluctuations in our results across years are likely attributable to sampling error, the inability to detect significant differences may also be related to the nature of our sample, which was relatively small and was designed only for cross-sectional analysis. Our sample is based on a stratified random selection, and it is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This interval would contain the actual population value for 95 percent of the samples that we could have drawn. The percentage estimates for LD-2 reports have 95 percent confidence intervals of within plus or minus 12.1 percentage points or fewer of the estimate itself. Using a structured web-based survey, we asked each lobbying firm in our sample whether it could provide documentation for key elements of its LD-2 report: the amount of income reported for lobbying activities; the amount of expenses reported on lobbying activities; the names of the lobbyists listed in the report; the houses of Congress and federal agencies that they lobbied; and the issue codes listed to describe their lobbying activity. After reviewing the survey results for completeness, we interviewed lobbyists and lobbying firms to review the documentation they reported as having in their online survey responses for selected elements of their respective LD-2 reports. Prior to each interview, we conducted a search to determine whether lobbyists properly disclosed their covered positions as required by the LDA. 
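The sampling-error bounds described in the methodology above can be illustrated with a short sketch. This is illustrative only, not GAO's actual estimation code: it uses the normal approximation for a simple random sample, whereas the report's stated bounds (such as plus or minus 12.1 percentage points) reflect the stratified design.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95 percent confidence interval for an estimated proportion,
    using the normal approximation for a simple random sample."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

def bonferroni_alpha(alpha, num_comparisons):
    """Adjusted per-test significance level when testing many
    year-to-year changes, to account for multiple comparisons."""
    return alpha / num_comparisons

# For a sample of 80 reports, the half-width is largest at p = 0.5
# (about 11 percentage points under simple random sampling).
p, lo, hi = proportion_ci(40, 80)
```

A Bonferroni adjustment with, say, 10 comparisons at an overall 0.05 level would test each change at the 0.005 level, which is why few year-to-year changes reach significance.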
We reviewed the lobbyists’ previous work histories by searching lobbying firms’ websites, LinkedIn, Leadership Directories, Legistorm, and Google. Prior to 2008, lobbyists were only required to disclose covered official positions held within 2 years of registering as a lobbyist for the client. The Honest Leadership and Open Government Act of 2007 amended that time frame to require disclosure of covered official positions held within 20 years of the date the lobbyist first lobbied on behalf of the client. Lobbyists are required to disclose previously held covered official positions either on the client registration (LD-1) or on an LD-2 report. Consequently, those who held covered official positions may have disclosed the information on the LD-1 or on an LD-2 report filed prior to the report we examined as part of our random sample. Therefore, where we found evidence that a lobbyist previously held a covered official position, and that information was not disclosed on the LD-2 report under review, we conducted an additional review of the publicly available Secretary of the Senate or Clerk of the House database to determine whether the lobbyist properly disclosed the covered official position on a prior report or LD-1. Finally, if a lobbyist appeared to hold a covered position that was not disclosed, we asked for an explanation during the interview with the lobbying firm to ensure that our research was accurate. In previous reports, we reported the lower bound of a 90 percent confidence interval to provide a minimum estimate of omitted covered positions and omitted contributions with a 95 percent confidence level. We did so to account for the possibility that our searches may have failed to identify all possible omitted covered positions and contributions. As we have developed our methodology over time, we have become more confident in the comprehensiveness of our searches for these items. 
Accordingly, this report presents the estimated percentages for omitted contributions and omitted covered positions, rather than the minimum estimates. As a result, percentage estimates for these items will differ slightly from the minimum percentage estimates presented in prior reports. In addition to examining the content of the LD-2 reports, we confirmed whether the most recent LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. To determine if the LDA’s requirement for lobbyists to file a report in the quarter of registration was met for the third and fourth quarters of 2014 and the first and second quarters of 2015, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using an electronic matching algorithm that includes strict and loose text matching procedures, we identified matching disclosure reports for 2,743, or 88.1 percent, of the 3,112 newly filed registrations. We began by standardizing client and lobbyist names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as “company” and “CO”). We then matched reports and registrations using the House identification number (which is linked to a unique lobbyist-client pair), as well as the names of the lobbyist and client. For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and lobbyist name, allowing for variations in the names to accommodate minor misspellings or typos. 
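The two-stage matching described above, a strict match on the House identification number and standardized names followed by a looser match that tolerates minor misspellings, can be sketched as follows. The field names, abbreviation table, and similarity threshold are illustrative assumptions, not GAO's actual algorithm:

```python
import difflib
import re

# Hypothetical (partial) standardization table, e.g. "company" -> "co".
ABBREVIATIONS = {"company": "co", "incorporated": "inc", "corporation": "corp"}

def standardize(name):
    """Lowercase, strip punctuation, and normalize common abbreviations."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    return " ".join(ABBREVIATIONS.get(w, w) for w in name.split())

def match_registrations(registrations, reports, threshold=0.9):
    """Pair registrations with reports: first by House ID plus
    standardized lobbyist/client names, then by loose name similarity."""
    matched = {}
    for reg in registrations:
        reg_key = (reg["house_id"], standardize(reg["lobbyist"]),
                   standardize(reg["client"]))
        for rep in reports:
            rep_key = (rep["house_id"], standardize(rep["lobbyist"]),
                       standardize(rep["client"]))
            if reg_key == rep_key:  # strict match
                matched[reg["house_id"]] = rep
                break
        else:  # loose match: tolerate minor misspellings or typos
            for rep in reports:
                sim = difflib.SequenceMatcher(
                    None, standardize(reg["client"]),
                    standardize(rep["client"])).ratio()
                if sim >= threshold:
                    matched[reg["house_id"]] = rep
                    break
    return matched
```

As the text notes, pairs produced only by the loose stage would still be reviewed by hand before being accepted as matches.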
For these cases, we used professional judgment to determine whether cases with typos were sufficiently similar to consider as matches. We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed stratified random samples of LD-203 reports from the 29,189 total LD-203 reports. The first sample contains 80 of the 9,348 reports with political contributions, and the second contains 80 of the 19,841 reports listing no contributions. Each sample contains 40 reports from the year-end 2014 filing period and 40 reports from the midyear 2015 filing period. These samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the reports without contributions to within a 95 percent confidence interval of plus or minus 9.6 percentage points or fewer. Although our sample of LD-203 reports was not designed to detect differences over time, we conducted tests of significance for changes from 2010 to 2015 and found no statistically significant differences after adjusting for multiple comparisons. While the results provide some confidence that apparent fluctuations in our results across years are likely attributable to sampling error, the inability to detect significant differences may also be related to the nature of our sample, which was relatively small and designed only for cross-sectional analysis. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission (FEC) political contribution database. We consulted with staff at FEC responsible for administering the database and determined that the data are sufficiently reliable for our purposes. We compared the FEC-reportable contributions on the LD-203 reports with information in the FEC database. 
The verification process required text and pattern matching procedures, so we used professional judgment when assessing whether an individual listed in the FEC database is the same individual who filed an LD-203. For contributions reported in the FEC database but not on the LD-203 report, we asked the lobbyists or organizations to explain why the contribution was not listed on the LD-203 report or to provide documentation of those contributions. As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist’s LD-203 report. We did not estimate the percentage of other, non-FEC political contributions that were omitted because they tend to constitute a small minority of all listed contributions and cannot be verified against an external source. To identify challenges to compliance, we used a structured web-based survey to obtain the views of the 77 different lobbying firms included in our sample on any challenges to compliance. The number of different lobbying firms, 77, is less than our sample of 80 reports because some lobbying firms had more than one LD-2 report included in our sample. We calculated responses based on the number of different lobbying firms that we contacted rather than the number of interviews. Prior to our calculations, we removed the duplicate lobbying firms based on the most recent date of their responses. For those cases with the same response date, the decision rule was to keep the case with the smallest assigned case identification number. To obtain their views, we asked them to rate their ease in complying with the LD-2 disclosure requirements using a scale of “very easy,” “somewhat easy,” “somewhat difficult,” or “very difficult.” In addition, using the same scale, we asked them to rate the ease of understanding the terms associated with LD-2 reporting requirements. 
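The deduplication rule described above (keep each firm's most recent response, breaking ties by the smallest case identification number) is a simple sort-and-keep-first pass. A minimal sketch, with hypothetical field names:

```python
def dedupe_firms(responses):
    """Keep one response per firm: the most recent, and on a tied
    response date the one with the smallest case identification number."""
    # Sort so the record to keep comes first for each firm:
    # newest response date first; on ties, smallest case ID first
    # (negating case_id under reverse=True yields ascending case IDs).
    ordered = sorted(responses,
                     key=lambda r: (r["date"], -r["case_id"]),
                     reverse=True)
    seen, kept = set(), []
    for r in ordered:
        if r["firm"] not in seen:
            seen.add(r["firm"])
            kept.append(r)
    return kept
```

Dates here are assumed to be ISO-formatted strings, which sort chronologically.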
To describe the resources and authorities available to the U.S. Attorney’s Office for the District of Columbia (USAO) and its efforts to improve its LDA enforcement, we interviewed USAO officials. We obtained information on the capabilities of the system officials established to track and report compliance trends and referrals, and on other practices established to focus resources on LDA enforcement. USAO provided us with reports from the tracking system on the number and status of referrals and chronically noncompliant lobbyists and lobbying firms. The mandate does not require us to identify lobbyists who failed to register and report in accordance with LDA requirements, or to determine, for those lobbyists who did register and report, whether all lobbying activity or contributions were disclosed. Therefore, these issues were outside the scope of our audit. We conducted this performance audit from May 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The random sample of lobbying disclosure reports we selected was based on unique combinations of House ID and lobbyist and client names (see table 7). See table 8 for a list of the lobbyists and lobbying firms from our random sample of lobbying contribution reports with contributions. See table 9 for a list of the lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions. In addition to the contact named above, Clifton G. Douglas, Jr. (Assistant Director), Shirley Jones (Assistant General Counsel), and Katherine Wulff (analyst-in-charge) supervised the development of this report. 
James Ashley, Amy Bowser, Steven Flint, Kathleen Jones, Amanda Miller, Anna Maria Ortiz, Colleen Taylor, Stewart Small, and Robert Robinson made key contributions to this report. Assisting with lobbyist file reviews were Angeline Bickner, Brett Caloia, Michelle Duren, Christopher Falcone, Jennifer Felder, Joseph Fread, Lauren Friedman, Samantha Hsieh, Jennifer Kamara, Jessica Lewis, Alan Rozzi, Shelley Rao, and Edith Yuh.

The LDA, as amended, requires lobbyists to file quarterly lobbying disclosure reports and semiannual reports on certain political contributions. The law also requires that GAO annually audit lobbyists' compliance with the LDA. GAO's objectives were to (1) determine the extent to which lobbyists can demonstrate compliance with disclosure requirements, (2) identify challenges to compliance that lobbyists report, and (3) describe the resources and authorities available to USAO in its role in enforcing LDA compliance, and the efforts USAO has made to improve enforcement. This is GAO's ninth report under the mandate. GAO reviewed a stratified random sample of 80 quarterly disclosure LD-2 reports filed for the third and fourth quarters of 2014 and the first and second quarters of 2015. GAO also reviewed two random samples totaling 160 LD-203 reports from year-end 2014 and midyear 2015. This methodology allowed GAO to generalize to the population of 45,565 disclosure reports with $5,000 or more in lobbying activity and 29,189 reports of federal political campaign contributions. GAO met with officials from USAO to obtain status updates on its efforts to focus resources on lobbyists who fail to comply. GAO provided a draft of this report to the Attorney General for review and comment. The Department of Justice provided updated data, which GAO incorporated into the report. GAO is not making any recommendations in this report. 
For the 2015 reporting period, most lobbyists provided documentation for key elements of their disclosure reports to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA). For lobbying disclosure (LD-2) reports filed during the third and fourth quarters of 2014 and the first and second quarters of 2015, GAO estimates that 88 percent of lobbyists filed initial LD-2 reports as required for new lobbying registrations (lobbyists are required to file LD-2 reports for the quarter in which they first register); the figure below describes the filing process and enforcement; 93 percent could provide documentation for income and expenses, but on 31 percent of these LD-2 reports lobbyists did not correctly follow the guidance to round to the nearest $10,000; and 85 percent filed year-end 2014 LD-203 reports as required. These findings are generally consistent with prior reports GAO issued for the 2010 through 2014 reporting periods. As in prior reports, some lobbyists were still unclear about the need to disclose certain covered positions, such as paid congressional internships or certain executive agency positions. GAO estimates that 21 percent of all LD-2 reports may not have properly disclosed one or more previously held covered positions. However, over the past several years of reporting on lobbying disclosure, GAO has found that most lobbyists in the sample rated the terms associated with LD-2 reporting as “very easy” or “somewhat easy” to understand with regard to meeting their reporting requirements. The U.S. Attorney's Office for the District of Columbia (USAO) stated that it has sufficient resources and authority to enforce compliance with the LDA. USAO continued its efforts to bring lobbyists into compliance by prompting them to file reports or applying civil penalties. In August 2015, USAO finalized a $125,000 settlement with the Carmen Group, the largest civil penalty settlement for noncompliance. |
Internal Revenue Code (IRC) Section 501(c) establishes 27 categories of tax-exempt organizations. The largest number of such organizations falls under Section 501(c)(3), which recognizes charitable organizations. Generally, charities pay no income taxes on contributions received, but they can be taxed on income generated from unrelated business activities. These charities and related parties may be subject to several additional IRS excise taxes and penalties for certain actions, such as not filing a required tax return. Generally, taxpayers may deduct the amount of any contributions to charities from their taxable income. By 2000, IRS had recognized 1.35 million tax-exempt organizations under Section 501(c), of which 820,000 (60 percent) were charities. Social welfare, labor, and business leagues accounted for 280,000 (21 percent) of the tax-exempt organizations. The remaining organizations (about 19 percent) were exempt under other Section 501(c) categories. At the end of 1999, the assets of Section 501(c)(3) organizations approached $1.2 trillion and their annual revenues approached $720 billion. The term charitable, as defined in the regulations that underlie IRC Section 501(c)(3), includes assisting the poor, the distressed, or the underprivileged; advancing religion; advancing education or science; erecting or maintaining public buildings, monuments, or works; lessening neighborhood tensions; eliminating prejudice and discrimination; defending human and civil rights; and combating community deterioration and juvenile delinquency. An organization must apply for IRS recognition as a tax-exempt charity that strives to meet one or more of these purposes. In general, a charity is to serve broad public interests, rather than specific private interests. Generally, public charities are required to file annual information returns with IRS that are also available to the public. The larger charities file Form 990, Return of Organization Exempt from Income Tax. 
Smaller charities—with gross receipts of less than $100,000 and total assets of less than $250,000—are allowed to file an abbreviated Form 990-EZ. The smallest charities, with less than $25,000 in gross receipts, and certain other types of organizations, such as churches and certain other religious organizations, are not required to file. The Form 990 is to be filed within about 5 months of the end of the charity’s accounting year, with extensions available. Form 990 has 105 line items on 6 pages, as well as 45 pages of instructions. The data on various finances and activities provide a basis for reviewing whether the organization continues to meet the requirements for tax exemption. The form also has two schedules: Schedule A and Schedule B. Schedule A covers several areas, including compensation of employees and independent contractors earning over $50,000 annually; lobbying activities; sources of revenue; and relationships with noncharitable exempt organizations, such as social welfare organizations. Schedule B is to be filed by certain charities that receive contributions of $5,000 or more from one or more donors. Charities may be required to file other forms in specific situations. Appendix I describes the Form 990. IRS and various stakeholders—such as the states and “charity watchdogs”—oversee charitable operations to protect the public interest, in part by reviewing the Forms 990. Certain charities, including those receiving federal and private grants, obtain independent financial audits. To the extent that such audit information is available in conjunction with Forms 990, those doing the oversight have more information on the financial status of the charities, and individuals can make more informed choices about donations to specific charities. Recognizing the importance of public oversight and a “free market” in which charities compete for donations, Congress expanded public disclosure of and access to the Form 990. 
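The filing thresholds described above amount to a simple decision rule. A sketch using the dollar thresholds stated in the text (which applied at the time of this report; current IRS thresholds differ, and exemption categories such as churches are omitted):

```python
def form_990_filing(gross_receipts, total_assets):
    """Which annual information return a public charity files,
    based on the thresholds described in the text."""
    if gross_receipts < 25_000:
        # The smallest charities are not required to file.
        return "no filing required"
    if gross_receipts < 100_000 and total_assets < 250_000:
        # Smaller charities may file the abbreviated return.
        return "Form 990-EZ"
    return "Form 990"
```

Note that both the receipts and assets tests must be met for the Form 990-EZ; a small-receipts charity with large assets still files the full Form 990.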
Such oversight is important to help support charities, inform donors about how their money is spent on a charitable purpose, and stem potential abuses. Our objectives in this review were to analyze the adequacy of (1) publicly reported Form 990 data on charity spending in facilitating public oversight of charities, (2) IRS’s oversight activities for charities, and (3) IRS’s data sharing with state agencies that oversee charities. For the spending data reported by charities, we interviewed IRS officials and experts (e.g., AICPA, the Urban Institute) to learn about charities’ reporting of expense data on the Form 990 and about independent financial audits. We reviewed studies such as those done by the Urban Institute, academicians, and the Chronicle of Philanthropy to better understand the expense data. We analyzed expense data reported by charities on the Form 990. IRS’s Statistics of Income (SOI) Division had data available on charity expenses, but those data only covered up to 1998 and did not include data on “joint-cost allocations” (e.g., allocating selected expenses between education and fundraising). To obtain at least 2 years of joint-cost data, we purchased filing year 1998 and 1999 data from the Urban Institute, which contracts with IRS to digitize Form 990 data for the full population of charities that filed Form 990. For some large charities, primarily hospitals, expense data by line item of the Form 990 were not available to the Urban Institute. Thus, only joint-cost data and aggregate data for expenses, assets, and revenues for 1999 are presented throughout our report. Because several sources of data were used, data are presented by filing, tax, and fiscal years in this report. For IRS oversight of charities, we talked with responsible IRS officials to identify the oversight processes used when charities apply for recognition of their tax-exempt status and when IRS examines Forms 990 filed by charities. 
For these types of oversight, we reviewed documentation on IRS’s processes and criteria used to review applications and examined Forms 990. We also analyzed related data for fiscal years 1996 through 2001. For applications, such IRS data included the number and types of applications received and their dispositions. For examinations, such data focused on the number and types of examinations and their results. In addition, we contacted other federal agencies, such as the Federal Trade Commission, to understand the types of oversight of charities that they conducted and the extent to which they coordinated that oversight with IRS. Appendix V discusses our selection of the agencies and our work. For IRS data sharing with states, we interviewed IRS officials and reviewed IRS documents. We did the same at the National Association of State Charity Officials (NASCO), which represents 38 states that oversee charities to protect public interests. We participated in an October 9, 2001, annual NASCO conference. At the conference, we asked state officials about their oversight and coordination with IRS or others. We talked with Treasury, IRS, and state officials about the tradeoffs of changing the law to allow IRS to share oversight data (e.g., examination results) with state charity officials. We also reviewed related studies and articles. For all three objectives, we collected documents from and talked with officials at various organizations. We talked with officials at the Joint Committee on Taxation about its reports in 2000 on disclosure of tax data on charities and on public, IRS, and state oversight. On the basis of referrals from IRS and NASCO, we talked with and collected documents from officials at the Council on Foundations, the Independent Sector, the GuideStar project at Philanthropic Research Inc., the Direct Marketing Association, and others that were knowledgeable about charity data, oversight, or fundraising. 
We reviewed documents from and talked with officials at three watchdog groups—the Better Business Bureau Wise Giving Alliance, Charity Navigator, and the American Institute of Philanthropy—that oversee charities. We also asked for comments on short sections or summaries of the draft report from the organizations that provided data or perspectives on those sections. We made technical changes to the report where appropriate after receiving their comments. For example, we did not use the 1999 detailed expense data from the Urban Institute after receiving its comments. The Urban Institute did not have the necessary data to resolve certain discrepancies in the detailed data (e.g., reported line item amounts not equaling the reported aggregate amount for expenses) before we issued the report. As a result, we deleted analyses of the detailed expense data for 1999 that had been in the draft report. We conducted our work in Washington, D.C., from June 2001 through March 2002, in accordance with generally accepted government auditing standards. We provided a draft of this report to IRS for review and comment. IRS’s and Treasury’s comments are in appendices VI and VII, respectively. Although disclosure of charity spending data can facilitate public oversight, caution in interpreting the data is warranted. No measures are available on the accuracy of the expense data, and substantial discretion in allocating the expenses makes use of the data problematic in comparing charities. Given such data limitations, public oversight of charities cannot rely solely on the expense data reported on the Form 990. A key potential use of data on charities’ spending is to show what portion is spent on charitable purposes through program services (i.e., efficiency). 
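One discrepancy noted above, line-item expense amounts that do not sum to the reported aggregate total, is a mechanical consistency check. A minimal sketch, with hypothetical field names and amounts:

```python
def expense_discrepancy(line_items, reported_total, tolerance=0):
    """Return the gap between the sum of itemized expenses and the
    reported aggregate total; zero (within tolerance) means consistent."""
    gap = sum(line_items.values()) - reported_total
    return gap if abs(gap) > tolerance else 0

# Hypothetical filing: itemized lines fall $50,000 short of the
# reported total, the kind of discrepancy described in the text.
filing = {"program services": 870_000, "fundraising": 60_000,
          "management": 70_000}
gap = expense_discrepancy(filing, 1_050_000)
```

A nonzero gap flags a filing for follow-up rather than proving an error, since rounding or reporting conventions could also explain small differences.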
In aggregate, the data show that from 1994 through 1998, charities allocated, on average, 87 percent of their spending to charitable program services and the remainder to fundraising and general management, suggesting a high level of spending efficiency, as shown in figure 1. These percentages did not vary much by size of the charity. Although Form 990 expense data are a principal source to support donors’ informed judgments about whether to support a charity, the accuracy of the expense data has not been measured. At the same time, however, IRS officials and watchdog groups have expressed concerns about potential or actual inaccuracy in Form 990 expense data. Because efficiency is a criterion that donors may use in selecting among charities, charities have an incentive to report their expenses in a manner that makes them appear to be efficient. IRS has discovered instances in which charity fundraising expenses have been underreported because charities have “netted” such expenses against the funds raised. According to IRS, fundraising expenses include fees paid to professional fundraisers as well as in-house expenses (e.g., salaries) for fundraising. For example, a charity might contract with a professional fundraiser to raise donations. The fundraiser might raise $250,000, charge the charity a fee of $150,000, and give the charity the remaining $100,000. When reporting to IRS, the charity “nets fundraising expenses” by reporting the $100,000 as a direct public contribution and does not report the $150,000 retained by the professional fundraiser as a fee. Such reporting does not comply with IRS instructions, under which the charity should report the full amount raised ($250,000) as the direct public contribution and the fee retained by the fundraiser ($150,000) on line item 30 of the Form 990.
As with netting of fundraising expenses, IRS has found that some charities have misreported professional fundraising fees as “other” expenses, but has not measured the extent to which charities do this. In these cases, a charity would report professional fundraising fees on line item 43 of the Form 990, along with other expenses, rather than on line item 30 for such fees. IRS requires charities to itemize expenses on 22 different line items on Form 990; line item 43 is reserved for “other” expenses that do not fit the other 21 line items, and IRS expressly prohibits reporting professional fundraising or other fees there. Available data do not show the extent to which charities may fail to properly itemize expenses such as professional fundraising fees, but “other” expenses represent a significant portion of all reported expenses. Our analysis showed that for 1994 through 1998, on average, 26 percent of all expenses were reported as “other” expenses, as shown in figure 2. Despite not knowing the extent of misreporting, IRS has been sufficiently concerned that it has taken steps to better ensure charities properly report their expenses, especially for fundraising. Regarding netting of fundraising fees, IRS clarified its reporting instructions in 2001 and publicized the changes. Regarding reporting fundraising (and “other”) fees on the designated Form 990 line item rather than on the “other” expense line item, IRS believed its instructions were clear, but has reiterated them in its Continuing Professional Educational text for fiscal year 2002 and in training for its examiners. IRS makes this text available to tax practitioners and the public to inform them about the proper application of tax laws and regulations. IRS is instructing its examiners during fiscal year 2002 to check whether fundraising is being properly reported and to impose penalties where appropriate.
IRS plans to convene a taskforce to consider what projects should be undertaken involving fundraising and Form 990 reporting, but the details have not yet been determined. Within the charitable community, various organizations have been concerned about the accuracy of charitable expense reporting, with concerns often focusing on fundraising expenses. A 1999 Urban Institute study of Form 990 expense data found that 59 percent of 58,127 charities that received public donations either reported zero fundraising expenses or left this line item blank on the Form 990. Using the same criteria as the Urban Institute, our analysis of the Form 990 data from 1994 through 1998 found the number, on average, to be 64 percent, as shown in figure 3. We did similar analyses for all charities, regardless of whether they received public donations. From 1994 to 1998, 69 percent of all charities reported either no fundraising expenses or left this line item blank (line item 15) on the Form 990. We further analyzed how many charities reported no fees paid to professional fundraisers on the Form 990 from 1994 through 1998. On average, over 93 percent of all charities reported either no fees paid to professional fundraisers or left this line item blank (line item 30) on the Form 990. The Urban Institute expressed surprise that so many charities would report no fundraising expenses, but acknowledged that several factors could account for low fundraising expenses. For instance, it noted that the smaller the amount of funds raised, the less likely charities may be to incur fundraising expenses. However, the Urban Institute did not indicate the amount that could be raised without incurring fundraising expenses. Thus, it would not be surprising for some charities, such as small ones or newer ones, to have little or no fundraising expenses. The Urban Institute also notes that charities that raise revenues through “special events and activities” (Form 990, line item 9c) may legitimately report little or no fundraising expenses.
When we accounted for those reporting special event expenses among those represented in figure 3, we found that, on average for 1994-1998, 34.8 percent of all remaining charities that received contributions did not report fundraising expenses. In addition, various articles have discussed problems in charities’ reporting of fundraising expenses. For example, a May 2000 article in the Chronicle of Philanthropy discussed how some charities leave the “public in the dark” by not reporting fundraising expenses. The article discussed how some charities in three states reported no fundraising expenses on the Form 990, although state records indicated that they had such expenses. Charities have discretion in determining how to charge expenses to program services as well as allocating expenses among the Form 990 functional categories for charitable program services, general management, and fundraising. The differences in the methods used can result in two charities with similar activities allocating their expenses differently among the functional expense categories on the Form 990. Figure 4 shows the three functional expense categories and the related lines for specific expenses. Although the three expense categories differ, their boundaries overlap. Fundraising activities may be mixed with program services, especially when a charity provides education related to its charitable purpose in a fundraising solicitation. Similarly, general management expenses may be mixed with the delivery of program services and fundraising. Charity employees may, for instance, spend time managing the daily support of the charity, spend time participating in raising funds, and spend time providing program services. 
We analyzed the portions of total program service expenses (line item 13 of Form 990) during 1994 through 1998 that came from (1) grants and specific assistance (line items 22 and 23) that can only be charged to program service expenses to meet the charitable purpose or (2) expenses such as salaries, travel, etc. (line items 24 through 43) that can be charged to the program service, fundraising, and general management categories. It is important to recognize that expenses such as salaries and travel can be charged to program services when they are incurred in connection with meeting the charitable purpose. Table 1 shows the analysis of the types of expenses comprising program service expenses. Charities can use different methods (which are not reported on the Form 990) for charging and allocating expenses. Such differences can affect comparisons across charities. Thus, charity watchdog groups, organizational donors, or others may draw inappropriate conclusions when comparing the expenses charged to program services or allocated across the three functional categories. Neither IRS nor the professional accounting accrediting bodies require or prohibit particular allocation methods. In general, any method for charging or allocating expenses should be reasonable, logical, and consistently applied given the circumstances and facts. Organizations that provide funds or grants to charities are likely to provide guidance or requirements for charging and allocating expenses and to require independent financial audits. Among the methods for allocating joint fundraising costs, the three methods mentioned routinely by accounting professionals and in accounting texts are: (1) the physical units method, (2) the relative direct cost method, and (3) the stand-alone joint-cost allocation method. Each method can produce a different financial “portrait,” and no one method is appropriate for all circumstances.
The method used determines the allocation of expenses among fundraising, program services and general management. For example, suppose a charity contracts with an external fundraiser to conduct a mail solicitation in which the letter combines program service (education) and fundraising text over 100 lines. The fundraiser’s $1 million fee covers expenses for identifying potential donors and creating and mailing the letter. The charity must devise a way to equitably allocate the fundraiser’s expenses. One way is to use the physical units method of allocation. The physical units method uses identifiable, measurable, and calculable physical aspects of fundraising instruments to allocate expenses. In this example, the physical aspects are the number of text lines in the solicitation letter. If 10 lines of text covered fundraising and 90 lines covered program services, an allocation based on counting lines would allow the charity to allocate $100,000 to fundraising and $900,000 to program services. However, this method of allocation may be inappropriate if most of the expenses incurred actually related to the use of the donor mailing list—the value of which relates more to fundraising than to program services. The stand-alone joint-cost-allocation method might provide a more reasonable allocation in this circumstance. If this method were used, and if $750,000 of the fundraiser’s fee covered the value of its mailing list, at least $750,000 of the $1 million in total costs would be for fundraising and no more than $250,000 would count for program services. Thus, the method used can materially influence the allocation of a charity’s expenses. In March 1998, the AICPA published Statement of Position 98-2 (SOP 98-2) “Accounting for Costs of Activities of Not–for–Profit Organizations and State and Local Governmental Entities That Include Fundraising” to provide guidance on the allocation of joint activities, such as those when program services and fundraising are involved. 
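The two allocations worked through in the mail-solicitation example can be sketched as follows. These are simplified formulas under the assumptions of that example; real allocations follow accounting guidance and judgment, and the function names are illustrative:

```python
def physical_units(total_cost, fundraising_units, program_units):
    """Allocate a joint cost in proportion to a measurable physical aspect,
    here the number of letter text lines devoted to each purpose."""
    fundraising = total_cost * fundraising_units / (fundraising_units + program_units)
    return fundraising, total_cost - fundraising

def stand_alone(total_cost, fundraising_only_cost):
    """Treat costs attributable solely to fundraising (e.g., the value of a
    donor mailing list) as a floor on the fundraising allocation."""
    return fundraising_only_cost, total_cost - fundraising_only_cost

# The example: a $1 million fundraiser fee; 10 of the letter's 100 text lines
# are fundraising; $750,000 of the fee covers the donor mailing list.
print(physical_units(1_000_000, 10, 90))  # (100000.0, 900000.0)
print(stand_alone(1_000_000, 750_000))    # (750000, 250000)
```

Each function returns a (fundraising, program services) pair, making concrete how the choice of method moves $650,000 of the same fee between the two categories.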
SOP 98-2 was intended to provide consistent, clear, and detailed guidance for reporting joint activities. SOP 98-2 sets three criteria (purpose, audience, and content) that must be met to allocate such joint-cost expenses to the Form 990 program services or management and general categories, rather than to fundraising. The three SOP 98-2 criteria are:

Purpose: the charity should show that fundraising activities will help meet a program service or general management purpose.

Audience: the charity should show that donors are selected to meet a program service or a general management purpose rather than only to contribute funds.

Content: the charity should show that the content of the joint activity supports the charity’s program service or general management purpose.

According to AICPA, if any of these criteria are not met, then all expenses should be allocated to fundraising. All three criteria require a call to action in order to allocate expenses to program services. A call to action makes general requests for involvement with an activity or cause, regardless of whether the individual contributes funding to those requesting the involvement. Absent a call to action, SOP 98-2 recognizes the activity as fundraising, and no expenses should be allocated to program services. IRS added a checkbox to the 2001 Form 990 to indicate whether SOP 98-2 had been used to account for joint costs. IRS noted that the purpose was to facilitate the understanding of those reading the Form 990. IRS also is asking for comments on whether the use of SOP 98-2 should be required for certain filers (such as those above a specified amount of assets) to ensure greater uniformity in expense allocations and better comparison of fundraising expenses across charities. According to an AICPA official, charitable organizations may use this guidance, regardless of their accounting method.
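The all-or-nothing character of the SOP 98-2 rule (if any criterion fails, the entire joint cost goes to fundraising) can be sketched as follows; the function and parameter names are illustrative, not part of the standard:

```python
def sop_98_2_allocation(total_cost, purpose_met, audience_met, content_met,
                        program_share):
    """Allocate a joint cost under the SOP 98-2 criteria: unless the purpose,
    audience, and content tests all pass, everything is fundraising."""
    if not (purpose_met and audience_met and content_met):
        return {"fundraising": total_cost, "program_services": 0}
    program = total_cost * program_share
    return {"fundraising": total_cost - program, "program_services": program}

# A mailing that fails the audience test: the full $1 million is fundraising,
# regardless of the 90 percent program allocation the charity proposed.
print(sop_98_2_allocation(1_000_000, purpose_met=True, audience_met=False,
                          content_met=True, program_share=0.9))
```

The sketch highlights the standard's design choice: passing two of three criteria earns no partial credit, which removes the incentive to dress up a pure solicitation with incidental program content.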
Caution in relying on Form 990 expense data for public oversight of charities is also warranted because spending efficiency can vary for a number of reasons. Charity watchdog groups, GuideStar, the Urban Institute, and others have spoken against reliance on spending efficiency ratios as the sole measure of a charity’s worthiness. The expense data and related efficiency ratios (such as program service expenses compared with all expenses) do not provide much perspective on other attributes of charities, such as how well they accomplish their charitable purpose, regardless of the amounts spent. Charity watchdogs have evolved to help monitor charities and enhance public oversight. In general, within the resources they have, these watchdog groups use the Form 990 data and other available data to analyze aspects of selected charities. These watchdog groups analyze spending efficiency ratios, but note limitations that could mislead the public on which charities are and are not doing well. Spending efficiency fluctuates with factors such as the popularity of the cause, age of the charity, and type of charitable activities. For example, an established, well-known charity may spend more money on fundraising than a newer charity. A charity also may have wide swings in its spending for charitable purposes if, for instance, those purposes are affected by sudden changes from events such as natural disasters. Also, a charity saving funds to build a facility to serve its charitable purpose may have no program service expenses until adequate funds are raised to begin the project. When evaluating a charity, the public also considers how well a charity accomplishes its charitable purposes, which is not measured by spending data. However, measuring accomplishments and comparing charities on that basis is difficult to do according to the Independent Sector, the Urban Institute, and others (such as academicians).
Given the wide diversity in the charity community, no standard rules have been devised to guide charities in reporting accomplishments. The Form 990 has a section that asks charities to report what was accomplished with the program service expenses; IRS’s instructions allow discretion on reporting those accomplishments. Other standards that the charity watchdog groups have suggested for evaluating charities include the manner in which the charity governs itself, raises funds, informs the public, accounts for its finances, prepares budgets and financial documents, and has independent audits or reviews. Each of these standards can be viewed as contributing information that can be useful for evaluating charities. Determining the adequacy of IRS’s oversight of charities is difficult, in part, because IRS has little data on the compliance of charities, and because IRS generally has not established results-oriented goals for its oversight of charities against which to measure progress. Concerns also arise with the adequacy of oversight because IRS has not kept up with growth in the charitable sector. IRS staffing for overseeing tax-exempt organizations fell between 1996 and 2001 while at the same time the number of new applications for tax exemption and the number of Forms 990 filed increased. By shifting staff, IRS has continued to process new applications and, as a consequence, has generally decreased its examinations of existing charities. IRS has recognized that its oversight of charities and other tax-exempt organizations is limited and is formulating plans to measure tax-exempt organizations’ compliance levels and improve its oversight activities. Because IRS does not have an accurate picture of charities’ compliance and it is unclear how its plans would yield such data, IRS lacks key information for making decisions on how much charity oversight is needed, the amount of resources needed for the oversight, and how to improve its use of available resources. 
In addition, IRS’s plans for improving its oversight activities generally do not define what results it intends to achieve in overseeing charities. IRS oversight of charities primarily consists of two activities. First, IRS reviews and approves applications filed by charities for the recognition of tax-exempt status. Second, IRS annually examines a small percentage of the annual returns filed by charities. Through these activities, IRS tries to ensure that charities merit the recognition of a tax-exempt status as well as the retention of it. In carrying out these two functions, IRS generally is not responsible for taking adverse actions or even suggesting improvements in a charity’s operations based on evidence about how well a charity spends its funds or meets its charitable purpose. Rather, IRS focuses on other issues related to the tax exemption for charities. For instance, in reviewing applications for recognition as tax-exempt charities, IRS focuses on whether applicants plan to undertake activities that meet the criteria for tax-exempt status and that adhere to standards such as restrictions on private benefits accruing to charity officials. Similarly, when examining charities’ Forms 990, IRS checks for compliance with specific requirements applicable to charities, such as meeting a recognized charitable purpose. On the basis of discussions with IRS and state officials, oversight of charities’ efficiency and effectiveness is more likely to be accomplished through the public’s decisions about which charities to support and through states’ efforts to ensure that charities do not abuse their charitable status. As for oversight of applications, IRS revenue agents review the applications of organizations seeking tax-exempt status as charities. If an application is approved, IRS provides a letter to the charity approving its tax-exempt status. 
Comparing fiscal years 1998 through 2001, the number of applications for charity status submitted to IRS has increased from about 54,000 to about 59,000, or about 9 percent, as shown in table 2. Over all 4 years, the number of applications denied stayed below 100. (See app. III for a description of the application process.) In examinations, IRS seeks to ensure that charities meet federal tax requirements. In examining a return, the revenue agent requests and reviews information from a charity to check the accuracy of items on the return and to verify that a charity is operating to meet a charitable purpose. As shown in table 3, comparing fiscal years 1996 through 2001, the number of annual returns (Forms 990) increased from about 228,000 to about 286,000 (25 percent) while the number examined dropped from 1,450 to 1,237 (15 percent). Thus, IRS examined a smaller percentage of returns and charities, dropping by 2001 to 0.43 percent and 0.29 percent, respectively. (See app. IV for a description of the examination process.) In addition, examinations are taking longer. (See app. IV for the results.) For fiscal years 1996 through 2001, the time required to examine charity returns nearly tripled when a charity agreed to changes proposed by IRS and increased about seven times when a charity disagreed. IRS officials did not know the reasons for such increases in time and were concerned. Given the concern, IRS has started analyzing ways to better select the most noncompliant returns for examination. The date for completing the analysis was not set, as of March 2002. At least three related reasons help explain the decline in the number of charity examinations. First, IRS has had to adjust the level of charity oversight given many other priorities involving all other types of taxpayers. Second, the resources devoted to oversight dropped for fiscal years 1996 through 2001. Last, IRS moved revenue agents from doing examinations to processing the increased application workload.
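The workload and coverage rates cited from table 3 can be reproduced from the rounded counts; a quick check (the counts are the report's approximations):

```python
# Forms 990 filed and examined, fiscal years 1996 and 2001 (table 3).
returns_filed = {1996: 228_000, 2001: 286_000}
examined = {1996: 1_450, 2001: 1_237}

growth = (returns_filed[2001] - returns_filed[1996]) / returns_filed[1996]
decline = (examined[1996] - examined[2001]) / examined[1996]
coverage_2001 = examined[2001] / returns_filed[2001]

print(f"Forms 990 filed grew about {growth:.0%}")         # 25%
print(f"examinations fell about {decline:.0%}")           # 15%
print(f"2001 examination coverage: {coverage_2001:.2%}")  # 0.43%
```

Expressed this way, the squeeze is plain: a growing return population divided by a shrinking examination count yields a coverage rate of well under half of one percent.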
IRS has many other priorities as the agency that collects the proper amount of revenue to fund the programs that Congress and the executive branch have approved. For example, to deal with millions of individual and business taxpayers, IRS has established four operating divisions organized around the type of taxpayer—Wage and Investment, Small Business/Self-Employed, Large and Mid-Size Business, and Tax-Exempt and Government Entities (TE/GE). TE/GE deals with charities, many other types of exempt organizations, pension plans, Indian tribal governments, and other types of government entities. Each of these activities competes for staffing and funding. Furthermore, although TE/GE has the major charity oversight role among federal agencies, its oversight is limited. The staffing devoted to IRS’s exempt organization function and oversight has declined in recent years. IRS was unable to provide the staffing levels for reviewing charity applications and examining the Form 990. However, from fiscal years 1996 through 2001, total staffing for the exempt function has fallen from 958 to 811, or about 15 percent. For application and examination oversight of all exempt organizations, the staffing fell from 609 to 546, or about 10 percent. A 1997 IRS memorandum pointed out that the staffing level for the entire organization that is now TE/GE had been essentially flat since its creation in 1974 (2,075 in 1974 to 2,123 in 1997) while the workload in terms of the size of the sectors that it regulated had doubled. IRS also shifted revenue agents from doing examinations to help process the increasing application workload. Because all applications must be processed and oversight staff had not increased, IRS moved agents from doing examinations. In fiscal year 2001, IRS took steps to hire about 40 additional staff to help process applications, which would allow revenue agents to return to doing examinations. 
Given increased workload and declining resources, IRS officials are developing an approach to better gauge the extent and types of compliance issues for tax-exempt organizations and to improve their oversight strategies. However, the current approach would not provide information on compliance problems of the full charitable community. Nor does it define the overall results IRS hopes to achieve in a manner that would facilitate strategic investments of resources and that can be used to assess IRS’s overall progress in improving its oversight strategies. IRS’s new approach is to study segments of the tax-exempt community, that is, market segments, to better understand existing compliance issues. Through these studies, IRS intends to develop indicators of compliance for 35 selected market segments and analyze ways to address compliance problems. According to IRS, the results of the market segment studies are intended to help refine the selection criteria for identifying noncompliant returns for examination as well as help identify other strategies to improve compliance such as additional guidance, clearer instructions, or correspondence on apparent noncompliance. Understanding compliance problems and measuring compliance among the various types of charities also is intended to help determine where to focus resources. As of February 2002, about half of the selected segments dealt with a wide variety of tax-exempt organizations that were not charities and about half dealt with various types of charities such as those for hospitals, colleges, and churches. It was not clear how IRS would use the results to get a picture of compliance across all charities, even though charities account for most of the applications and Forms 990. 
Without an understanding of the extent and nature of compliance problems across all charities, IRS will have difficulty in making data-driven decisions about the strategies for improving oversight as well as the level of oversight and resources needed. IRS plans to start work on these market segments as resources and data allow. Due to resource limitations, IRS believes that at the present rate the completion of all planned studies will take until fiscal year 2008. During fiscal year 2002, IRS plans to work on six segments. IRS officials said that they selected segments based on experience and judgment. As part of IRS’s overall performance management system, TE/GE has developed a plan to guide its operations. That plan covers TE/GE’s responsibilities, including those for charities. The plan specifies, for instance, the number of employees to be assigned to each activity, the number of applications and examinations IRS expects to process, how long such activities take, and the satisfaction of tax-exempt organizations with IRS’s services and its employees. For fiscal years 2003 and 2004, TE/GE has proposed staffing increases in two initiatives for known concerns. Although the proposed increases do not focus on charities, their implementation might assist IRS’s charitable oversight. One initiative calls for adding 20 staff to work on improving the quality and quantity of IRS data and studying uses of non-IRS databases. The second initiative requests 30 additional staff to enhance IRS’s examination presence in the exempt organization community. IRS officials said both initiatives would require similar increases in staff during future years. Although TE/GE’s plan and initiatives provide an understanding of what IRS intends to do with its staff and other resources, IRS has not identified what longer-range results it intends to achieve for charities.
The planning principles in the Government Performance and Results Act (GPRA) and incorporated into IRS’s Strategic Planning, Budgeting, and Performance Management process call for agencies to define the measurable results they are attempting to achieve, generally over several years. This approach is intended to ensure that agencies have thought through how the activities and initiatives they are undertaking are likely to add up to a meaningful result that their programs are intended to accomplish. The TE/GE plan does not, for instance, provide goals for improving the compliance levels of tax-exempt organizations as a whole or for charities in particular. The plan also does not discuss the basis for IRS’s judgment that the proposed initiatives are the best ways to improve compliance. IRS officials said that longer-range planning could be useful. They noted, however, that their ability to undertake significant initiatives for charities must be considered in the context of IRS’s overall responsibilities. Furthermore, they said that establishing a link between their activities and changes in charities’ compliance is challenging and this makes planning to achieve certain types of results difficult. Many agencies face this challenge. However, GPRA’s and IRS’s planning requirements suggest that the process of focusing on intended results, while often challenging, promotes strategic and disciplined management decisions that will be more likely to be effective than planning that is not results-oriented. IRS data sharing with the states to facilitate state oversight of charities is limited in two ways. First, IRS does not have a process to proactively share data that it is allowed to provide to states, such as data on the denial or revocation of tax-exempt status. Second, federal law generally prohibits IRS from sharing data with states about its reviews of applications for recognition of charities and its examinations of existing charities. 
State officials believe that accessing IRS’s oversight data would help them allocate resources in overseeing charities. Because federal taxpayer data are subject to statutory confidentiality protections, a number of issues, such as security procedures to protect federal tax data, would need to be considered if data sharing were expanded. Many states oversee charities to protect the public. Although overlap exists, IRS and state oversight differ. IRS focuses on whether the charity meets tax-exempt requirements and complies with federal laws, such as those governing the use of funds for a charitable purpose rather than private gain. States have an interest in whether charitable fundraising is fraudulent and whether the charity is meeting the charitable purpose for which it was created. The majority of states oversee charities through their attorneys general and charity offices. State attorneys general usually have broad power to regulate charities in their states. These states monitor charities for compliance with statutory and common-law standards and have the option of correcting noncompliance through the courts. Furthermore, these states usually regulate the solicitation of funds for charitable purposes. Some states require professional fundraisers to register and file information on specific fundraising contracts. IRS does not have a process to proactively share oversight data with states as permitted by federal law and cannot share much of its data because of legal prohibitions. IRC Sections 6103 and 6104 govern the types of oversight data that IRS can share with states for purposes of overseeing charities. In general, to protect taxpayer confidentiality, Section 6103 prevents IRS from publicly disclosing tax return data for all types of taxpayers, unless explicitly allowed. For charities, this means IRS cannot share most data about examinations. The general restriction against disclosure stems primarily from a right to privacy.
Congress only granted the explicit exceptions when it determined that the need for the disclosure of the data outweighed the right to privacy. Criminal and civil sanctions apply for the unauthorized disclosure or inspection of federal tax returns and return data. Although tax-exempt organizations also may assert a right to privacy for interactions with IRS, Congress has developed different disclosure rules and has been expanding the levels of public disclosure. The rationale for disclosure has been that the public supports tax-exempt organizations through direct donations and the tax benefits accruing from their tax-exempt status and, thus, has a strong interest in information about the organizations. Section 6104 exists to provide more disclosure about tax-exempt organizations. For charities, it provides some exceptions to Section 6103 prohibitions so that states can request access to certain IRS data, such as details on revocations of tax-exempt status, to support state oversight of charities. Table 4 shows the types of IRS oversight data that states can and cannot get. The second column indicates IRS data that are available to states, the third column indicates IRS data that state charity officials can request through Section 6104 under certain conditions, and the fourth column indicates IRS data that cannot be shared due to Section 6103 prohibitions. As table 4 shows, the appropriate state officials can obtain details about the final denials of applications, final revocations of tax-exempt status, and notices of a tax deficiency under Section 507, or Chapter 41 or 42. However, IRS does not have a process to regularly share such data. Under Section 6104, IRS cannot share these details with the appropriate state officials unless they formally request these details and disclose their intent to use the data to fulfill their official functions under state charity law.
IRS is to ensure that each request is reasonable, relevant, and necessary before releasing the data. Appropriate state officials may ask IRS for details such as examination results, work papers, reports, filed statements, application documents, and other information on determinations. State charity officials can have access to such data if they prove that they are appropriate state officials, as evidenced by a letter from the state attorney general describing the functions and authority of such officials with enough facts for IRS to determine that they can access the data. State charity officials would like regular access to such data. NASCO officials—state officials in 38 states who oversee charities—said that quicker access to information on denied applications and revocations helps stop charities from continuing suspicious activity. If such data are not provided quickly, the charity can dispose of assets or change its operations. Knowing the details about the revocation can also help states track individuals who try to re-establish similar suspicious operations in other states. IRS and the state officials said that data on denials, revocations, and notices are worth sharing. However, from fiscal years 1996 through 2001, few charity applications were denied compared to the over 50,000 applications submitted annually (see table 2), and few examinations resulted in revocations or notices of deficiency compared to over 1,000 examinations closed annually (see table 3), as shown in table 5. (Table 5 reports, by fiscal year, the number of denied applications, revoked charities, and notices of tax deficiency; data on notices were not available.) IRS could not provide data on the number of notices of tax deficiency sent for taxes assessed under Section 507 as well as Chapters 41 and 42. However, the number of these notices would be less than the number of examinations that closed with a proposed assessment of any type of tax or a penalty.
For fiscal years 1996 through 2001, about 140 examinations, on average, closed annually with some type of tax or penalty assessment against charities. However, IRS lacked a proactive process to regularly inform state officials of steps to be taken to request the data that are available under Section 6104. NASCO officials said many states are not clear about the rules for making these requests and about the types of details that are available. Such requests used to be sent to the district office director. IRS’s reorganization has abolished this position, and IRS has not developed a new process due to its focus on other priorities related to its reorganization. IRS plans to develop a new process. IRS officials said in February 2002 that they started compiling a list of state officials who can receive IRS data on charities. They said that a barrier has been having enough staff to develop the process and negotiate agreements with each state on requesting, transmitting, protecting, and overseeing use of the data. Afterwards, managing this data-sharing process could pose additional resource challenges, depending on how the process would work. Officials said that a proposed system could be ready to discuss with states during the spring of 2002. Although IRS and the states have a common interest in overseeing charities, Section 6103 generally prohibits IRS from sharing data with state agencies about actions, such as examinations of charities. These prohibitions apply even to IRS examinations that result when a state agency refers concerns about specific charities to IRS. Neither can IRS disclose actions on pending or withdrawn applications. State officials who oversee charities believe that Section 6103 hampers their efforts to identify charities that defraud the public or otherwise operate improperly. 
They offered only anecdotal information on the extent to which such charities exist, but they believed that even a few abusive charities should be pursued because the betrayal of public trust could adversely affect the support given to all charities. At the annual NASCO conference in October 2001, state charity officials offered favorable comments about IRS's outreach and education efforts, but pointed to problems created by IRS not being able to share data on pending and closed examinations and on pending and withdrawn applications. State officials were particularly concerned about not being able to get feedback on IRS actions on a state referral because of Section 6103 prohibitions. IRS officials said that state referrals are productive to examine, but IRS can only confirm receipt of the referral and whether the tax exemption was revoked. Other concerns expressed by state charity officials about IRS not being allowed to share its oversight data follow:
- States might waste resources investigating a charity that IRS is examining or has found to be compliant (at least in those areas that IRS examined).
- States might be unaware of questionable charities for a long time, which allows those charities to continue operating before the states know to pursue them.
- States might miss opportunities to build better cases against charities when they observe suspicious activities.
State officials say that often they cannot fully use their powers to protect the public because of the lack of readily available data. State officials said that when they learn of a suspicious activity, they need information quickly. The officials said that they could head off a suspicious activity by asserting their state powers, noting that usually the threat of action is enough. However, questionable charities tend to move from state to state.
State officials cited a need to compare IRS application data with state charity registration information to quickly deal with registrants that have a questionable past. State charity officials saw an advantage in greater data sharing because IRS does not have the authority to correct the fraudulent or suspicious charitable activities that states can correct. IRS can only deny or revoke the charity's tax-exempt status. As a tax administration agency, IRS is interested in the tax-exempt status of a charity and whether it should continue. IRS generally does not pursue charity-related fraud. If others (such as states) have proved fraud, that proof can justify denial or revocation of a tax exemption. State charity officials provided examples of how expanded sharing of examination and application data would help the states. Having examination results would allow the states to better monitor the operations of specific charities, determine their compliance with state laws, and correct any noncompliance earlier. Having data about pending and withdrawn applications could help states to be aware of potential problems and be more proactive in protecting the public. According to state officials, during the months that an application is pending, a so-called charity may not be operating to serve charitable purposes, and the public may incorrectly assume that it is tax-exempt and that donations are tax deductible. Treasury officials noted, however, that sharing examination data could be misleading. For example, the examination may involve issues unrelated to the organization's tax-exempt status. In addition, sharing data about pending applications could result in disclosure of taxpayer information that is entitled to the confidentiality protections of Section 6103 if the taxpayer is not ultimately determined to be tax-exempt.
IRS and Treasury officials said that while they see value in the principle of sharing data with states, certain issues need to be considered in determining the scope of data sharing and the protection that should govern such sharing. In addition, the officials noted that both the IRS and states would incur various costs and burdens that need to be balanced in judging which data should be shared, what benefits would be obtained, and which means of sharing data would be the most appropriate. The officials said that they were formulating a position on legislative proposals to expand access, with appropriate taxpayer protections. Treasury officials said they supported a provision included in draft legislation (H.R. 3991, Taxpayer Protection and IRS Accountability Act of 2002) that would permit IRS to share more data with state officials to assist them in administering state laws regulating charitable organizations. Issues raised by IRS regarding any legislative proposals included the following:
- Any disclosure of IRS data raises the issues of how the data are used and who uses the data. Understanding these issues is needed to make informed judgments about how best to share the data and to protect against improper disclosures.
- Granting access to pending applications and examination data raises more challenges than does access to final application and examination data. These challenges relate in part to concerns about privacy and due process rights. To the extent IRS shares data on issues for which it has not completed its work, use of the data by states would need to recognize this significant limitation. Influencing this issue is the fact that the interests of IRS as a tax administrator do not fully converge with the interests of state charity officials, who are not tax officials.
- The proper legal vehicle for expanding access to IRS's application and examination data would need to be considered.
Two basic legal provisions are Section 6103 (which prevents disclosure) and Section 6104 (which enhances disclosure). Other legislative provisions might be worth considering, depending on the types of data that state charity officials want to access and their intended uses. The legal vehicle chosen would also affect the types and rigors of the controls created to protect the data from improper disclosure and misuse. For example, Section 6103 imposes rigorous requirements on the receipt, storage, and use of the data in all forms (e.g., paper versus electronic) to protect IRS data as well as imposes various training and oversight requirements to ensure conformance to the protections. The controls and protections under Section 6104 generally are considered to be less rigorous. The level of protection that should be provided for data shared with states is an important issue. Considering the previously mentioned issues, IRS and states would need to be aware of the resources required to develop and implement agreements on how the data are to be used, who can use the data, and how the data are to be transmitted, maintained, and protected. In some cases, the resources in terms of staffing, training, space, and computer capabilities could be significant. With assets approaching $1.2 trillion and annual revenues approaching $720 billion, charities represent a substantial presence in American society. The approximately 250,000 active charities range from very small, local efforts to very large, sophisticated hospitals and universities. The public—including the donors, media, and watchdog groups—IRS, and the states oversee charities. In this oversight framework, IRS has a limited role in considering how well charities are spending funds or accomplishing charitable purposes. Instead, the framework envisions a “free market” in which charities compete for donations, in part, based on such spending or accomplishments.
Key to the proper functioning of this marketplace is the availability of reliable data, such as Form 990 data, that donors can use to make informed choices about which charities merit their contributions. However, due to suspected but unmeasured inaccuracy in some charities’ reporting of their expenses and to the range of discretion that charities have in charging and allocating expenses, Form 990 expense data alone are not adequate for public oversight of charities and should be used with caution. Recently, IRS officials have taken steps to address incidents of inaccurate expense reporting and have sought comments on one set of guidance for allocating expenses. IRS’s investment in reviewing charity applications and examining charity returns has not kept pace with the growth in the number of applications and returns. More informed decisions about the resources to devote to this investment could be made if IRS had a better understanding of the type and extent of compliance problems in the charitable community as well as a clear plan for how IRS would use its resources to achieve certain results, such as specific improvements in the compliance of charities. Neither of these is currently available, even though IRS has initiatives to increase its staffing for all exempt organizations. IRS’s plan for improving compliance will not provide data on the extent of the compliance problems and the level of oversight needed across the charitable community. Nor does the plan identify results-oriented goals and strategies, resources needed to accomplish such goals, and measures to gauge its progress toward goal accomplishment. However, given the size of the charitable community, it is unrealistic to expect that IRS would ever review more than a minor portion of charities. Furthermore, certain issues related to charities, such as the extent to which their fundraising activities may be misleading, can be addressed by state officials. 
Thus, helping to make state oversight of charities as effective as possible would enhance oversight of the charity community. State officials who oversee charities believe that data IRS can provide, but that often does not flow to them, as well as certain data that IRS is prohibited from sharing due to federal protections for taxpayers' confidentiality, would make their oversight more effective. IRS and Treasury officials recently have started to discuss whether and how to share more data with the states. However, the timing and likely outcomes of these discussions are not yet clear. Also, the specific types of data that would be useful, the best means of sharing that data, the resources needed, and the taxpayer protections that would apply to the data need to be worked out between federal and state officials. Furthermore, any proposal to change the law that restricts disclosure of certain IRS data to the states would require Congress and Treasury to make policy decisions about the balance between the privacy rights of charities and the public's interest in more disclosure. To improve oversight by the public, IRS, and the states, we recommend that the commissioner of Internal Revenue ensure (either through the planned market segment studies or other means) that IRS:
- obtains reliable data on compliance issues (including expense reporting) for the full charity community;
- develops results-oriented goals, strategies (including levels of staffing and other resources to accomplish the goals), and measures to gauge progress in accomplishing those goals when overseeing the charity community; and
- develops, in consultation with state charity officials, a procedure to regularly share IRS data with states as allowed by federal tax law.
In addition, we recommend that IRS, in concert with the Department of the Treasury and state charity officials, identify the specific types of IRS data that may be useful for enhancing state charity officials' oversight of charities, the appropriate mechanisms for sharing the data, the resources needed, and the types and levels of protections to be provided to prevent improper disclosure and misuse. IRS and Treasury should continue drafting specific legislation to expand state access to selected IRS oversight data and ensure adequate levels of protection for any data that would be shared. We obtained comments on a draft of this report from IRS. (See app. VI.) IRS agreed with the findings in the report and said that the agency would assist in tax administration related to charities and identified actions underway or planned to address our recommendations. We support IRS's timely actions on our recommendations and believe they are generally responsive to our recommendations. As IRS moves forward with its plans, however, we encourage the commissioner of Internal Revenue to ensure that the actions IRS takes will cover all aspects of our recommendations. For instance, although IRS's comments indicate that IRS will develop goals and measures for its oversight of charities, the comments do not mention identifying the levels of staffing and other resources needed to accomplish such goals. Although generally agreeing with our findings and indicating that actions were planned or being taken in relation to our recommendations, IRS had certain reservations about the report. First, IRS said that our draft report implied that IRS was not looking at the extent to which charities are properly reporting expenses. Also, according to IRS, the agency has established a task force to develop examination projects on reporting accuracy, and examiners have been instructed to review this issue. Our draft report did recognize these actions.
However, the task force had not yet begun to develop projects at the time we did our work, and examiners look at only a very small portion of charities annually. Thus, neither of these actions indicated that IRS would be obtaining reliable overall measures of how accurately charities report their expense data. We did modify our Results in Brief discussion to more explicitly recognize that IRS is beginning to consider how to assess charities’ expense reporting. IRS also said that the draft report did not sufficiently recognize the breadth of IRS’s responsibilities related to tax-exempt organizations. In the draft, we recognized IRS’s other responsibilities both in providing statistics on the portion that charities represent of all tax-exempt organizations and by explicitly noting the range of responsibilities that fall under TE/GE and that those responsibilities compete for staffing and funding with IRS’s efforts to oversee charities. IRS’s letter provided some additional data demonstrating the breadth of IRS’s responsibilities. Finally, IRS did not believe that our draft report provided sufficient recognition of the strategic planning process followed by TE/GE. IRS was, in part, concerned that the draft report indicated it was not clear that IRS’s current plans would yield an accurate picture of charities’ compliance. IRS said TE/GE’s long-term plan to do market segment studies will provide reliable information on the compliance level of various segments of the charitable community. Our draft report described IRS’s strategic planning efforts related to its oversight of charities and thus did recognize that some planning has been done. However, as shown in the comments, those plans do not yet include such things as results-oriented goals or performance measures to assess IRS’s progress. 
Furthermore, although we believe the market segment studies should provide useful information on charities’ compliance, as discussed earlier, we did not see sufficient evidence to conclude that reliable data on charities’ expense reporting would be generated. In addition, at the time of our report, IRS was requesting public comments on whether it should define additional market segments to study, thus raising uncertainty over whether the currently planned work would cover all charities to yield adequate data on their compliance issues. The Department of the Treasury supports the overall goal of increasing the information IRS can share with state officials who oversee charities. (See app. VII.) Treasury also recognized that appropriate safeguards must be in place to protect the confidentiality of taxpayer information. Treasury officials said that they intend to pursue developing appropriate legislation to expand state access to IRS’s oversight data. Treasury’s plans are consistent with our recommendations. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. We will then send copies of this report to the secretary of the Treasury; the commissioner of Internal Revenue; the director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. The report is also available on GAO’s home page at http://www.gao.gov. This report was prepared under the direction of Tom Short. Other major contributors were Rodney Hobbs, Daniel Mesler, Demian Moore, and Oliver Walker. If you have any questions about this report, please contact Tom Short or me at (202) 512-9110. Tax-exempt organizations recognized by the Internal Revenue Service (IRS) are required to annually file Form 990 or Form 990-EZ (Return of Organization Exempt From Income Tax) if their annual gross receipts are normally more than $25,000. 
Organizations that have less than $100,000 in gross receipts and total end-of-year assets of less than $250,000 may use Form 990-EZ. This appendix describes the Form 990 and the information requested and provides a copy of the Form 990. The Form 990 is used primarily as an IRS information return and a public information document. The Form 990 relies on self-reported information from filers. Most of the 27 types of exempt organizations that fall under Section 501(c) use this form along with Section 527 political entities and Section 4947(a)(1) nonexempt charitable trusts. Section 6033(a)(1) of the Internal Revenue Code (IRC) grants the secretary of the Treasury the power to use any forms or regulations to obtain financial information from 501(c)(3) organizations, such as gross income, receipts, and disbursements. In addition, this section requires all 501(c) organizations to file an annual information return. Form 990 is due by the 15th day of the 5th month after the close of the organization's accounting period (taxable year). The Form 990 and any additional schedules can facilitate the public's ability to scrutinize the activities of tax-exempt entities. For many years, Section 6104(b) permitted an interested person to request a copy of Form 990 from IRS. However, Congress created Section 6104(d)(1)(B) to allow interested persons to obtain the Form 990 from the tax-exempt organizations. Furthermore, IRS and state regulatory bodies use the financial information listed on the Form 990 to help monitor activities of charities, including their spending. The states have had significant input in developing the Form 990 and are working with IRS to implement refinements. The first Form 990 covered tax-year 1941. This 2-page form included only three yes/no questions, an income statement, and a balance sheet, although some line items required attached schedules.
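The filing thresholds described above (file an annual return if gross receipts are normally more than $25,000; use Form 990-EZ if gross receipts are under $100,000 and end-of-year assets under $250,000) can be sketched as a simple eligibility check. This is our illustration, not an IRS tool: the function name is ours, and it ignores the "normally" averaging rule and special cases such as private foundations, which file Form 990-PF.

```python
def annual_return_required(gross_receipts: float, end_of_year_assets: float) -> str:
    """Pick the annual information return under the thresholds stated above.

    Simplifications (ours): no averaging rule for the $25,000 floor and no
    handling of private foundations (Form 990-PF).
    """
    if gross_receipts <= 25_000:
        # Gross receipts normally $25,000 or less: no annual return required.
        return "no annual return required"
    if gross_receipts < 100_000 and end_of_year_assets < 250_000:
        # Under both the receipts and assets ceilings: short form allowed.
        return "Form 990-EZ"
    return "Form 990"
```

For example, a charity with $60,000 in gross receipts qualifies for Form 990-EZ only while its end-of-year assets stay below $250,000.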
By 1947, the form (including instructions) had reached 4 pages, although some portions applied only to certain types of organizations. The required financial information was more extensive and incorporated a line item on the income statement for the total compensation of all officers and a $3,000 reporting threshold for contributions made to the organization. IRS also included a checkbox for affiliated organizations that file group returns. By 2001, the Form 990 had 6 pages (10 parts with 105 line items), 2 schedules (A and B) covering 13 pages, and a 45-page instruction book. Figure 5 shows the 6 pages comprising the Form 990. The tables in this appendix describe charities by expenses, revenues, assets, joint-cost reporting, and direct assistance payments. The data are from Parts I and II of the Form 990 (excluding Forms 990-EZ and 990-PF). Data for tables in this appendix represent all Form 990 filers for filing years 1994-1998. IRS’s Statistics of Income (SOI) Division provided data for filing years 1994-1998. SOI data represent a weighted sample based on a stratified random sample of all returns filed by charities in a filing year. Filing year 1998 and 1999 data were purchased from the Urban Institute. Urban Institute data represent the actual population of charity filers. Except for the section on joint-cost reporting and for 1999 totals reported in table 7, for consistency, we used SOI data exclusively for analyzing filing years 1994-1998. SOI provides data on charities by selecting a sample of each year’s Form 990 data and keypunching the sample data into a database. Filing year 1998 was the most recent year for which SOI data were available when we did our work. Data are classified into strata defined by amount of total assets. The sampling rate by stratum ranges from 100 percent for organizations with assets of $10 million or more to 0.45 percent for the smallest asset class in both 1994 and 1995. 
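A stratified design like SOI's can be sketched as follows: each sampled return represents 1 / (sampling rate) returns in its stratum, and population totals are estimated by summing these weights. The strata names, dollar amounts, and the two-stratum setup below are illustrative only; the 100 percent and 0.45 percent rates are the endpoints quoted in the text, not SOI's full design.

```python
# Inverse-probability weighting for a stratified sample (illustrative).
sampling_rate = {
    "assets $10M or more": 1.0,      # sampled at 100 percent
    "smallest asset class": 0.0045,  # sampled at 0.45 percent
}

sampled_returns = [
    # (asset stratum, reported total expenses) -- made-up figures
    ("assets $10M or more", 4_000_000),
    ("assets $10M or more", 9_500_000),
    ("smallest asset class", 40_000),
    ("smallest asset class", 25_000),
]

# Each sampled return counts for 1 / rate filers in its stratum.
estimated_filers = sum(1 / sampling_rate[s] for s, _ in sampled_returns)
estimated_expenses = sum(e / sampling_rate[s] for s, e in sampled_returns)
```

Because the small-asset stratum is sampled so thinly, each of its sampled returns stands in for roughly 222 filers, which is why sampling error (the coefficients of variation discussed below) matters most for the smaller asset classes.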
From these samples, SOI calculates a weighted total number of Form 990 filers for each filing year. Because the SOI data are based on samples, coefficients of variation should be taken into account. Sampling sizes and the corresponding coefficients of variation for selected yearly aggregate categories are presented in table 6. Because coefficients of variation are associated only with aggregate data, and because weighting factors are associated with asset data only, we do not present data other than that which are available at the aggregate level and by asset category. Financial category totals for individual filers include revenues, expenses, and assets. The yearly aggregate totals for these categories are presented in table 7. Each financial category provides information about the resources or operations of charities. Revenue amounts help describe how successful a charity is in raising funds. Asset amounts describe the resources owned by charities that support their missions. Total expenses, in relation to total revenues, help show whether the charity is generating surpluses, deficits, or breaking even from its operations during the period. Total expenses found on line item 17 of the Form 990 are composed of three “functional” expense categories and “payments to affiliates.” The three types of functional expenses are reported in separate columns of Form 990 Part II as program services, management and general, and fundraising. Twenty-two specific object class expenses (line items 22-43) further break down these categories. “Other expenses” (line item 43) is for reporting expenses not captured by line items 22-42. IRS instructions prohibit reporting professional fundraising fees, accounting fees, or legal fees on line item 43, and require other expenses to be itemized on line item 43. Tables 8 and 9 describe the three functional expense categories and the “other” expense category.
Functional expenses in Part II are further broken down by 22 specific object classes, including one class for “other” expenses. Each object class—except grants and allocations, specific assistance to individuals, and benefits paid to or for members—may be allocated across the three functional expense categories. Table 10 shows these expenses. “Assistance” describes the amount paid out by a charity in support of its charitable purpose. We define assistance as the sum of line items 22 (grants and allocations) and 23 (specific assistance to individuals), which can only be allocated to the program service expense category. Table 11 describes assistance paid out by charities in 1994-1998. The National Taxonomy of Exempt Entities (NTEE) classification system was developed by the National Center for Charitable Statistics (NCCS). IRS uses NTEE codes to categorize charities by 26 major group (A-Z) classifications that are aggregated into 10 broad categories. Because of the difficulties noted above with analyzing the data at a more precise level, we do not present expense, revenue, or asset data on charities at the NTEE level. Tables 12 and 13 describe the NTEE categories. Table 14 presents descriptive data on assets and expenses by size of charity, with size defined by the amount of reported assets. Note that in all years, the largest category includes less than 5 percent of all organizations, but accounts for more than 70 percent of all expenses and more than 80 percent of all assets. IRS recognizes that charities may include a non-fundraising purpose in their solicitation materials (usually an educational component) and directs charities to disaggregate the expenses of a combined fundraising and education solicitation through joint-cost reporting. IRS forbids reporting of fundraising expenses as program service expenses.
Charities that included in program service expenses (Column B—Part II, Form 990) any joint-costs from a combined educational campaign and fundraising solicitation must disclose in a separate section how the total joint-costs of all such combined activities were reported in Part II. The disaggregation of joint-cost expenses is by functional expense category—program services, management and general, and fundraising—and not by amount allocated to educational versus fundraising purposes. Since joint-cost reporting can refer to a combined educational and fundraising campaign, all charities reporting joint-costs would be expected to also report some fundraising expenses. That is not the case, as noted in the last column of table 15. In 1998 and 1999, respectively, 7.7 percent and 8.9 percent of charities reporting joint-costs did not report any fundraising expenses. Any organization that is able to satisfy the requirements defined by Congress in Section 501(a) of the Internal Revenue Code (IRC) is entitled to exemption from taxation. To obtain recognition of its tax-exempt status, a charity must apply to the Internal Revenue Service (IRS). This appendix describes the steps in the application process for charities to be recognized as tax-exempt organizations. The steps necessary to obtain recognition of exemption from taxation as a charity involve the submission of written information to IRS. First, the entity makes a decision to meet a charitable purpose as a charitable tax-exempt organization, under the guidelines in Section 501(c)(3) of the IRC.
According to the Treasury regulations that underlie section 501(c)(3), “charitable” purposes include: relief of the poor, the distressed, or the underprivileged; advancement of education or science; advancement of religion; erection or maintenance of public buildings, monuments, or works; lessening the burdens of government; lessening of neighborhood tensions; elimination of prejudice and discrimination; defense of human and civil rights secured by law; and combating community deterioration and juvenile delinquency. After deciding on its charitable purpose or purposes, an entity must submit its request for recognition by completing forms for recognition of exemption from taxation. All charitable organizations are required to complete the forms for recognition with three exceptions. The forms are:
- Form 1023 (Application for Recognition of Exemption from Taxation).
- Form 8718 (User Fee for Exempt-Organization Determination Letter Request).
- Form SS-4 (Application for Employer Identification Number).
- Form 872-C, which is used for organizations wanting an advance ruling.
In addition to the application forms, the organization is required to submit the following documents:
- Organizational documents containing dissolution and limiting clauses, which limit the organization's purposes to one or more of the exempt purposes in Section 501(c)(3).
- A conformed copy of the organization's articles of incorporation.
- Four years of financial statement information (projected or actual income and expenses).
- The signature of an organizational officer or trustee who is authorized to sign, or of another person authorized by power of attorney to sign and send the forms.
- The appropriate application fee ($150 for organizations with gross receipts of less than $10,000 and $500 for organizations with higher gross receipts and for those seeking group exemptions).
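The two-tier user fee quoted above reduces to a one-line rule, sketched here for illustration. The function name and parameters are ours, and the sketch assumes the fee schedule exactly as stated in this report (a $150 fee under $10,000 in gross receipts; $500 otherwise and for group exemptions).

```python
def application_fee(gross_receipts: float, group_exemption: bool = False) -> int:
    """Fee schedule as quoted in this report (Form 8718 user fee): $150 for
    organizations with gross receipts under $10,000; $500 for organizations
    with higher gross receipts and for those seeking group exemptions."""
    if group_exemption or gross_receipts >= 10_000:
        return 500
    return 150
```

So a small start-up charity projecting $5,000 in receipts would pay $150, while the same organization seeking a group exemption would pay $500 regardless of its receipts.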
To help applicants complete the application forms, the IRS suggests the following texts as guides:
- Publication 557 (Tax-Exempt Status for Your Organization).
- Publication 598 (Tax on Unrelated Business Income of Exempt Organizations).
- Publication 578 (Tax Information for Private Foundations and Foundation Managers).
Proper preparation of an application for recognition of tax-exempt status involves more than responding to the questions. An applicant must fully describe the activities in which it expects to engage, including the standards, criteria, or other means for carrying out the activities, the sources of receipts, and the nature of expenditures. A mere restatement of purposes or a statement that proposed activities will further the organization's purposes does not satisfy this requirement. The Exempt Organizations Rulings and Agreements function is in charge of reviewing applications for exemption from taxation. The primary determinations office is located in Cincinnati, Ohio. In addition, staff in six field offices do determinations work. These staff are determination specialists, most of whom are revenue agents, and as of January 2002 accounted for 207 full-time equivalent positions. Revenue agents review applications in order to approve or deny recognition of exemption from taxation, also known as “making a determination.” The Cincinnati office has 112 revenue agents among 10 determination groups. The other 95 revenue agents are divided among the six field offices. The Washington, D.C., office has 50 tax law specialists who also do determinations work. Once applications are received at the Cincinnati office, a decision is made on where the applications should be sent.
Depending on the information contained in the application or other circumstances, the application will be: (1) processed by the Cincinnati office (e.g., applications for group rulings, foreign organizations, and cases to be expedited); (2) sent to any of six field offices; or (3) sent to the national office in Washington, D.C. (e.g., when published precedents are lacking). Applications are assigned to offices on the basis of a formula that assumes agents will close, on average, five applications per week. Before using the formula method to assign work, IRS assigned cases based on the number of agents in each office. The Cincinnati office assigned cases to other offices after estimating how much work could be done in Ohio. It stopped this practice because it could not control the backlog that occurred in other offices. Now, the office’s goal is to process all determinations in Cincinnati, except those that need to go elsewhere. A revenue agent reviews the application materials submitted by the organization to ascertain whether the organization’s purpose or purposes match those allowed for charities (a review known as a screening). All of the submitted documents should enable the revenue agent to conclude that the organization satisfied or failed to satisfy the particular IRC requirements for charities. IRS, generally supported by the courts, usually will refuse to recognize an organization’s tax-exempt status unless the organization submits sufficient information on its operations and finances. Generally, revenue agents use the relevant tax law as the basis for approving or denying a charity application. Agents compare the application material to the applicable IRC section to check for conformity. Discussions with other agents and managers are also incorporated. During a determination screening, the agents use the following tools: Title 26, Section 501, and other relevant sections of the Internal Revenue Code; the Determination Letter Program Procedures (Section 7.4.4 of the Internal Revenue Manual); the Handbook on Exempt Organizations; the Exempt Organizations Continuing Professional Education (CPE) material; and various revenue rulings and revenue procedures. A revenue agent, with the concurrence of the manager, can quickly determine that the application meets all of IRS’s criteria and can close the application on its merit. The application can be processed more rapidly if the articles of incorporation (or articles of organization) include a provision ensuring permanent dedication of assets to exempt purposes. Closures on merit can take as little as 10 days if all needed information is provided with the initial submission of the application materials. If the documentation does not allow the revenue agent to close the application on its merits, the application receives further review. This usually occurs when: (1) an application is incomplete, (2) the budget or financial information is inconsistent, or (3) the agent cannot conclude that the organization satisfied IRC requirements for charities. When an application is under further review, the revenue agent is required to request additional information from the applicant. When requesting information, the agent should “correctly determine the appropriate scope and depth of information required for making a proper determination.” IRS closes determination applications submitted by charities in several ways: approved; disapproved; withdrawn by the applicant; or closed because the fee was not remitted. If the exemption is granted, IRS issues a favorable determination letter (Letter 1045) to the charity. If the determination is a proposed denial of the tax exemption for any reason (e.g., the organization failed to establish the basis for the exemption), the revenue agent is required to notify the applicant of the proposed adverse action and to thoroughly explain the consequences and the applicant’s appeal rights. The organization can submit additional information to explain any discrepancies related to the adverse action.
The agent is to carefully review any new information and reconsider the proposed denial. After any denial is finalized, IRS is required to notify the appropriate state regulatory agency of the applicant’s denial (including failure to establish the exemption). After a favorable determination letter is sent, a charity can undertake additional activities that are consistent with section 501(c)(3) even if it did not mention them in its application. Each letter includes a paragraph stating that the charity should notify IRS of substantial changes in its operations. The purpose is to allow IRS to assess whether the changes affect exemption, private foundation status, unrelated business income tax, excise taxes, etc., and if so, whether IRS needs to begin an examination. According to an IRS official, IRS would like to be informed of the new activities; however, the law does not require charities to notify IRS of the changes. If a charity does not fully and accurately disclose its activities in the application or does not inform IRS of changes, the charity cannot rely on its determination letter to protect itself in the event of an IRS examination. If IRS determines in an examination that the new activities jeopardize exemption, revocation of exemption could be retroactive to the date the new activities were undertaken. In contrast, if the activities upon which a revocation is based were disclosed to IRS, the charity may qualify for relief under IRC section 7805(b), and any revocation or adverse action will be prospective only. However, if a charity wants IRS approval in advance of a change to protect itself against possible retroactive adverse action, it can request a private letter ruling from the national office in accordance with Revenue Procedure 2001-4. Organizations are notified that the determinations process may take up to 120 days. IRS has indicated that the average time to approve an application is currently 91 days.
Delays and backlogs can occur for reasons such as an application being incomplete or inaccurate, taxpayers raising new issues or submitting additional evidence after the proposed determination, or a determination letter containing a misspelling of the organization’s name or an irrelevant addendum. In situations involving disaster relief, emergency hardship programs, or other situations where time is of the essence, IRS’s procedures permit a charitable relief organization to request expedited handling when it applies or during the review of its application. The revenue agent is to fully consider the request, grant it if appropriate, and inform the applicant of the decision. The expedited request and the agent’s response should be documented in appropriate work papers. IRS has had expedited request procedures since 1994. The relief organizations created to address the September 11th tragedies received expedited application processing. From September 11, 2001, to March 20, 2002, IRS approved 262 applications for disaster relief organizations under expedited processing. Although the requirements for exemption were not waived, processing took approximately 7 days on average. IRS is planning follow-up reviews of all of these organizations, and of other charities where necessary, to determine whether they are complying with the requirements that govern tax-exempt charities. To measure the quality of its reviews of applications, IRS uses the Tax Exempt Quality Measurement System. This measurement is based on a sample of closed determination cases. It is not used to evaluate the quality of an employee’s performance; rather, the purpose is to measure and improve quality in making determinations on applications. An offshoot of the quality review process is to educate IRS staff involved in the determinations process by highlighting weaknesses that should be corrected. The six quality standards for determination cases deal with: completeness of the application prior to closing; timely processing; technical issues; whether the work papers support the conclusion; case administration; and customer relations/professionalism. The following describes IRS’s processes for examining returns filed by exempt organizations, including charities. The discussion follows IRS’s processes from selecting returns through reviewing the results of the examinations. In April 2000, the centralized examination management concept was adopted, and examination-related activities were centralized in Dallas to improve consistency, coordination, and use of resources. IRS uses keypunched information to begin the process of identifying returns for examination. All returns (Forms 990) are sent to the Ogden Service Center for processing. When returns are received, Ogden staff keypunch about 20 percent of the line items, such as the tax year, identifying information, and various other data such as program service revenue, contributions, and fundraising expenses. The keypunched information is transferred electronically to the Exempt Organizations Business Master File and, if a return is selected for examination, to the Audit Information Management System (AIMS), which is used to track the status of examinations. At the conclusion of the keypunching process, the return information is available to be queried by another automated system—the Returns Information and Classification System (RICS). RICS allows for searches of returns on the basis of a variety of criteria, including known compliance problems and the size, location, and type of exempt organization, such as charities. IRS uses a variety of ways to select returns for examination but relies primarily on two methods: analysis of automated IRS data on RICS and referrals from outside the examination group. IRS uses RICS to analyze the automated data. RICS applies the criteria selected by the Planning and Program Group to identify returns and line items for potential examination.
For example, RICS could be used to identify returns in which charities are reporting political expenditures; allocating expenses to reflect unrelated business income; reporting compensation and wages but not filing Form 941; or not filing Form 990-T as required. The Classification Unit is responsible for pulling the returns that meet the criteria, and RICS is used to select a random sample of returns. Returns identified by RICS are considered to be general casework, which includes 12 conditions identified as “likely to have issues” that will lead to a change in the tax computation or even revocation of a charity’s tax-exempt status. Examination of these returns is intended to be “limited scope,” addressing only the issue for which the return was selected. However, according to the Manager, EO Classification, revenue agents review the return to check for consistency with the basic exemption requirement for the charity. Another method IRS uses to select returns for examination is referrals, which have the highest priority among returns to be examined. IRS receives referrals from parties inside and outside IRS, including the general public, corporations, and private and public sector employees. All referrals are sent to Dallas, where IRS staff input information on each referral into a database that includes the name of the exempt organization, its address, employer identification number, name of the informant, and a sequential number. The database includes a paragraph summarizing the potential for an examination and the reason the referral-maker believes an examination is warranted. Returns Classification Specialists work referrals on a first-in, first-out basis and decide whether to send the referral for examination.
Specialists use their knowledge of the law and judgment to determine whether the information referred provides a basis for “a reasonable belief of noncompliance.” Afterwards, the database is updated to note whether the referral is sent for examination. Referrals that are viewed as “sensitive” require a second review. Sensitive referrals involve churches or media attention; are received from a Member of Congress, the White House, or from IRS in Washington, D.C.; or are otherwise considered political or sensitive. Beginning in 1999, the second review was required to be done by a three-person committee, which decides whether to initiate an examination. IRS also receives two other types of referrals. Future year referrals are received from determinations specialists (who review applications for tax-exempt status), who can request an examination in 2 or 3 years of charities recently granted tax-exempt status. The purpose of these examinations is to determine whether actual charitable activities conform to what was intended when the tax-exempt application was approved. For these referrals, the database is updated to reflect the year the future return is to be selected for examination. If a return has not yet been filed, the examination is deferred until the return is filed, and the future year portion of the referral database is so noted. In essence, the future year record acts as a suspense file, and referrals are later reviewed to determine if they should be sent to examination or given another suspense date. The other type of referral is a request for a collateral examination. All these referrals are received from the Small Business/Self-Employed Division (SB/SE). SB/SE requests these examinations of tax-exempt organizations in connection with its examinations of small businesses that are related to tax-exempt organizations.
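The criteria-based screening performed through RICS, as described above, can be illustrated with a simple filter over return records. The field names, record layout, and the three criteria encoded here are our own hypothetical examples for illustration; they are not the actual RICS data schema or IRS selection rules.

```python
# Illustrative sketch of criteria-based return selection, loosely modeled
# on the RICS screening described in the text. Field names and criteria
# are hypothetical examples, not the actual RICS schema or rules.

def flag_for_examination(ret: dict) -> list[str]:
    """Return the list of screening criteria a Form 990 record trips."""
    issues = []
    if ret.get("political_expenditures", 0) > 0:
        issues.append("political expenditures reported")
    if ret.get("wages_reported", 0) > 0 and not ret.get("form_941_filed", False):
        issues.append("wages reported but no Form 941 filed")
    if ret.get("unrelated_business_income", 0) > 0 and not ret.get("form_990t_filed", False):
        issues.append("unrelated business income but no Form 990-T filed")
    return issues

returns = [
    {"ein": "12-3456789", "wages_reported": 40_000, "form_941_filed": False},
    {"ein": "98-7654321", "political_expenditures": 0, "wages_reported": 0},
]
selected = [r["ein"] for r in returns if flag_for_examination(r)]
print(selected)  # only the first return is flagged
```

In practice such a screen would be one step in a pipeline; as the text notes, a random sample is then drawn from the returns that meet the criteria.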
To initiate an examination, such as when a group manager (who manages groups of revenue agents) determines that more examination work is needed, a request is sent by e-mail for a specific number of returns at specified revenue agent grade levels. Group managers decide how to distribute the requested number of returns by the specified grade levels in priority order on a first-in, first-out basis. For returns selected by RICS, an IRS employee orders the related returns from the Ogden Service Center, which usually takes 6 to 9 weeks. All examination cases are entered on an automated system, the Exempt Organizations Inventory Control System (EOICS), that is used to track their status. The examination is conducted to ensure compliance with the provisions of the IRC relating to qualification, reporting and disclosure, and the excise and income taxes related to tax-exempt organizations. A revenue agent starts by contacting an organization to request information to check compliance for specific issues or lines on the return (Form 990). The agent is to compare that information to the return for those issues as well as to verify whether the organization is operating within its stated purpose. The number of return issues being examined can vary. Examinations vary in scope, depending on the type of issues, the adequacy of the organization’s books and records, the existence of effective internal controls, and the size of the entity. Normally, revenue agents are expected to pursue the examination to the point at which they can resolve the specific issues that led to the examination and reasonably conclude that all items necessary for a proper determination of tax-exempt status have been considered. In general, when agents have completed examinations, the completed case files are provided to their group managers for review before the cases are closed.
In addition to the group manager’s review of the examination, IRS has a separate group dedicated to reviewing the quality of examinations of exempt organizations such as charities. Reviewers are independent of the examination group and are experienced revenue agents. A reviewer is responsible for measuring and reporting on the quality of the examination and for efforts to improve the work of the examination function. Examinations can be reviewed in two ways—special or mandatory review. In special reviews, the computer selects closed examinations randomly, so the reviews represent a statistically valid sample. The reviewer completes a check sheet that asks 57 questions about each closed examination. Most of the questions are to be answered yes or no, with yes being the preferred answer. However, some questions may not be applicable to each review. The questions address the examination quality standards and include topics such as the power of attorney requirements, the scope of the examination, and application of the law. The checklist also asks the reviewer to make an overall judgment on whether the action taken by the revenue agent was appropriate in meeting the examination quality standards. In contrast to special reviews, mandatory reviews are done while the examination is still open. Examinations that are required to be reviewed under mandatory review include those in which: (1) the exempt organization disagrees with a revenue agent’s decisions; (2) the group manager asks for the review to determine whether the actions taken by the agent were correct and appropriate; (3) the agent proposed revoking or modifying the tax-exempt status of a charity; (4) a final revocation for certain other tax-exempt organizations is made; or (5) technical advice was obtained. Like special reviews, the purpose of a mandatory review is to ensure the quality of cases and to provide quality assurance.
Mandatory reviewers use the same checklist used by special reviewers to review an examination. In addition, mandatory reviewers are to review whether the work papers adequately document the examination. Because the examinations are still open, mandatory reviewers can send them back to the examination group for additional work. If this is done, a memorandum is prepared that discusses the results of the review. The group is to decide if it agrees and to notify the mandatory reviewers of its decision. More broadly, trends are monitored, and if a theme is identified, a memorandum may be sent to the examination group on the findings or concerns. Table 17 provides data on the number of staff available to do examinations. We also reviewed the number of examination hours charged by IRS staff, as shown in table 18. Because Coordinated Examination Program (CEP) audits may run over several years and take more time, IRS officials suggested that we compare CEP and non-CEP examination hours for charities. Table 19 shows average hours for non-CEP examinations. Table 20 shows the examination hours per charity return reported for CEP examinations. IRS can revoke tax-exempt status for all charities. A revocation essentially removes the organization’s charter to operate as a tax-exempt entity; the organization would have to reapply for tax-exempt status and start the process over. Table 21 shows the number of and reasons for revocations for fiscal years 1996 through 2001. To determine the extent to which other federal agencies oversee charities and whether IRS coordinates its oversight of charities with those agencies, we contacted officials at the Federal Trade Commission (FTC), Federal Emergency Management Agency (FEMA), Federal Bureau of Investigation (FBI), United States Postal Inspection Service (USPIS), and Office of Personnel Management (OPM). This is not an exhaustive analysis of all federal agencies that work with charities. Charities are not specifically under the oversight authority of any single federal agency.
In addition, no agencies we spoke with reported ongoing coordination with IRS to identify fraudulent charities or to oversee general charity operations. In most cases, IRS would be contacted only if its expertise as a tax authority were needed in an investigation or to verify an organization’s tax-exempt status. The following summaries describe the charity oversight activities of the various federal agencies. Within its Economic Crimes Unit, the FBI’s goal is to reduce the amount of economic loss from national and international telemarketing fraud throughout the United States. Additionally, the mission of the FBI’s Governmental Fraud Program is to oversee the nationwide investigation of allegations of fraud related to federal government procurement, contracts, and federally funded programs. According to FBI officials, the FBI does not have a charity-specific investigation classification. An investigation may involve a charity, but it would likely be due to telemarketing fraud or mail fraud. Since the tragedies of September 11th, the FBI has been scrutinizing some charities for fraudulent activities related to terrorism. An FBI official said the FBI would contact IRS if its tax expertise were needed. FEMA’s mission is to reduce loss of life and property and protect critical infrastructure from all types of hazards through a comprehensive, risk-based emergency management program of mitigation, preparedness, response, and recovery. FEMA coordinates its disaster relief work through the National Voluntary Organizations Active in Disaster (NVOAD) organization. NVOAD members are 501(c)(3) organizations that are experienced in disaster relief work. FEMA sometimes works with larger, well-established organizations that are not NVOAD members, such as the United Way. According to a FEMA official, FEMA does not work with IRS to assess charities, but IRS has held two training sessions for FEMA on accounting for disaster donations.
In addition, FEMA has neither made referrals to IRS nor received specific information from IRS about fraudulent charities. FTC enforces federal antitrust and consumer protection laws. FTC attempts to stop actions that threaten consumers’ opportunities to exercise informed choice. Recently, under Section 1011 of the USA Patriot Act of 2001, “Crimes Against Charitable Americans,” FTC’s authority regarding telemarketing and consumer fraud abuse was broadened. FTC also is a member of an online watchdog site, Consumer Sentinel, which tracks consumer complaints and investigations relating to fraud in a database. FTC often acts on referrals from state attorneys general but does not coordinate its enforcement activities with IRS except for occasional work on criminal investigations. The USPIS is responsible for combating mail fraud; thus, fundraising solicitations conducted via the mail are under its authority. USPIS also participates in Consumer Sentinel. USPIS does not coordinate investigations of charities with IRS unless an organization’s tax-exempt status is being questioned or the tax expertise of IRS is needed. A USPIS official said USPIS would welcome referrals from IRS, as IRS is the appropriate agency to take the lead on charity issues. OPM oversees the annual Combined Federal Campaign (CFC). Charities are selected for the CFC through an application process. The selection criteria may be found at 5 CFR Part 950 or at www.opm.gov/cfc/html/regs.htm. The criteria include: submitting annual audits and annual reports and having a responsible governing board with no conflicts of interest (audits are required only if revenue exceeds a certain level); keeping fundraising and administrative costs to no more than 25 percent of total costs, or showing that they are “reasonable” with documentation providing appropriate justification; and receiving less than 80 percent of funding from government sources. National and international applicants are approved by OPM.
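The two quantitative criteria above (fundraising and administrative costs at or below 25 percent of total costs, and less than 80 percent of funding from government sources) can be sketched as a simple eligibility check. The function name, parameters, and the boolean shortcut for the “reasonable with justification” exception are our own simplification for illustration; see 5 CFR Part 950 for the actual rules.

```python
def cfc_cost_and_funding_check(fundraising_admin_costs: float,
                               total_costs: float,
                               government_funding: float,
                               total_funding: float,
                               documented_justification: bool = False) -> bool:
    """Sketch of two CFC selection criteria described in the text:
    fundraising and administrative costs no more than 25 percent of
    total costs (or "reasonable" with documented justification), and
    less than 80 percent of funding from government sources.
    Hypothetical helper -- not the full 5 CFR Part 950 test."""
    cost_ok = (fundraising_admin_costs / total_costs <= 0.25) or documented_justification
    funding_ok = government_funding / total_funding < 0.80
    return cost_ok and funding_ok

print(cfc_cost_and_funding_check(20, 100, 50, 100))   # True
print(cfc_cost_and_funding_check(30, 100, 50, 100))   # False: costs are 30 percent
```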
At the local level, the Local Federal Coordinating Committee makes the approvals. OPM’s inspector general conducts risk-based audits of regional CFCs to check that funds are being collected and disbursed according to CFC guidelines and according to the intent of donors. According to officials at the office of inspector general, OPM neither audits charities directly nor works with IRS to identify fraudulent charities.

The tremendous outpouring of charitable donations in response to September 11 has raised concerns about whether some charities are spending too much on fundraising and management and too little on the charitable purposes related to their tax-exempt status. GAO found that Form 990 expense data is inadequate for public oversight purposes because charities have considerable discretion in recording their expenses for fundraising, management, and charitable services. The Internal Revenue Service (IRS) lacks data on the type and extent of possible compliance issues among charities. Moreover, IRS oversight of charities suffers from a lack of results-oriented goals and strategies. Concerns have also been raised that IRS’s resources have not kept pace with the growth in the charitable sector, and some measures suggest that available resources may not be used as effectively as in the past. State officials consider the charity data IRS shares with them to be inadequate. IRS does not proactively share some data that states are permitted to receive, such as denials and revocations of charities’ tax-exempt status. Federal law prohibits sharing some data that state officials believe would be valuable, such as the status and results of examinations of charities’ returns.
Federal statutes and regulations collectively require agencies to establish an ethics program intended to preserve and promote public confidence in the integrity of federal officials through their self-reporting of potential conflicts of interest (financial disclosure), through knowledge of post-government employment restrictions (training), and through independent investigations of alleged wrongdoing. A key objective of an ethics program is to provide a formal and systematic means for agencies to prevent and detect ethics violations. The elements of a comprehensive ethics program include (1) a written policy of standards of ethical conduct and ethics guidance; (2) effective training and dissemination of information on ethical standards, procedures, and compliance; (3) monitoring to ensure the ethics program is followed; (4) periodically evaluating the effectiveness of the ethics program; and (5) levying disciplinary measures for misconduct and for failing to take reasonable steps to prevent or detect misconduct. The joint ethics regulation is DOD’s written policy establishing its ethics program. The ethics program emphasizes training and counseling to raise awareness of standards of ethical behavior and to prevent misconduct. DOD’s ethics training requirement includes educating employees about the procedures to follow when considering employment outside of DOD and the post-government employment restrictions that may apply, and informing employees of the resources that are available to them to address ethics questions and concerns. The training includes an initial briefing to introduce employees to ethics regulations, such as conflict-of-interest and procurement integrity rules, and exit briefings to discuss restrictions that may apply once employees leave government service. Additional ethics briefings are held for certain senior employees on an annual basis. DOD’s ethics counseling aims to address employee concerns and questions as they arise.
The training and counseling are also to raise awareness so that DOD employees can recognize misconduct and report the matter to ethics officials, inspectors general officials, the head of the command or agency, criminal investigative offices, or any number of DOD hotlines. Responsibility for recognizing and reporting potential misconduct rests with all DOD employees. Additionally, the joint ethics regulation requires ethics officials to track and follow up on reports of potential misconduct. Finally, the DOD regulation requires periodic evaluations of local activities, which implement DOD’s ethics program, to ensure they meet standards. Defense regulations provide that government contractors should have standards of conduct and internal control systems to promote ethical standards, facilitate timely discovery and disclosure of improper conduct in connection with government contracts, and ensure corrective measures are promptly implemented. The regulations provide that contractors should have a written code of business ethics and conduct, have an ethics training program for all employees, and periodically review practices, procedures, policies, and internal controls for compliance with standards of conduct. The federal government has a host of laws and regulations governing the conduct of its employees and contractors. The Compilation of Federal Ethics Laws prepared by the United States Office of Government Ethics includes nearly 100 pages of statutes alone. For the purposes of this report, however, we note a few laws relevant to DOD officials whose responsibilities involve participation in DOD’s acquisition process. The statutes are complex, and the brief summaries here are intended only to provide context for the issues discussed in this report. The principal restrictions concerning employment for federal employees after leaving government service are found in 18 U.S.C. 207 and 41 U.S.C. 423 (procurement integrity).
The title 18 provision generally prohibits former federal employees and their supervisors from representing non-government entities concerning matters they handled while working for the federal government. Violation of the statute entails criminal penalties. In contrast, the title 41 provision applies more narrowly to contracting officials and also entails civil and administrative penalties. The provision generally restricts employment with a contractor if the official performed certain functions involving the contractor and a contract valued in excess of $10,000,000. The law, however, permits employees to accept compensation “from any division or affiliate of a contractor that does not produce the same or similar products or services” that were produced under the contract. There are also provisions related to post-government employment that are applicable to federal employees’ actions while still in federal service. 18 U.S.C. 208 prohibits government employees from participating in matters in which they have a financial interest. The statute imposes criminal penalties on federal employees who begin negotiating future employment without first disqualifying themselves from any duties related to the potential employer. In addition, 41 U.S.C. 423(c) requires officials who participate personally and substantially in a procurement exceeding $100,000 to promptly report contacts by bidders or offerors regarding future employment. The official must either reject the possibility of employment or disqualify himself or herself from further participation in the procurement. DOD’s joint ethics regulation, administered by DOD’s General Counsel, requires DOD to provide training and counseling to educate employees regarding applicable ethics laws and regulations. To implement its ethics program, DOD relies on local ethics counselors within DOD’s military services and agencies to train and counsel employees on conflict-of-interest and procurement integrity rules.
Training is to raise individual awareness and to enable DOD employees to recognize misconduct and report any matter to appropriate officials. The joint ethics regulation also requires ethics officials to track and follow up on reports of misconduct. However, DOD lacks the knowledge to evaluate the ability of its training and counseling efforts to prevent misconduct and ensure the public trust. DOD has delegated responsibility for training and counseling to more than 2,000 ethics counselors assigned to commands and organizations worldwide. These ethics counselors administer ethics training and briefings, provide advice and counseling, and review employees’ financial disclosure documents as outlined in the joint ethics regulation. At the 12 DOD locations we visited, we found that training and counseling efforts varied in the content of the ethics information provided, in who is required to attend training and counseling, and in how often the training and counseling are provided. For example, some ethics counselors conduct extensive discussions about employees’ plans upon separation at the exit briefing, some provide written advice, and others distribute pamphlets summarizing employment restrictions. Some ethics counselors have supplemented their annual training because they do not believe that the minimum requirements in the joint ethics regulation—an annual ethics briefing—are sufficient to ensure employees understand employment restrictions both during government service and after they leave it. For example, a Navy ethics office offers live, interactive ethics training to all personnel at its location approximately three to four times a year. DOD currently evaluates its ethics program’s performance in terms of process indicators—such as the number of financial disclosure forms completed, the number of ethics counselors, and the amount of time spent by ethics counselors on training and counseling services.
According to DOD officials, information on the number of ethics counselors at each location and the amount of time they spend with employees can provide insight into the level of resources used. However, these process indicators do not tell DOD which employees are subject to restrictions, which employees receive training and counseling, the quality and content of the training, or who is leaving DOD for employment with contractors. For example, DOD does not know whether the employees most critical to the acquisition process, those covered by procurement integrity restrictions, are trained. Further, many ethics counselors could not provide evidence that employees had received the annual ethics training. Additionally, DOD does not know whether the training and counseling cover all relevant conflict-of-interest and procurement integrity rules. As shown in Table 1, the ethics counselors we interviewed did not consistently include information on the restrictions provided for in 18 U.S.C. 207, 18 U.S.C. 208, and 41 U.S.C. 423 in their annual ethics briefings for the past 3 years. Training is intended to raise awareness of procurement integrity and conflict-of-interest rules so that DOD employees are able to recognize misconduct and report matters to appropriate officials. Ethics counselors are required to (1) review the facts of an allegation of misconduct and report the allegation to the appropriate investigative organizations or to the head of the DOD command of the suspected violator and the appropriate contracting officer, if applicable; (2) follow up with the investigative office until a final determination is made on the allegation; and (3) periodically report on the status of the allegation to the head ethics officials of the military services and defense agencies.
However, when we asked ethics officials for information on allegations of misconduct and the status of investigations, they were not tracking or following up on the status of alleged misconduct cases. For information on reported allegations of potential misconduct, the ethics officials referred us to the inspectors general offices. According to inspectors general officials, DOD has not attempted to determine the extent to which potential misconduct involving conflict-of-interest and procurement integrity rules is reported. Information on reports of potential misconduct is maintained in various files and databases by multiple offices. As a result, DOD has not determined whether reports of potential misconduct are increasing or decreasing and why such a change may be occurring. A DOD Inspector General hotline official told us that anecdotal evidence indicates post-government employment misconduct is a problem, but DOD has no basis for assessing its severity. At the locations we visited, we obtained information from inspector general officials documenting at least 53 cases of potential misconduct reported in the last 5 years. However, ethics officials at the Office of the Secretary of Defense and the military headquarters we spoke with were not tracking the status of these reports. Lacking this knowledge, DOD has no assurance that ethics-related laws and regulations are properly followed and that appropriate administrative or disciplinary action is taken. Information on potential misconduct could also help DOD understand the extent of the problem and the risk such behavior poses. Concerned about the effectiveness of its efforts to minimize misconduct and prevent violations of conflict-of-interest and procurement integrity rules, DOD has taken actions aimed at enhancing its ethics program.
In October 2004, the Deputy Secretary of Defense required (1) personnel who file public financial disclosure reports to certify that they are aware of and have not violated employment restrictions, (2) DOD components to include training on employment restrictions in annual ethics briefings for financial disclosure filers, and (3) DOD components to provide guidance on employment restrictions to all personnel leaving government service. While this directive clarifies the content required in DOD’s training and counseling, it made no provision for determining whether the policy is implemented. Therefore, it is unclear at this time to what extent the actions called for in the directive will improve DOD’s effort to prevent violations of post-government employment restrictions. In November 2004, the acting Under Secretary of Defense asked the Defense Science Board to establish a task force to assess whether DOD has adequate management and oversight processes to ensure the integrity of acquisition decisions. The task force report was due January 31, 2005, and is expected to recommend options for improving checks and balances to protect the integrity of procurement decisions. Currently, the Defense Science Board is briefing preliminary findings to senior DOD officials and Congress. Acknowledging the risk to the acquisition process, the United States Attorney for the Eastern District of Virginia announced, in February 2005, the creation of a procurement fraud working group to increase prevention and prosecution of fraud in the federal procurement process. This working group will facilitate the exchange of information among participating agencies, including DOD, and assist them in developing new strategies to prevent and promote early detection of procurement fraud. Among the ideas and initiatives to be undertaken by the working group are efforts to detect ethics violations and conflicts of interest by current and former agency officials.
Defense acquisition regulations provide that government contractors should have standards of conduct and internal control systems that promote ethical standards, facilitate timely discovery and disclosure of improper conduct, and ensure corrective measures are promptly implemented. However, because DOD lacks knowledge of its contractors’ efforts to promote ethical standards, it can neither identify risks nor take action to mitigate them. Recently, a major defense contractor chartered an independent review of its processes for hiring current and former government employees. This review found gaps in the company’s procedures and, in some cases, a failure to follow written policy. The review identified weaknesses in the contractor’s policies, procedures, and structure and recommended actions to mitigate risks. Defense regulations provide that government contractors must conduct themselves with the highest degree of integrity and honesty. Specifically, the regulations provide that contractors should have (1) a written code of ethical conduct; (2) ethics training for all employees; (3) periodic reviews of compliance with the code of ethical conduct; (4) systems to detect improper conduct in connection with government contracts; and (5) processes to ensure corrective actions are taken. The seven contractors we visited indicated that DOD had not discussed or reviewed their practices for hiring current and former government employees. While DOD evaluates components of contractors’ financial and management controls, neither the Defense Contract Management Agency nor the Defense Contract Audit Agency—the agencies responsible for oversight of defense contractors’ operations—had assessed the adequacy of contractors’ practices for hiring current and former government employees.
DOD’s lack of knowledge of contractors’ hiring practices and policies prevents it from being assured that effective controls are in place to address the risks contractors pose. In February 2004, a major defense contractor hired an outside entity to conduct an independent evaluation of its hiring policies and practices. This review found that the company relied excessively on employees to self-monitor their compliance with post-government employment restrictions. The review concluded that by relying on employees to monitor their own behavior, the company increased the risk of noncompliance, due either to employees’ willful misconduct or to their failure to understand complex ethics rules. The independent evaluation of the company’s hiring policies and practices illustrates an opportunity for DOD to leverage knowledge of contractors’ practices to identify and mitigate risks. In general, the review identified a lack of management controls as a weakness in the company’s ethics program. Specifically, the review found that the company lacked, among other things, (1) a single focal point for managing its hiring process; (2) centralized management of its hiring process, which made it difficult to implement consistent procedures and effectively monitor efforts; (3) consistent maintenance of pre-hire records; (4) internal audits of its process for hiring former government employees; and (5) sufficient emphasis from senior company management on the ethics program in general and the training program in particular. As a result of these weaknesses, the company did not know whether employees were following its written policies and procedures addressing post-government employment restrictions. Some contractors we spoke with stated that they used the lessons learned from the company’s independent review to assess their own policies for recruiting, hiring, and assigning current and former government employees to ensure that they are complying with ethical standards.
For example, some of the contractors are reviewing company personnel files to identify which employees have been trained as well as which former government employees have been hired. Some contractors were identifying methods to make information on the hiring and training of former government employees readily available, such as corporate personnel systems that provide electronic files allowing the contractor to identify employees with prior DOD experience, including the contracts on which they worked, and to monitor employees’ post-government career paths. Similarly, knowledge of conditions at the company and at other contractors could provide DOD with information to better identify and understand risks to its acquisition process. In an environment where the risk of ethical misconduct can be costly, DOD is missing opportunities to raise the level of confidence that its safeguards protect the public trust. Better knowledge of training and counseling efforts is essential to ensuring that the large number of employees who leave DOD for contractors each year are aware of and abide by conflict-of-interest and procurement integrity rules. Finally, enhanced awareness of contractor programs would enable DOD to assess whether the public trust is protected. We are making three recommendations to the Secretary of Defense to improve DOD’s knowledge and oversight of its ethics program and contractors’ ethics programs and to raise the level of confidence that DOD’s business is conducted with impartiality and integrity: Regularly assess training and counseling efforts for quality and content, to ensure that individuals covered by conflict-of-interest and procurement integrity rules receive training and counseling that meet the standards promulgated by DOD’s Standards of Conduct Office. Ensure ethics officials, as required by the joint ethics regulation, track and report on the status of alleged misconduct to the head ethics officials of the military services and defense agencies.
Assess, as appropriate, contractor ethics programs in order to facilitate awareness and mitigation of risks in DOD contracting relationships. DOD provided written comments on a draft of this report. DOD concurred with two of our recommendations and partially concurred with the third. DOD concurred with our recommendation to regularly assess training and counseling efforts for quality and content, stating that it currently assesses and will continue to assess agencies’ training and counseling efforts to ensure that personnel required to receive such training do so in accordance with applicable standards. As discussed in this report, DOD currently assesses its ethics program’s performance in terms of process indicators—for example, the number of financial disclosure forms completed, the number of ethics counselors, and the amount of time ethics counselors spend on training and counseling. However, as DOD moves forward, its assessments should also give DOD knowledge of which employees are subject to restrictions, which employees receive training and counseling, and the quality and content of the training, to ensure its ethics program achieves the goal of raising awareness of conflict-of-interest and procurement integrity rules in order to prevent ethical misconduct. DOD concurred with our recommendation that DOD assess, as appropriate, contractor ethics programs, stating that it intends to call upon companies throughout the defense industry to reexamine their ethics programs and share best practices. DOD also stated that the recommendation is currently implemented when contracting officers make, prior to awarding a contract, an affirmative determination of responsibility, which includes consideration of the potential contractor’s business practices and integrity. We believe assessments of contractor ethics programs would enhance contracting officers’ ability to make such determinations.
Knowledge of contractors’ policies and practices for hiring current and former DOD employees would give DOD more assurance that effective controls are in place to address the risks posed by potential violations of post-government employment restrictions. As recent GAO bid protest decisions illustrate, lapses in ethical behavior can have significant consequences. DOD partially concurred with our recommendation that the Secretary of Defense ensure that ethics officials, as required by the joint ethics regulation, track and report on the status of alleged misconduct to the head ethics officials of the military services and defense agencies. DOD stated that responsibility for tracking and reporting on the status of alleged misconduct resides with departmental and federal law enforcement agencies, rather than with ethics officials. While we agree that responsibility for enforcement should not reside with ethics officials, we believe senior DOD ethics officials should be knowledgeable about the scope and extent of ethics violations within the department. Tracking alleged misconduct cases would give senior DOD ethics officials knowledge of whether ethics-related laws and regulations are properly followed and whether appropriate administrative or disciplinary action is taken. Information on alleged misconduct can also position DOD to assess the effectiveness of its training and counseling efforts and to understand the extent of the problem and the risk such behavior poses. As DOD revises its joint ethics regulation, it should ensure that its reporting structure provides for relaying misconduct information to senior DOD ethics officials. Finally, DOD expressed concern that our report may be misinterpreted because it does not accurately capture the full extent of DOD programs. We recognize that the department’s programs are broader than reflected in our report.
Our report identifies opportunities to improve (1) DOD’s efforts to train and counsel its workforce to raise awareness of ethics rules and standards, as well as DOD’s measures of the effectiveness of these efforts, and (2) DOD’s knowledge of defense contractors’ programs to promote ethical standards of conduct. Notwithstanding its concerns, however, we note that DOD agreed that our report identifies opportunities to strengthen safeguards for procurement integrity. DOD’s comments are included in appendix II. We are sending copies of this report to the Secretary of Defense and interested congressional committees. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4125 or Blake Ainsworth, Assistant Director, at (202) 512-4609. Other major contributors to this report were Penny Berrier, Kate Bittinger, Anne McDonough-Hughes, Holly Reil, and Karen Sloan. To address DOD’s oversight of its agencies’ implementation of ethics regulations, we compared DOD’s practices with established management guidelines. We did not determine the effectiveness of post-government employment legal restrictions or the extent to which violations of these restrictions may be occurring. In assessing DOD’s oversight of its programs, we used the Standards for Internal Control in the Federal Government, the Internal Control Management and Evaluation Tool, Office of Management and Budget Circular A-123 regarding management accountability and control, and the United States Sentencing Commission Guidelines Manual. We applied this management control framework to DOD and DOD component ethics programs.
To assess DOD’s efforts to train and counsel its workforce to raise awareness, and DOD’s measures of the effectiveness of these efforts, we met with the designated agency ethics officials, their designees, or ethics counselors in the Office of the Secretary of Defense, the Air Force, the Army, the Navy, and the Defense Contract Management Agency. In addition to headquarters offices, we selected locations that, according to the Federal Procurement Data System and DOD officials, spent a large amount of money on acquisitions. Specifically, we met with officials from (1) the Standards of Conduct Office, General Counsel, Office of the Secretary of Defense; (2) the General Counsel—Ethics and Personnel Office, Defense Contract Management Agency; (3) the Associate Counsel—Ethics and Personnel, Eastern Region, Defense Contract Management Agency; (4) the Ethics Office and Associate General Counsel (Fiscal & Administrative Law), Air Force; (5) Air Force Materiel Command, Wright-Patterson Air Force Base, Air Force; (6) Electronic Systems Center, Hanscom Air Force Base, Air Force; (7) the Deputy General Counsel (Ethics & Fiscal) and Standards of Conduct Office, Army; (8) Army Materiel Command, Fort Belvoir, Army; (9) Communications-Electronics Command, Fort Monmouth, Army; (10) the Office of General Counsel, Navy; (11) Naval Air Systems Command, Patuxent River, Navy; and (12) Naval Air Warfare Center Weapons Division, China Lake, Navy. We met with five contracting/acquisition offices and nine investigative offices at these locations. To assess DOD’s knowledge of defense contractors’ programs to promote ethical standards of conduct, we interviewed seven defense contractors about their ethics programs and their practices for hiring former government employees. Six of the contractors were ranked in the top 10 defense contractors based on DOD spending in fiscal year 2003; the seventh was in the top 100 of defense contractors based on DOD spending.
We attended the Defense Industry Initiative’s 2004 Annual Best Practices Forum. In addition, we reviewed a report to the chairman and board of directors of one major defense contractor responding to concerns about the company’s policies and practices for hiring current and former government employees. As part of these efforts, we reviewed relevant federal ethics laws, the Federal Acquisition Regulation, the Defense Federal Acquisition Regulation Supplement, and DOD policies, directives, and guidance governing conflict-of-interest and procurement integrity rules. We supplemented the DOD and DOD component ethics program information we collected by interviewing officials from the Office of Government Ethics, the Department of Justice, the Army Contracting Agency, the Defense Acquisition Regulations Council, the Office of the Secretary of Defense’s Acquisition, Technology, and Logistics office, the World Policy Institute, and the American Federation of Government Employees. We also attended the 26th Annual Council on Governmental Ethics Laws Conference in 2004. We conducted our review from April 2004 to March 2005 in accordance with generally accepted government auditing standards.

In fiscal year 2004, the Department of Defense (DOD) spent more than $200 billion to purchase goods and services. To help ensure defense contracts are awarded fairly and current and former employees do not use their knowledge of DOD acquisition activities to gain financial or other benefits, DOD personnel are required to conduct themselves in a manner that meets federal ethics rules and standards. Regulations require DOD to implement an ethics program and provide that contractors meet certain ethics standards. For this report, GAO assessed (1) DOD’s efforts to train and counsel its workforce to raise awareness of ethics rules and standards, as well as DOD’s measures of the effectiveness of these efforts, and (2) DOD’s knowledge of defense contractors’ programs to promote ethical standards of conduct.
To implement its ethics program, DOD has delegated responsibility for training and counseling employees on conflict-of-interest and procurement integrity rules to more than 2,000 ethics counselors in DOD’s military services and agencies. These efforts vary in who is required to attend training and counseling, the content of ethics information provided, and how often the training and counseling is provided. While some variation may be warranted, DOD lacks the knowledge needed to determine whether local efforts are meeting the objectives of its ethics program—in large part because DOD does not systematically capture information on the quality and content of the training and counseling or on employee activity as it relates to ethics rules and restrictions. Specifically, ethics counselors were unable to tell us whether employees subject to procurement integrity rules were trained. Instead, DOD evaluates its ethics program in terms of process indicators—such as the number of people filing financial disclosure forms, the number of ethics officials providing training and counseling services, and the amount of time ethics officials spend on such activities—which do not provide metrics to assess the effectiveness of local training and counseling efforts. DOD also lacks adequate information on the number and status of allegations of potential misconduct related to conflict-of-interest and procurement integrity rules. Ethics officials did not know of 53 reported allegations of potential misconduct referred to inspectors general offices. DOD has taken several actions since October 2004 aimed at enhancing its ethics program. However, without knowledge of training, counseling, and reported allegations of misconduct, DOD is not positioned to assess the effectiveness of its efforts. DOD’s knowledge of defense contractor efforts to promote ethical standards is also limited.
Defense regulations provide that contractors should have ethics programs, provide ethics training for all employees, and implement systems to detect improper conduct in connection with government contracts. Despite these regulations, DOD had not evaluated the hiring practices of the contractors GAO contacted. Neither the Defense Contract Management Agency nor the Defense Contract Audit Agency—the agencies responsible for oversight of defense contractors’ operations—had assessed the adequacy of contractors’ practices for hiring current and former government employees. An independent review of one of DOD’s largest contractors found that the company lacked the management controls needed to ensure an effective ethics program; instead, the company relied excessively on employees to self-monitor their compliance with post-government employment restrictions. The review concluded that by relying on self-monitoring, the company increased the risk of noncompliance, due either to employees’ willful misconduct or to their failure to understand complex ethics rules.
Between August 1994 and August 1996, enrollment in Medicare risk-contract health maintenance organizations (HMOs) rose by over 80 percent (from 2.1 million to 3.8 million), and the number of risk-contract HMOs rose from 141 to 229. As managed care options become increasingly available to Medicare beneficiaries, the need for information that can help them make prudent health care decisions has become more urgent. Straightforward and accurate information is also important because, in the past, some HMO sales agents have misled beneficiaries or used other questionable sales practices to get them to enroll. For most 65-year-olds, notice of coverage for Medicare benefits comes in the mail—a Medicare card from the Health Care Financing Administration (HCFA), which administers the Medicare program. Unless beneficiaries enroll in an HMO, HCFA automatically enrolls them in Medicare’s fee-for-service program. Medicare’s fee-for-service program, available nationwide, offers a standard package of benefits covering (1) hospitalization and related benefits (part A), with certain coinsurance and deductibles paid by the beneficiary, and (2) physician and related services (part B) for a monthly premium ($42.50 in 1996), a deductible, and coinsurance. Medicare part B coverage is optional, though almost all beneficiaries entitled to part A also enroll in part B. Many beneficiaries in the fee-for-service program enhance their Medicare coverage by purchasing a private insurance product known as Medigap. Medigap policies can cost beneficiaries $1,000 a year or more and must cover Medicare coinsurance. Some policies also cover deductibles and benefits not covered under Medicare, such as outpatient prescription drugs. Medicare beneficiaries may enroll in a Medicare-approved “risk” HMO if one is available in their area. Such a plan receives a fixed monthly payment, called a capitation payment, from Medicare for each beneficiary it enrolls.
The payment is fixed per enrollee regardless of what the HMO spends for each enrollee’s care. An HMO paid by capitation is called a risk-contract HMO because it assumes the financial risk of providing health care within a fixed budget. Although other types of Medicare managed care exist, almost 90 percent of Medicare beneficiaries now in managed care are enrolled in risk-contract HMOs. Compared with the traditional Medicare fee-for-service program, HMOs typically cost beneficiaries less money, cover additional benefits, and offer freedom from complicated billing statements. Although some HMOs charge a monthly premium, many do not. (Beneficiaries enrolled in HMOs must continue to pay the Medicare part B premium and any specified HMO copayments.) HMOs are required to cover all Medicare part A and B benefits. Many HMOs also cover part A copayments and deductibles and additional services—such as outpatient prescription drugs, routine physical exams, hearing aids, and eyeglasses—that are not covered under traditional Medicare. In effect, the HMO often acts much like a Medigap policy by covering deductibles, coinsurance, and additional services. In return for the additional benefits HMOs furnish, beneficiaries give up their freedom to choose any provider. If a beneficiary enrolled in an HMO seeks nonemergency care from providers other than those designated by the HMO or seeks care without following the HMO’s referral policy, the beneficiary is liable for the full cost of that care. Recently, Medicare allowed HMOs to offer a “point-of-service” (POS) option (also known as a “self-referral” or “open-ended” option) that covers beneficiaries for some care received outside of the network. This option is not yet widely available among Medicare HMOs. Managed care plans’ marketing strategies and enrollment procedures reflect Medicare beneficiaries’ freedom to move between the fee-for-service and managed care programs. 
Unlike much of the privately insured population under age 65, beneficiaries are not limited to enrolling or disenrolling during a specified “open season”; they may select any of the Medicare-approved HMOs in their area and may switch plans monthly or choose the fee-for-service program. Thus, HMOs market their plans to Medicare beneficiaries continuously rather than during an established 30- or 60-day period. HMOs and their sales agents, not HCFA, enroll beneficiaries who wish to join a managed care plan. Most beneficiaries have access to at least one Medicare HMO, and more than 50 percent of beneficiaries have at least two HMOs available in their area. In some urban areas, beneficiaries can choose from as many as 14 different HMOs. Each HMO may be distinguished from its competitors by its coverage of optional benefits, cost-sharing arrangements, and network restrictions. As a practical matter, the number of choices is likely to be greater than the number of HMOs because a single HMO may offer multiple Medicare products, each with its own combination of covered benefits and premium levels. In February 1996, Senator Pryor, the Ranking Minority Member of the Senate Special Committee on Aging, asked us to examine issues related to the marketing, education, and enrollment practices of health plans participating in the Medicare risk-contract HMO program. Subsequently, he was joined by Committee Chairman Cohen and by Senators Grassley, Breaux, Feingold, and Wyden as corequesters. This report focuses on information that can help beneficiaries become discerning consumers. In particular, the report reviews (1) HCFA’s performance in providing beneficiaries comparative information about Medicare HMOs to assist their decision-making and (2) the usefulness of readily available data that could inform beneficiaries and caution them about poorly performing HMOs.
Our study focused on risk-contract HMO plans, which (as of August 1996) enrolled almost 90 percent of Medicare beneficiaries in managed care. In conducting our study, we reviewed records at HCFA headquarters and regional offices and interviewed HCFA officials, Medicare beneficiary advocates, provider advocates, Medicare HMO managers, and representatives of large health insurance purchasing organizations. We also analyzed enrollment and disenrollment data from HCFA’s automated systems. In addition, we reviewed beneficiary complaint case files and observed certain HCFA oversight and education activities. Finally, we reviewed relevant literature. Our work was performed between October 1995 and August 1996 in accordance with generally accepted government auditing standards. (For further detail on our data analysis methodology, see app. I.) Though Medicare is the nation’s largest purchaser of managed care services, it lags behind other large purchasers in helping beneficiaries choose among plans. HCFA is responsible for protecting beneficiaries’ rights and for obtaining information from Medicare HMOs and disseminating it to beneficiaries. HCFA has not yet, however, provided beneficiaries information on individual HMOs. It has announced several efforts to develop HMO health care quality indicators, but it already has the capability to provide Medicare beneficiaries useful comparative information now, using the administrative data it collects. Unlike leading private and public health care purchasing organizations, Medicare does not provide its beneficiaries with comparative information about available HMOs.
Other large purchasers of health care—for example, the Federal Employees Health Benefits Program, the California Public Employees’ Retirement System (CalPERS), Minnesota Medicaid, Xerox Corporation, and Southern California Edison—publish summary charts of comparative information such as available plans, premium rates, benefits, out-of-pocket costs, and member satisfaction survey results. Table 2.1 compares the information provided by HCFA and these other large health purchasers. A few purchasers also give enrollees information that helps them compare HMOs’ provision of services in such areas as preventive health and care of chronic illness. For example, CalPERS publishes the percentage of members in each plan who receive cholesterol screening, cervical and breast cancer screening, and eye exams for diabetics. Some purchasers also provide indicators of physician availability and competence, such as the percentage of physicians accepting new patients, physician turnover, and the percentage of physicians who are board certified. HCFA currently collects benefit and cost data from Medicare HMOs in a standardized format. HCFA’s professional staff use the data to determine whether each HMO is providing a fairly priced package of Medicare services or whether Medicare is paying a fair price for the services provided. HCFA could provide this benefit and cost information to beneficiaries with little additional effort. Using these data, HCFA’s regional office in San Francisco, on its own initiative, developed benefit and premium comparison charts 2 years ago for markets in southern and northern California, Arizona, and Nevada. However, distribution of these charts has been limited primarily to news organizations and insurance counselors. Beneficiaries may request the charts, but few do because HCFA does not widely publicize the charts’ existence.
In fact, when we called a Los Angeles insurance counselor (without identifying ourselves as GAO staff) and asked specifically about Medicare HMO information, we were not told about the comparison charts. Recently, HCFA’s Philadelphia office began producing and distributing similar charts. While HCFA’s Office of Managed Care has been studying how to provide information to beneficiaries for a year and a half, the local initiatives in the San Francisco and Philadelphia offices demonstrate that HCFA could be distributing comparison charts to beneficiaries nationwide. Although HMOs provide beneficiaries information about benefits and premiums through marketing brochures, each plan uses its own terminology to describe benefits, premiums, and the rules enrollees must follow in selecting physicians and hospitals. Despite HCFA’s authority to do so, the agency does not require a standardized terminology or format for describing benefits. HCFA does review HMO marketing and informational materials to prevent false or misleading claims and to ensure that certain provider access restrictions are noted. HCFA has not ensured that HMO marketing materials are clear, however, because the agency does not require standard terminology or formats. For example, one plan’s brochure, to note its access restrictions, states that “. . . Should you ever require a specialist, your plan doctor can refer you to one” but never states that beneficiaries must get a referral before seeing a specialist. In addition, each HMO develops its own format to summarize its benefits and premiums. As a result, beneficiaries seeking to compare HMOs’ coverage of mammography services, for example, have to look under “mammography,” “X ray,” or another term, depending on the particular brochure. The length of some HMOs’ benefit summaries varies widely. 
For example, some brochures we received from the Los Angeles market, which has 14 Medicare HMOs, contain a summary of benefits spanning 14 pages; others have only a 1-page summary. Such diverse formats—without a comparison guide from HCFA—place the burden of comparing the HMOs’ benefits and costs exclusively on the beneficiary. To collect, distill, and compare HMO information would, in some markets, require substantial time and persistence (see figs. 2.1 and 2.2). First, beneficiaries would need to find and call a toll-free number to learn the names of available HMOs. This telephone number appears in the back of the Medicare handbook. However, the handbook generally is mailed to only those individuals turning age 65 or to beneficiaries who specially request it. Next, beneficiaries would have to contact each HMO to get benefit, premium, and provider network details. Finally, they would have to compare plans’ benefit packages and cost information without the benefit of standardized formats or terminology. This set of tasks is likely to be difficult for determined beneficiaries and may be too daunting for others. To test the difficulty of these tasks, we called all 14 Medicare HMOs in Los Angeles to request their marketing materials. After several weeks and follow-up calls, we had received information from only 10 plans. Some plans were reluctant to mail the information but offered to send it out with a sales agent. Declining visits from sales agents, we finally obtained the missing brochures by calling the HMOs’ marketing directors, identifying ourselves as GAO staff, and insisting that the marketing materials be mailed. The materials gathered show that beneficiaries in the Los Angeles market would have to sort through pounds of literature and compare benefits charts of 14 different HMOs. (See fig. 2.2.) 
Although HCFA has been studying ways to provide comparative benefits information nationwide since mid-1995, it has decided not to distribute printed information directly to beneficiaries. Instead, HCFA plans to make information on benefits, copayments, and deductibles available on the Internet. HCFA expects the primary users of this information to be beneficiary advocates, insurance counselors, and government entities—not beneficiaries. As of September 6, 1996, HCFA expected the information to be available electronically by June 1997—at the earliest. HCFA has a wealth of data, collected for program administration and contract oversight purposes, that can indicate beneficiaries' relative satisfaction with individual HMOs. The data include statistics on beneficiary disenrollment and complaints. HCFA also collects other information that could be useful to beneficiaries, including HMOs' financial data and reports from HCFA's periodic monitoring visits to HMOs. As noted, however, HCFA does not routinely distribute this potentially useful information. Because Medicare beneficiaries are free to disenroll from managed care or change plans in any month, disenrollment data objectively measure consumer behavior toward, and thus indicate satisfaction with, a specific HMO. Disenrollments may be more reliable than some other satisfaction measures—such as surveys—because disenrollment data do not depend on beneficiary recollection.
Enrollment and disenrollment data, although collected primarily to determine payments to HMOs, can be used to construct several useful indicators of beneficiary satisfaction:

- annual disenrollment rate: total number of disenrollees as a percentage of total enrollment averaged over the year;
- cancellation rate: percent of signed applications canceled before the effective enrollment date;
- "rapid" disenrollment rate: percent of new enrollees who disenroll within 3 months;
- "long-term" disenrollment rate: percent of enrollees who disenroll after 12 months;
- rate of return to fee for service: percent of disenrollees who return to traditional Medicare rather than enroll in another HMO; and
- retroactive disenrollment rate: percent of disenrollments processed retroactively by HCFA (typically done in cases of alleged beneficiary misunderstanding or sales agent abuse).

Disenrollment rates that are high compared with rates for competing HMOs can serve as early warning indicators for beneficiaries, HMOs, and HCFA. (See ch. 3 for a discussion on interpreting these indicators and an analysis of disenrollment rates for HMOs serving the Miami and Los Angeles markets.) Disenrollment rates have already been used to help measure membership stability and enrollee satisfaction in the Health Plan Employer Data and Information Set (HEDIS), developed by large employers, HMOs, and HCFA under the auspices of the National Committee on Quality Assurance (NCQA). However, HEDIS' measure of disenrollment behavior is limited to a single indicator—an annual disenrollment rate. HCFA could perform a more extensive analysis of the disenrollment data available now. The relative volume of beneficiary complaints about HMOs is another satisfaction indicator that HCFA could readily provide beneficiaries. HCFA regional staff routinely receive beneficiary complaints of sales abuses, the unresponsiveness of plans to beneficiary concerns, and other more routine service and care issues.
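Each of the six indicators above reduces to simple arithmetic over per-beneficiary records. The sketch below assumes a hypothetical record layout; HCFA's actual enrollment files, kept to determine payments to HMOs, are structured differently.

```python
def disenrollment_indicators(applicants, avg_enrollment):
    """Compute the six satisfaction indicators from applicant records.

    Each record is a dict with hypothetical fields:
      'canceled'    - signed an application but canceled before the
                      effective enrollment date
      'months'      - months enrolled before disenrolling (None = still enrolled)
      'to_ffs'      - disenrollee returned to fee-for-service Medicare
      'retroactive' - disenrollment was processed retroactively by HCFA
    avg_enrollment is the plan's total enrollment averaged over the year.
    Each indicator is returned as a percentage.
    """
    enrolled = [a for a in applicants if not a['canceled']]
    left = [a for a in enrolled if a['months'] is not None]

    def pct(numerator, denominator):
        return round(100.0 * numerator / denominator, 1) if denominator else 0.0

    return {
        'cancellation':  pct(sum(a['canceled'] for a in applicants), len(applicants)),
        'rapid':         pct(sum(a['months'] <= 3 for a in left), len(enrolled)),
        'long_term':     pct(sum(a['months'] > 12 for a in left), len(enrolled)),
        'annual':        pct(len(left), avg_enrollment),
        'return_to_ffs': pct(sum(a['to_ffs'] for a in left), len(left)),
        'retroactive':   pct(sum(a['retroactive'] for a in left), len(left)),
    }
```

The point of the sketch is that none of these measures requires new data collection; they follow mechanically from records HCFA already maintains.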
Regardless of the type of complaint, a comparison of the number of complaints per 1,000 HMO members can give beneficiaries a view of members’ relative satisfaction with area HMOs. Although some HCFA regional offices already track complaints through the Beneficiary Inquiry Tracking System, HCFA has no plans to make these data consistent across regions or provide beneficiaries complaint volume information. HCFA could readily report on various HMO financial indicators. Large employers and HMOs have already incorporated several financial indicators—such as plans’ total revenue and net worth—into the current Health Plan Employer Data and Information Set (HEDIS 2.5). HEDIS 2.5 also requires HMOs to report the percentage of HMO revenues spent on medical services—known to insurers as the medical “loss ratio.” Xerox Corporation, for example, publicizes medical loss ratios to help employees compare the plans it offers. In addition, federal law establishes loss ratio standards for Medigap insurers. HCFA routinely collects financial information from HMOs in standard formats it jointly developed with the National Association of Insurance Commissioners in the early 1980s. HCFA uses these data to monitor contracts for compliance with federal financial and quality standards. HCFA could also report the results of periodic visits to verify HMO contract compliance in 13 separate dimensions, such as health services delivery, quality, and utilization management; treatment of beneficiaries in carrying out such administrative functions as marketing, enrollment, and grievance procedures; and management, administration, and financial soundness. After each visit, HCFA records any noncompliance with standards but does not make these reports public unless a Freedom of Information Act request is made. In contrast, NCQA, a leading HMO accreditation organization, has begun distributing brief summaries of its site visit reports to the public. 
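The complaints-per-1,000-members comparison described above is a one-line normalization. A minimal sketch, with invented plan names and counts:

```python
def complaints_per_thousand(complaints, members):
    """Complaint volume per 1,000 members, so plans of different
    sizes can be compared on an equal footing."""
    return round(1000.0 * complaints / members, 1)

# Hypothetical figures: the larger plan draws more complaints in
# absolute terms but fewer relative to its membership.
plans = {'Plan A': (120, 80000), 'Plan B': (45, 15000)}
rates = {name: complaints_per_thousand(c, m) for name, (c, m) in plans.items()}
```

Without the normalization, raw complaint counts would penalize large plans; with it, a plan's size drops out of the comparison.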
NCQA’s summaries rate the degree of HMO compliance on six different dimensions, including quality management and improvement, utilization management, preventive health services, medical records, physician qualifications and evaluation, and members’ rights and responsibilities. HCFA has authority to obtain and distribute useful comparative data on health plans. Although HCFA is not now providing these data to beneficiaries and the marketplace, it is studying several future options, including joint efforts with the private sector. Eventually, these efforts could yield comparative plan information on satisfaction survey results, physician incentives, measures of access to care, utilization of services, health outcomes, and other aspects of plans’ operations. The following are examples of these efforts: HCFA is developing a standard survey, through HHS’ Agency for Health Care Policy and Research, to obtain beneficiaries’ perceptions of their managed care plans. This effort aims to standardize surveys and report formats to yield comparative information about, for example, enrollees’ experiences with access to services, interactions with providers, continuity of care, and perceived quality of care. HCFA has been developing regulations since 1990 to address financial incentives HMOs give their physicians. HCFA’s regulations, published in 1996 and scheduled to be effective beginning in January 1997, will require HMOs to disclose to beneficiaries, on request, the existence and type of any physician incentive arrangements that affect the use of services. HCFA is working with the managed care industry, other purchasers, providers, public health officials, and consumer advocates to develop a new version of HEDIS—HEDIS 3.0—that will incorporate measures relevant to the elderly population. It is also working with the Foundation for Accountability (FAcct) to develop more patient-oriented measures of health care quality. 
The HEDIS and FAcct initiatives are aimed at generating more direct measures of the quality of medical care and may require new data collection efforts by plans. These initiatives may eventually provide Medicare beneficiaries with objective information that will help them compare available plans. However, HCFA could do more to inform beneficiaries today. For this reason, we stress the importance of such measures as disenrollment rates, complaint rates, and results of monitoring visits, which can be readily generated from information HCFA routinely compiles. Public disclosure of disenrollment rates could help beneficiaries choose among competing HMOs and encourage HMOs to do a better job of marketing their plans and serving enrollees. Nonetheless, HCFA does not routinely compare plans’ disenrollment rates or disclose such information to the public. Because Medicare beneficiaries enrolled in HMOs can vote with their feet each month—by switching plans or returning to fee for service— comparing plans’ disenrollment rates can suggest beneficiaries’ relative satisfaction with competing HMOs. For this reason, we analyzed HCFA disenrollment data and found that Medicare HMOs’ ability to retain beneficiaries varies widely, even among HMOs in the same market. In the Miami area, for example, the share of a Medicare HMO’s total enrollment lost to voluntary disenrollment in 1995 ranged from 12 percent—about one in eight enrollees—to 37 percent—more than one in three enrollees. Although all HMOs experience some voluntary disenrollment, disenrollment rates should be about the same for all HMOs in a given market area if beneficiaries are about equally satisfied with each plan. An HMO’s disenrollment rate compared with other HMOs in the same market area, rather than a single HMO’s disenrollment rate, can indicate beneficiary satisfaction with care, service, and out-of-pocket costs. 
High disenrollment rates may result from poor education of enrollees during an HMO's marketing and enrollment process. In this case, enrollees may be ill informed about HMO provider-choice restrictions in general or the operation of their particular plan. High disenrollment rates may also result from beneficiaries' dissatisfaction with access or quality of care. Alternatively, high disenrollment rates may reflect a different aspect of relative satisfaction—beneficiaries' awareness that competing HMOs are offering better benefits or lower premiums. While statistics alone cannot distinguish among these causes, a relatively high disenrollment rate should caution beneficiaries to investigate further before enrolling. Medicare beneficiaries voluntarily disenroll from their HMOs for a variety of reasons: many who leave are dissatisfied with their HMOs' service, but others leave for different reasons. A 1992 study reported that 48 percent of disenrollees from Medicare HMOs cited dissatisfaction as their reason for leaving, 23 percent cited a misunderstanding of HMO services or procedures, and 29 percent cited some other reason—such as a move out of the HMO's service area. Some commonly cited reasons beneficiaries disenroll include:

- dissatisfaction with the HMO's provision of care;
- not knowing they had joined an HMO;
- not understanding HMO restrictions when they joined;
- reaching the HMO's annual drug benefit limit and enrolling in a different HMO for continued coverage of prescription drugs;
- being attracted to a competing HMO offering lower premiums or more generous benefits;
- moving out of the HMO's service area; and
- a personal physician who no longer contracts with the HMO.

Health plans' retention of their members varies widely, as illustrated by our analysis of these rates for the Miami and Los Angeles markets. (See fig. 3.1 for the names of these HMOs and their associated Medicare products.)
For some HMOs, disenrollment rates were high enough to raise questions about whether the HMO's business emphasis was on providing health care or on marketing to new enrollees to replace the many who disenroll. The voluntary disenrollment rates of the seven plans active in the Miami market for all of 1995 varied substantially as measured by the percentage of an HMO's average Medicare enrollment lost to disenrollment. (See fig. 3.2.) PCA Health Plan of Florida's (PCA) disenrollment rate reached 37 percent; two other HMOs (HIP Health Plan of Florida (HIP) and CareFlorida) had disenrollment rates of 30 percent or higher. In contrast, Health Options had a disenrollment rate of 12 percent. The remaining three plans had a median disenrollment rate of about 17 percent. To keep total enrollment constant, HMOs must replace not only those members who leave voluntarily, but also those members who die. Thus, PCA had to recruit new enrollees equal in number to 41 percent of its membership just to maintain its total enrollment count. The Los Angeles market, like Miami's, showed substantial variation in HMOs' disenrollment rates. (See fig. 3.3.) Los Angeles' rates, in fact, varied slightly more than Miami's. Foundation Health had the highest disenrollment rate (42 percent); Kaiser Foundation Health Plan (Kaiser) had the lowest (4 percent). Although reasons for disenrollment vary, beneficiaries who leave within a very short time are more likely to have been poorly informed about managed care in general or about the specific HMO they joined than those who leave after a longer time. Consequently, early disenrollment rates may better indicate beneficiary confusion and marketing problems than total disenrollment rates. Our analysis showed wide variation in plans' early disenrollment rates.
In our calculations we included both cancellations—beneficiaries who signed an application but canceled before the effective enrollment date—and "rapid disenrollment"—beneficiaries who left within 3 months of enrollment. In 1995, Medicare HMOs in the Miami market had cancellation rates of 3 to 8 percent, rapid disenrollment rates of 6 to 23 percent, and combined cancellations and rapid disenrollments of 9 to 30 percent. As figure 3.4 shows, nearly one in three beneficiaries who signed a CareFlorida application and more than one in five beneficiaries who signed a PCA application either canceled or left within the first 3 months. In contrast, only about 10 percent of Health Options' and Prudential's applicants left this early. In 1995, Medicare HMOs in the Los Angeles market had cancellation rates of 1 to 7 percent, rapid disenrollment rates of 4 to 22 percent, and combined cancellations and rapid disenrollments of 5 to 29 percent. As figure 3.5 shows, a few Los Angeles plans lost beneficiaries at a rate significantly higher than the market average, and a few performed notably better than the market average. The broad middle group of plans lost between about 9 and 14 percent of new applicants within the first 3 months. The substantial variation in early disenrollments suggests that some HMOs do a better job than others of representing their plans to potential enrollees. Two 1991 HHS Office of Inspector General (OIG) studies support this idea. According to the studies, about one in four CareFlorida enrollees did not understand that they were joining an HMO, and one in four did not understand that they would be restricted to HMO physicians after they enrolled. In contrast, only about 1 in 25 Health Options enrollees failed to understand these fundamentals.
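A screen of the kind described above, flagging plans whose combined cancellation and rapid-disenrollment rate stands well above the market average, can be sketched as follows. The 1.5x cutoff and the plan figures are illustrative assumptions, not HCFA standards or the actual Miami or Los Angeles data.

```python
def early_disenrollment_rate(cancellations, rapid, applicants):
    """Combined cancellations plus within-3-month disenrollments, as a
    percentage of all beneficiaries who signed applications."""
    return 100.0 * (cancellations + rapid) / applicants

def flag_outliers(rates, factor=1.5):
    """Return plans whose early-disenrollment rate exceeds `factor`
    times the market average (an illustrative cutoff, chosen here
    only to make the screen concrete)."""
    market_avg = sum(rates.values()) / len(rates)
    return sorted(name for name, r in rates.items() if r > factor * market_avg)
```

In a market like Los Angeles, where early rates spanned 5 to 29 percent, such a screen would separate the few plans well above the market average from the broad middle group.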
OIG reported that CareFlorida’s disenrollment rates among beneficiaries enrolled less than a year were the highest in the Miami market for the federal fiscal years 1988 and 1989. This pattern persists, as our analysis of 1995 early disenrollment data shows. Complaints to HCFA regional offices of beneficiary confusion primarily fall into one of two categories: (1) mistaking the HMO application for a Medigap insurance application and (2) not understanding that HMO enrollees are restricted to certain providers. Confusion, whether the result of beneficiary ignorance of Medicare’s HMO option or intentional misrepresentation by HMO sales agents, exposes beneficiaries to unanticipated health expenses. Beneficiaries may also face months of uncertainty about their insured status and which specific providers they must see to have their health expenses covered. A typical complaint, according to HCFA staff, involves beneficiaries who find themselves enrolled in an HMO when they thought they were signing up for a Medicare supplemental policy. For example, in February 1995, a husband and wife signed an application for a South Florida HMO. They continued using their former physicians, who were not with the HMO, and incurred 17 separate charges in May 1995 for a knee replacement, including related services and a hospital stay. When Medicare denied payment, the couple found they were enrolled in the HMO. The HMO also denied payment, so the couple disenrolled, through the HMO, effective May 31. Still facing unpaid claims, they contacted HCFA in mid-June and complained that the sales agent had “talked real fast” and misrepresented the HMO plan as supplemental insurance. They allege he later told them they “didn’t read the fine print.” They complained that neither the government (Medicare) nor the sales agent explained the consequences of enrollment, and they would not have enrolled if they had known they would be giving up fee-for-service Medicare. 
In late July, HCFA retroactively disenrolled the couple and eventually paid their bills under fee-for-service Medicare. The HMO told HCFA that the sales agent had been terminated because of past concerns. Another leading category of complaints, according to HCFA staff, involves new HMO enrollees who do not understand HMO restrictions on access to care. In 1995, OIG reported that nearly one in four Medicare enrollees did not answer affirmatively when asked whether they had good knowledge from the beginning of how the HMO would operate, and one in four did not know they could appeal HMO denials of care they believed they were entitled to. Furthermore, 1 in 10 did not understand that they would need a referral from their primary care physician before they could see a specialist. The following complaint to HCFA about a Miami HMO illustrates beneficiary confusion over HMO restrictions. CareFlorida marketed its plan to an 81-year-old woman who subsequently enrolled in the plan effective February 1994, although she traveled regularly to a distant state. In her first months of membership, she visited her doctor, who was with the HMO. When she later visited a non-network physician who had also been her regular provider, Medicare denied her claims. She then requested to disenroll and told HCFA that if she had understood the requirement to visit specific providers, she would not have enrolled in the HMO. HCFA disenrolled the beneficiary from the plan effective with her use of non-network providers. This left her responsible for about $700 in out-of-plan charges. Other typical misunderstandings cited by HCFA staff and local insurance counselors include not understanding restrictions on access to specialists or other services, or restrictions to a specific medical group in an HMO's provider network.
Medicare regulations prohibit certain marketing practices, such as activities that mislead, confuse, or misrepresent; door-to-door solicitation; and gifts or payments used to influence enrollment decisions. These prohibitions are to help protect beneficiaries from abusive sales practices. Although HCFA staff could not measure the frequency of sales abuses, they expressed concern about continuing complaints of apparent abuses by sales agents. A recurring complaint, according to HCFA staff, is from beneficiaries whose signatures on enrollment forms are acquired under false pretenses. Many of these beneficiaries mistakenly believed that the form they signed—actually an enrollment form—was a request for more information or that it confirmed attendance at a sales presentation. In 1991, HCFA investigated the marketing practices of an HMO after receiving complaints and noting a high rate of retroactive disenrollments. The complaints alleged that sales agents were asking beneficiaries to sign a form indicating the agent had made a presentation. In fact, the document was an enrollment form. A recent case documented by HCFA staff is one in which at least 20 beneficiaries were inappropriately enrolled in an HMO after attending the same sales seminar in August 1995. The beneficiaries thought they were signing up to receive more information but later discovered the sales agent had enrolled them in the plan. In other cases, beneficiaries’ signatures were forged. In January 1995, for example, a beneficiary was notified by his medical group before an appointment that he was now enrolled in another plan. The beneficiary had no idea how this could be as he had not intended to change plans. Though the beneficiary signs with an “X,” the new enrollment application was signed with a legible cursive signature. HCFA re-enrolled the beneficiary into his former plan but took no action against the plan or sales agent. 
HCFA’s failure to take effective enforcement actions and to inform beneficiaries allows problems to persist at some HMOs. Historically, HCFA has been unwilling to sanction the HMOs it cites for violations found repeatedly during site monitoring visits. In 1988, 1991, and 1995, we reported on the agency’s pattern of ineffective oversight of HMOs violating Medicare requirements for marketing, beneficiary appeal rights, and quality assurance. Table 3.2 illustrates the weakness of HCFA’s responses in addressing one Florida HMO’s persistent problems. In the absence of HMO-specific performance indicators, beneficiaries joining this HMO have no way of knowing about its problem-plagued history spanning nearly a decade. Our reports show that this is not an isolated example. Disenrollment and complaint statistics can help identify HMOs whose sales agents mislead or fail to adequately educate new enrollees. However, HCFA does not routinely and systematically analyze these data. HCFA has uncovered problems with HMOs’ sales operations during routine visits to monitor contract compliance or when regional staff have noticed an unusual number of complaints or disenrollments. The HHS OIG recently recommended that systematically developed disenrollment data be used in conjunction with surveys of beneficiaries to improve HCFA’s monitoring of HMOs. The OIG found that higher disenrollment rates correlated with more beneficiary survey reports of poor service. Enrollees who said they got poor service and whose complaints were not taken seriously were more likely to come from HMOs with higher disenrollment rates. Compared with the other surveyed HMOs, those with the five highest disenrollment rates were 1.5 times as likely to have beneficiaries report poor service (18 percent versus 12 percent). Although HCFA can identify HMOs with sales and marketing problems, it lacks the information to identify specific sales agents who might be at fault.
HCFA does not routinely require HMOs to match disenrollment and complaint statistics to individual sales agents. In fact, HCFA made clear in 1991 that oversight standards for sales agents dealing with Medicare beneficiaries would be left largely to the states. States’ regulation and oversight of sales agents vary, although 32 states require HMO sales agents to be licensed. Representatives of the Florida Department of Insurance and its HMO monitoring unit said their oversight, beyond agent licensing, consisted of responding to specific complaints. One official commented that sales agents have to do something egregious to have their licenses revoked. HCFA’s HMO manual suggests specific practices that HMOs could employ to minimize marketing problems. These suggestions include verifying an applicant’s intent to enroll through someone independent of the sales agent, using rapid disenrollment data to identify agents whose enrollees have unusually high rates, and basing commissions and bonuses on sustained enrollment. HCFA staff said that some plans have implemented sales oversight like that suggested by HCFA, but others have not. Regional staff noted that plans are more likely to implement HCFA suggestions if they are trying to get approval for a contract application or service area expansion. Some HCFA regions have succeeded more than others in getting HMOs to improve their oversight of marketing agents. Publishing disenrollment data could encourage problem HMOs to reform their sales practices and more closely monitor their agents. Agents’ compensation often includes incentives such as commissions for each beneficiary they enroll. HMOs could structure their compensation to give agents a greater incentive to adequately inform beneficiaries about managed care in general and their plan in particular. For example, some HMOs pay commissions on the basis of a beneficiary’s remaining enrolled for a certain number of months. 
Several HMOs expressed concern that they did not know how their disenrollment rates compared with those of their competitors. Plan managers have told HCFA staff and us that comparative disenrollment information is useful performance feedback. Medicare HMOs do not compete on the basis of retention rates (low disenrollment rates) because these rates are not publicized. Publishing the rates would likely boost enrollment of plans with high retention rates and encourage plans with low retention rates to improve their performance. Millions of Medicare beneficiaries face increasingly complex managed care choices with little or no comparative information to help them. HCFA has not used its authority to provide comparative HMO information to help consumers, even though it requires standardized information for its internal use. As a result, information available to beneficiaries is difficult or impossible to obtain and compare. In contrast, other large purchasers—including the federal government for its employees—ease their beneficiaries’ decision-making by providing summary charts comparing plans. In addition, by not providing consumers with comparative information, Medicare fails to capitalize on market forces and complement HCFA’s regulatory approach to seeking good HMO performance. In an ideal market, informed consumers prod competitors to offer the best value. Without good comparative information, however, consumers are less able to determine the best value. HMOs have less incentive to compete on service to beneficiaries when satisfaction or other indicators of performance are not published. Wide distribution of HMO-specific disenrollment and other data could make Medicare’s HMO markets more like an ideal market and better ensure that consumers’ interests are served. HCFA could also make better use of indicators to improve its oversight of HMOs. 
By establishing benchmarks and measuring HMOs’ performance against them, HCFA could focus on plans whose statistics indicate potential problems—for example, on HMOs with high disenrollment rates. In August 1995, we recommended that the Secretary of HHS direct the HCFA Administrator to develop a new, more consumer-oriented strategy for administering Medicare’s HMO program. One specific recommendation called for HCFA to routinely publish (1) the comparative data it collects on HMOs and (2) the results of its investigations or any findings of noncompliance by HMOs. Although HCFA has announced plans to gather new data, it has no plans to analyze and distribute to beneficiaries the data on HMOs it currently collects. Therefore, we are both renewing our previous recommendations and recommending specific steps that the Secretary of HHS should take to help Medicare beneficiaries make informed health care decisions. The Secretary should direct the HCFA Administrator to

- require standard formats and terminology for important aspects of HMOs’ informational materials for beneficiaries, including benefits descriptions;
- require that all literature distributed by Medicare HMOs follow these formats and terminology;
- produce benefit and cost comparison charts with all Medicare options available for each market area; and
- widely publicize the availability of the charts to all beneficiaries in markets served by Medicare HMOs and ensure that beneficiaries considering an HMO are notified of the charts’ availability.

The Secretary should also direct the HCFA Administrator to annually analyze, compare, and distribute widely

- HMOs’ voluntary disenrollment rates, including cancellations, disenrollment within 3 months, disenrollment after 12 months, total disenrollment, retroactive disenrollment, and rate of return to fee for service;
- the rate of inquiries and complaints per thousand enrollees; and
- summary results of HCFA’s monitoring visits.
HHS agreed that “Medicare beneficiaries need more information and that informed beneficiaries can hold plans accountable for the quality of care.” HHS noted several HCFA initiatives that will eventually yield information to help beneficiaries choose plans right for their needs. We believe that these initiatives move in the right direction but that HCFA could do more for beneficiaries with information the agency already collects. The full text of HHS’ comments appears in appendix III. HHS outlined HCFA’s efforts to produce HMO comparison charts that will initially contain HMO costs and benefits and later may also include other plan-specific information—such as the results of HMOs’ satisfaction surveys. HCFA expects advocates and insurance counselors, not beneficiaries, to be the primary users of this information. HCFA plans to make the charts “available to any individual or organization with electronic access.” Information in an electronic form can easily be updated—a distinct advantage in a market that is evolving as quickly as Medicare HMOs. Providing the information in an electronic format, however, rather than in print, may make it less accessible to the very individuals who would find it useful. HHS noted that HCFA is developing the “National Managed Care Marketing Guideline,” partly in response to beneficiary complaints of confusion and misunderstanding caused by Medicare HMOs’ marketing practices. The guideline, to be implemented beginning in January 1997, will detail specific content areas to be covered in all Medicare HMO marketing materials. The guideline, as currently drafted, however, will not require standard formats or terminology and thus may not alleviate many of the difficulties beneficiaries now face when comparing HMOs’ marketing materials. Regarding our recommendation that disenrollment data be made available to beneficiaries, HHS stated that HCFA is evaluating different ways to express and present disenrollment rates. 
HHS cautioned that a careful analysis of disenrollment is necessary before meaningful conclusions can be drawn. We did not find such an analysis to be difficult or overly time consuming. Our recommendation is to publish disenrollment rates and let beneficiaries decide if, as we found in Los Angeles, a 42-percent annual disenrollment rate is meaningful in a market where competing HMOs have disenrollment rates of 4 percent. In short, HHS stated that HMO-specific information currently collected by HCFA could not be made publicly available until additional evaluation, data analysis, or development of data systems is complete. Even after this work is completed, however, the agency has no plans to distribute HMO-specific information directly to beneficiaries or ensure that they know such information is available. Thus, although HHS stated that one of HCFA’s highest priorities is that beneficiaries “receive timely, accurate, and useful information about Medicare,” HCFA has no plans to ensure that beneficiaries interested in HMOs receive any comparative information.

Pursuant to a congressional request, GAO reviewed the marketing, education, and enrollment practices of health maintenance organizations (HMO) participating in the Medicare risk-contract program, focusing on whether: (1) the Health Care Financing Administration (HCFA) provides Medicare beneficiaries with sufficient information about Medicare HMOs; and (2) available HCFA data could be used to caution beneficiaries about HMOs that perform poorly.
GAO found that: (1) HCFA does not provide beneficiaries any of the comparative consumer guides that the federal government and many employer-based health insurance programs routinely provide to their employees and retirees; (2) Medicare beneficiaries seeking similar information face a laborious, do-it-yourself process that includes calling to request area HMO names and telephone numbers, calling each HMO to request marketing materials, and attempting to compare plans from HMO brochures that may not use the same format or standardized terminology; (3) HCFA collects volumes of information that could be packaged and distributed to help consumers choose between competing Medicare HMOs and also compiles data regarding HMO disenrollment rates, enrollee complaints, and certification results; (4) HCFA is developing comparison charts that will contain information on the benefits and costs for all Medicare HMOs, but plans to post the charts in electronic format on the Internet rather than distribute them to beneficiaries; and (5) HCFA’s provision of information on HMO disenrollment rates may be particularly useful in helping beneficiaries to distinguish among competing HMOs, since beneficiaries could then ask HMO representatives questions and seek additional information before making an enrollment decision.
Internet-based services using Web 2.0 technology have become increasingly popular. Web 2.0 technologies refer to a second generation of the World Wide Web as an enabling platform for Web-based communities of interest, collaboration, and interactive services. These technologies include Web logs (known as “blogs”), which allow individuals to respond online to agency notices and other postings; social-networking sites (such as Facebook and Twitter), which also facilitate informal sharing of information among agencies and individuals; video-sharing Web sites (such as YouTube), which allow users to discover, watch, and share originally created videos; “wikis,” which allow individual users to directly collaborate on the content of Web pages; “podcasting,” which allows users to download audio content; and “mashups,” which are Web sites that combine content from multiple sources. While in the past Internet usage concentrated on sites that provide online shopping opportunities and other services, according to the Nielsen Company, today video and social networking sites have moved to the forefront, becoming the two fastest growing types of Web sites in 2009, with 87 percent more users than in 2003. Furthermore, in February 2009, usage of social networking services reportedly exceeded Web-based e-mail usage for the first time. Similarly, the number of American users frequenting online video sites has more than tripled since 2003. Some of the most popular Web 2.0 technologies in use today are social networking services, such as Facebook and Twitter. Facebook is a social networking site that lets users create personal profiles describing themselves and then locate and connect with friends, co-workers, and others who share similar interests or who have common backgrounds. According to the Nielsen Company, Facebook was the number one global social networking site in December 2009 with 206.9 million unique visitors.
Twitter is a social networking and blogging site that allows users to share and receive information through short messages. According to the Nielsen Company, Twitter has been the fastest-growing social networking Web site in terms of unique visitors, increasing over 500 percent, from 2.7 million visitors in December 2008 to 18.1 million in December 2009. Federal agencies are increasingly using Web 2.0 technologies to enhance services and interactions with the public. Federal Web managers use these applications to connect to people in new ways. As of July 2010, we identified that 22 of 24 major federal agencies had a presence on Facebook, Twitter, and YouTube. Use of such technologies was endorsed in President Obama’s January 2009 memorandum promoting transparency and open government. The memorandum encouraged executive departments and agencies to harness new technologies to put information about their operations and decisions online so that it would be readily available to the public. It also encouraged the solicitation of public feedback to identify information of the greatest use to the public, assess and improve levels of collaboration, and identify new opportunities for cooperation in government. Table 1 presents examples of Web 2.0 technologies and their current uses in the federal government. Federal agencies have been adapting Web 2.0 technologies to support their individual missions. For example: ● The U.S. Agency for International Development (USAID) uses Facebook to inform the public about the developmental and humanitarian assistance that it is providing to different countries in the world. It also posts links to other USAID resources, including blogs, videos, and relevant news articles. ● The National Aeronautics and Space Administration (NASA) uses Twitter to notify the public about the status of its missions as well as to respond to questions regarding space exploration. 
For example, NASA recently posted entries about its Mars Phoenix Lander mission on Twitter, which included answers to questions by individuals who followed its updates on the site. ● The State Department uses YouTube and other video technology in supporting its public diplomacy efforts. The department posts YouTube videos of remarks by Secretary Clinton, daily press briefings, interviews of U.S. diplomats, and testimonies by ambassadors. It also conducted a global video contest that encouraged public participation. The department then posted the videos submitted to it on its America.gov Web site to prompt further online discussion and participation. ● The Transportation Security Administration (TSA) developed a blog to facilitate an ongoing dialogue on security enhancements to the passenger screening process. The blog provides a forum for TSA to provide explanations about issues that can arise during the passenger screening process and describe the rationale for the agency’s policies and practices. TSA also uses Twitter to alert subscribers to new blog posts. A program analyst in TSA’s Office of Strategic Communications and Public Affairs stated that blogging encourages conversation, and provides direct and timely clarification regarding issues of public concern. While the use of Web 2.0 technologies can transform how federal agencies engage the public by allowing citizens to be more involved in the governing process, agency use of such technologies can also present challenges related to privacy, security, records management, and freedom of information. Determining how the Privacy Act of 1974 applies to government use of social media. The Privacy Act of 1974 places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. 
The act describes a “record” as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. It also defines “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. However, because of the nature of Web 2.0 technologies, identifying how the act applies to the information exchanged is difficult. Some cases may be more clear-cut than others. For example, as noted by a participant discussing Web 2.0 challenges at a recent conference sponsored by DHS, the Privacy Act clearly applies to systems owned and operated by the government that make use of Web 2.0 technologies. Government agencies may also take advantage of commercial Web 2.0 offerings, in which case they are likely to have much less control over the systems that maintain and exchange information. For example, a government agency that chooses to establish a presence on a third party provider’s service, such as Facebook, could have limited control over what is done with its information once posted on the electronic venue. Given this limited control, key officials we interviewed said they are unsure about the extent to which personal information that is exchanged in such forums is protected by the provisions of the Privacy Act. Ensuring that agencies are taking appropriate steps to limit the collection and use of personal information through social media. Privacy could be compromised if clear limits are not set on how the government uses personal information to which it has access in social networking environments. Social networking sites, such as Facebook, encourage people to provide personal information that they intend to be used only for social purposes. Government agencies that participate in such sites may have access to this information and may need rules on how such information can be used. 
While such agencies cannot control what information may be captured by social networking sites, they can make determinations about what information they will collect and what to disclose. However, unless rules to guide their decisions are clear, agencies could handle information inconsistently. Individual privacy could be affected, depending upon whether and how government agencies collect or use personal information disclosed by individuals in interactive settings. Extending privacy protections to the collection and use of personal information by third party providers. Individuals interacting with the government via Web 2.0 media may provide personal information for specific government purposes and may not understand that the information may be collected and stored by third-party commercial providers. It also may not be clear as to whose privacy policy applies when a third party manages content on a government agency Web site. Accordingly, agencies may need to be clear about the extent to which they make use of commercial providers and the providers’ specific roles. Uncertainty about who has access to personal information provided through agency social networking sites could diminish individuals’ willingness to express their views and otherwise interact with the government. Safeguarding personal information from security threats that target Web 2.0 technologies. Federal government information systems have been targeted by persistent, pervasive, aggressive threats. In addition, as the popularity of social media has grown, they have increasingly been targeted as well. Thus as agencies make use of Web 2.0 technologies, they face persistent, sophisticated threats targeting their own information as well as the personal information of individuals interacting with them. 
The rapid development of Web 2.0 technologies makes it challenging to keep up with the constantly evolving threats deployed against them and raises the risks associated with government participation in such technologies. Further, the Federal Information Security Management Act (FISMA) states that agencies are responsible for the security of information collected or maintained on their behalf and for information systems used or operated on their behalf. The extent to which FISMA makes federal agencies responsible for the security of third-party social media Web sites may depend on whether such sites are operating their systems or collecting information on behalf of the federal government, which may not be clear. Training government participants on the proper use of social networking tools. Use of Web 2.0 technologies can result in a blending of professional and personal use by government employees, which can pose risks to their agencies. When an individual identifies him- or herself on a social media site as a federal employee, he or she provides information that may be exploited in a cyber attack on the agency. However, federal guidance may be needed for employees on how to use social media Web sites properly and how to handle personal information in the context of social media. In addition, training may be needed to ensure that employees are aware of agency policies and accountable for adhering to them. Determining requirements for preserving Web 2.0 information as federal records. A challenge associated with government use of Web 2.0 technologies, including government blogs and wikis and Web pages hosted by commercial providers, is the question of whether information exchanged through these technologies constitutes federal records pursuant to the Federal Records Act. The National Archives and Records Administration (NARA) has issued guidance to help agencies make decisions on what records generated by these technologies should be considered agency records.
According to the guidance, records generated when a user interacts with an agency Web site may form part of a set of official agency records. NARA guidance also indicates that content created with interactive software on government Web sites is owned by the government, not the individuals who created it, and is likely to constitute agency records and should be managed as such. Given these complex considerations, it may be challenging for federal agencies engaging the public via Web 2.0 technologies to assess the information they generate and receive via these technologies to determine its status as federal records. Establishing mechanisms for preserving Web 2.0 information as records. Once the need to preserve information as federal records has been established, mechanisms need to be put in place to capture such records and preserve them properly. Proper records retention management needs to take into account NARA record scheduling requirements and federal law, which requires that the disposition of all federal records be planned according to an agency schedule or a general records schedule approved by NARA. The records schedule identifies records as being either temporary or permanent and sets times for their disposal. These requirements may be challenging for agencies because the types of records involved when information is collected via Web 2.0 technologies may not be clear. For example, part of managing Web records includes determining when and how Web “snapshots” should be taken to capture the content of agency Web pages as they existed at particular points in time. Business needs and the extent to which unique information is at risk of being lost determine whether such snapshots are warranted and their frequency. NARA guidance requires that snapshots be taken each time a Web site changes significantly; thus, agencies may need to assess how frequently the information on their sites changes. 
Comments by individuals on agency postings may need to be scheduled in addition to agency postings. In the case of a wiki, NARA guidance requires agencies to determine whether the collaborative wiki process should be scheduled along with the resulting final product. In addition, because a wiki depends on a collaborative community to provide content, agencies are required to make determinations about how much content is required to make the wiki significant or “authoritative” from a record perspective. The potential complexity of these decisions and the resulting record-keeping requirements and processes can be daunting to agencies. Ensuring proper adherence to the requirements of the Freedom of Information Act (FOIA). Federal agencies’ use of Web 2.0 technologies could pose challenges in appropriately responding to FOIA requests. Determining whether Web 2.0 records qualify as “agency records” under FOIA’s definition is a complex question. FOIA’s definition focuses on the extent to which the government controls the information in question. According to the Department of Justice’s FOIA guidance, courts apply a four-part test to determine whether an agency exercises control over a record. They examine: (a) who created the record and the intent of the record creator; (b) whether the agency intended to relinquish control; (c) the agency’s ability to use or dispose of the record; and (d) the extent to which the record is integrated into the agency’s files. Agency “control” is also the predominant consideration in determining whether information generated or maintained by a government contractor is subject to FOIA’s requirements. Given the complexity of these criteria, agencies may be challenged in making appropriate FOIA determinations about information generated or disseminated via Web 2.0 technologies. If not handled properly, such information may become unavailable for public access.
As federal agencies have increasingly adopted Web 2.0 technologies, often by making use of commercially provided services, information technology officials have begun to consider the array of privacy, security, records management, and freedom of information issues that such usage poses. Once these issues are understood, measures can then be developed and implemented to address them. Several steps have been taken to identify these issues and to begin developing processes and procedures to address them: ● In June 2009, DHS hosted a two-day public workshop to discuss leading practices for the use of social media technologies to further the President’s Transparency and Open Government Initiative. The workshop consisted of panels of academic, private-sector, and public-sector experts and included discussions on social media activities of federal agencies and the impact of those activities on privacy and security. In November 2009, DHS released a report summarizing the findings of the panels and highlighting potential solutions. According to a DHS official involved in coordinating the workshop, the array of issues raised during the workshop—which are reflected in the challenges I have discussed today—remain critically important to effective agency use of Web 2.0 technologies and have not yet been fully addressed across the government. ● NARA has issued guidance outlining issues related to the management of government information associated with Web 2.0 use. The agency recently released a brief document, Implications of Recent Web Technologies for NARA Web Guidance, as a supplement to its guidance to federal agencies on managing Web-based records. The document discusses Web technologies used by federal agencies—including Web portals, blogs, and wikis—and their impact on records management.
NARA officials recognize that the guidance does not fully address more recent Web 2.0 technologies, and they said the agency is currently conducting a study of the impact of those technologies and plans to release additional guidance later this year. ● In April 2009, the General Services Administration announced that it had negotiated terms-of-service agreements with several social networking providers, including Facebook, MySpace, and YouTube. The purpose of these agreements was to provide federal agencies with standardized vehicles for engaging these providers and to resolve legal concerns raised by following the terms and conditions generally used by the providers, which posed problems for federal agencies, including liability, endorsements, advertising, and freedom of information. As a result, other federal agencies can take advantage of these negotiated agreements when determining whether to use the providers’ services. ● The Office of Management and Budget (OMB), in response to President Obama’s January 2009 memorandum promoting transparency and open government, recently issued guidance intended to (1) clarify when and how the Paperwork Reduction Act of 1995 (PRA) applies to federal agency use of social media and Web-based interactive technologies; and (2) help federal agencies protect privacy when using third-party Web sites and applications. Specifically, a memo issued in April 2010 explained that certain uses of social media and Web-based interactive technologies would not be treated as “information collections” that would otherwise require review under the PRA. Such uses include many uses of wikis, the posting of comments, the conduct of certain contests, and the rating and ranking of posts or comments by Web site users. It also states that items collected by third-party Web sites or platforms that are not collecting information on behalf of the federal government are not subject to the PRA.
In addition, a memorandum issued by OMB in June 2010 called for agencies to provide transparent privacy policies, individual notice, and a careful analysis of the privacy implications whenever they choose to use third-party technologies to engage with the public. The memo stated—among other things—that prior to using any third-party Web site or application, agencies should examine the third-party’s privacy policy to evaluate the risks and determine whether it is appropriate for agency use. Further, if agencies post links on their Web sites that lead to third-party Web sites, they should notify users that they are being directed to non-government Web sites that may have privacy policies that differ from the agency’s. In addition, the memo required agencies to complete a privacy impact assessment whenever an agency’s use of a third-party Web site or application gives it access to personally identifiable information. In summary, federal agencies are increasingly using Web 2.0 technologies to enhance services and interactions with the public, and such technologies have the potential to transform how federal agencies engage the public by allowing citizens to become more involved in the governing process and thus promoting transparency and collaboration. However, determining the appropriate use of these new technologies presents new potential challenges to the ability of agencies to protect the privacy and security of sensitive information, including personal information, shared by individuals interacting with the government and to the ability of agencies to manage, preserve, and make available official government records. Agencies have taken steps to identify these issues and begun developing processes and procedures for addressing them. Until such procedures are in place, agencies will likely continue to face challenges in appropriately using Web 2.0 technologies. We have ongoing work to assess these actions. Mr. Chairman, this concludes my statement.
I would be happy to answer any questions you or other Members of the Subcommittee may have. If you have any questions regarding this testimony, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Other individuals who made key contributions include John de Ferrari (Assistant Director), Sher’rie Bacon, Marisol Cruz, Susan Czachor, Fatima Jahan, Nick Marinos, Lee McCracken, David Plocher, and Jeffrey Woodward. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

“Web 2.0” technologies--such as Web logs (“blogs”), social networking Web sites, video- and multimedia-sharing sites, and “wikis”--are increasingly being utilized by federal agencies to communicate with the public. These tools have the potential to, among other things, better include the public in the governing process. However, agency use of these technologies can present risks associated with properly managing and protecting government records and sensitive information, including personally identifiable information. In light of the rapidly increasing popularity of Web 2.0 technologies, GAO was asked to identify and describe current uses of Web 2.0 technologies by federal agencies and key challenges associated with their use. To accomplish this, GAO analyzed federal policies, reports, and guidance related to the use of Web 2.0 technologies and interviewed officials at selected federal agencies, including the Department of Homeland Security, the General Services Administration, and the National Archives and Records Administration. Federal agencies are using Web 2.0 technologies to enhance services and support their individual missions.
Federal Web managers use these applications to connect to people in new ways. As of July 2010, GAO identified that 22 of 24 major federal agencies had a presence on Facebook, Twitter, and YouTube. Several challenges in federal agencies’ use of Web 2.0 technologies have been identified: (1) Privacy and security. Agencies are faced with the challenges of determining how the Privacy Act of 1974, which provides certain protections to personally identifiable information, applies to information exchanged in the use of Web 2.0 technologies, such as social networking sites. Further, the federal government may face challenges in determining how to appropriately limit collection and use of personal information as agencies utilize these technologies and how and when to extend privacy protections to information collected and used by third-party providers of Web 2.0 services. In addition, personal information needs to be safeguarded from security threats, and guidance may be needed for employees on how to use social media Web sites properly and how to handle personal information in the context of social media. (2) Records management and freedom of information. Web 2.0 technologies raise issues in the government’s ability to identify and preserve federal records. Agencies may face challenges in assessing whether the information they generate and receive by means of these technologies constitutes federal records and in establishing mechanisms for preserving such records, which involves, among other things, determining the appropriate intervals at which to capture constantly changing Web content. The use of Web 2.0 technologies can also present challenges in appropriately responding to Freedom of Information Act (FOIA) requests because there are significant complexities in determining whether agencies control Web 2.0-generated content, as understood within the context of FOIA.
Federal agencies have begun to identify some of the issues associated with Web 2.0 technologies and have taken steps to start addressing them. For example, the Office of Management and Budget recently issued guidance intended to (1) clarify when and how the Paperwork Reduction Act of 1995 applies to federal agency use of social media and Web-based interactive technologies; and (2) help federal agencies protect privacy when using third-party Web sites and applications.
In many cases, contamination on idle or underused industrial sites—brownfields—is not identified until the sites are sold or an environmental accident—such as a toxic substance seeping into drinking water—occurs. Once contamination is identified, federal and state environmental laws and regulations impose potentially broad pollution cleanup liability. For example, under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), commonly known as Superfund, past or present owners of a site containing hazardous substances may be liable for cleanup costs. Also, each party responsible for cleanup costs may be held liable under CERCLA for the entire cost of the cleanup. While the Environmental Protection Agency’s (EPA) policy is to place only the worst sites on its National Priorities List for cleanup under Superfund, federal environmental laws—including liability standards—still apply to sites with lower-level contamination. This report explores issues related to redeveloping brownfield sites with lower-level contamination that are not on the National Priorities List. We collected information on state and local initiatives in Boston, Massachusetts; Union County, New Jersey; Chicago, Illinois; and Pittsburgh, Pennsylvania, because these areas were identified by EPA officials and brownfield researchers as having active site reuse programs. While the precise magnitude and severity of brownfields are unknown because there is no national inventory, the areas we visited had hundreds of acres of brownfields. In trying to redevelop brownfields, local governments and community organizations have faced reluctance on the part of lenders and developers who fear having to pay for costly environmental cleanups. To overcome this obstacle and others and to speed redevelopment, state and local governments have created a variety of initiatives.
State and local governments have estimated that they have thousands of vacant industrial properties that could be redeveloped. In 1987, we estimated that anywhere from about 130,000 to over 425,000 sites throughout the nation contain some contamination. This estimate includes many vacant industrial sites. Our visits to the four states confirmed the existence of numerous former industrial sites that were once productive but now sit abandoned and probably contaminated: The state of Illinois has estimated that 5,000 abandoned or inactive industrial/commercial sites exist throughout the state. In Chicago alone, an estimated 18 percent of the industrial acreage is unused. This estimate includes 1,500 acres spread among 2,000 sites. One Boston neighborhood, located around Dudley Street, covers just 1-1/2 square miles but has within its boundaries 54 state-identified hazardous waste sites. A regional planning group study of Union County, New Jersey, identified 185 separate sites containing more than 2,500 acres of reusable land in the county, all zoned for commercial or industrial development. Towns throughout the Monongahela Valley in Pennsylvania, once a major steel-making center, contain hundreds of acres of land filled with vacant steel mills and other manufacturing facilities. As states and localities attempt to redevelop their abandoned industrial sites, they have faced several obstacles, including the possibility of contamination and the associated liability for cleanup. This situation is caused largely by federal and state environmental laws and court decisions that impose or imply potentially far-reaching liability. The uncertain liability has encouraged businesses to build in previously undeveloped nonurban areas—called “greenfields”—where they feel more confident that no previous industrial use has occurred. 
Lenders, environmental attorneys, local officials, and community development officials in the areas we visited and the documents we reviewed reported that the general uncertainty about the costs of environmental cleanup and who will pay those costs has delayed the redevelopment of industrial properties. A lending official with a large Pittsburgh-based bank, for example, stated that little redevelopment has occurred on the former steel mill sites because of environmental concerns. In some cases, the bank has chosen not to foreclose on properties because it does not want to assume cleanup and associated liabilities. Furthermore, some owners have preferred to keep properties idle rather than sell them and take the risk that the environmental assessments required upon sale will detect contamination that they will have to clean up. A January 1995 EPA action agenda on brownfields stated that the fear of contamination and its associated liability has left many investors wary of buying properties that may be contaminated and is enough to stop real estate transactions from moving forward. In its local strategic plan, EPA’s Chicago Regional Office further concluded that lenders are often unwilling to provide loans for property that could be contaminated because they are concerned about their own liability, the reduced collateral value of the land if it is found to be contaminated, and the ability of the property owners to repay a loan if they must also pay for a major cleanup. A variety of interest groups has also concluded that the potentially large and uncertain liability thwarts efforts to revitalize communities. For example, the U.S. Conference of Mayors has adopted the brownfield issue as one of five priority areas and has publicly endorsed EPA’s efforts to reduce the fear of and uncertainty about cleanup liability. 
The National Association for the Advancement of Colored People testified before the Congress in June 1994 that liability concerns have impeded the efforts of communities to clean up brownfield sites. Furthermore, the Mortgage Bankers Association of America has concluded that the redevelopment of potentially viable properties has been obstructed by concerns in the commercial real estate market that lenders will be held liable for environmental contamination that they did not cause. Rather than face the uncertain liability and potential delays associated with an old industrial site, businesses have looked to greenfields—previously undeveloped sites in rural and suburban areas—for expansion and new development. This trend, according to a regional EPA official, has contributed to suburban sprawl and led to increased congestion and air pollution. Furthermore, such development requires the construction of new infrastructure and results in reduced tax bases and employment in traditional urban centers, according to state officials and community development practitioners. In addition to the fear of and uncertainty about the costs of environmental cleanups, other factors have also contributed to the slow pace of brownfields’ redevelopment. City and state officials and community development practitioners told us that, often, unused industrial sites have infrastructure weaknesses (e.g., poor transportation access), are perceived to be areas of high crime, and suffer from a general unattractiveness, all of which reduce their redevelopment potential. Wanting to revitalize their communities and yet fearing environmental cleanups, state and local governments and community groups have responded with a variety of initiatives. These efforts address those state laws and regulations that appear to hinder redevelopment. 
For example, some of the provisions provide covenants not to sue so that innocent purchasers are protected from liabilities, some clarify the lender’s liability, and others seek to streamline the states’ regulatory processes. A few even provide seed money and loans for cleanup and redevelopment. In Massachusetts, for example, the legislature changed environmental laws to make it clear that a lender does not automatically become liable for environmental cleanup when it forecloses on property, according to state officials. The state law also authorizes state officials to take into account future uses of the site and surrounding areas in determining the appropriate cleanup level. And, among other things, under a pilot program for economically distressed target areas, Massachusetts will provide a covenant to new property owners: The state will not sue new owners who have followed the procedures of the state’s voluntary cleanup program. This provision, it is hoped, will reduce some property owners’ and lenders’ fear of liability for contamination identified in the future. New Jersey recently made some similar legislative changes with the Industrial Sites Recovery Act and the Lender Liability Act. One component is a $55 million hazardous site remediation fund to provide grants and low-interest loans for assessing and cleaning up sites. Also, the state participated in a model industrial site redevelopment project in Union County that identified numerous sites having less contamination and more development potential than most officials had thought. Local governments and neighborhood groups, working with other stakeholders, have also been trying to overcome obstacles and spur redevelopment. For example, officials in Chicago have recognized that if cleanup is not coupled with redevelopment, sites are likely to be recontaminated through illegal dumping. 
The city has worked closely with state and federal environmental protection agencies in assessing and cleaning up five demonstration brownfield sites. The project has received $2 million in city funds for the sites, several of which have specific redevelopment plans. In Boston, the Dudley Street neighborhood has been working to overcome the negative impact of years of industrial contamination. A community group, with the help of city officials, was recently successful in getting a private developer to build a supermarket and shopping center on a large former industrial tract. Not only does this shopping center provide essential services for community residents, but its success has caused adjacent vacant lots to become more economically viable. As state and local governments have shown increased interest in redeveloping their industrial sites, several federal agencies have begun to help them. Both EPA and EDA have gained practical experience through redevelopment activities at several sites, while HUD has started a series of projects to carry out its brownfield strategy. In addition, the agencies have begun to coordinate their efforts and sponsor joint projects. While maintaining its chief focus on the National Priorities List, EPA has in recent years become more involved with state and local governments in efforts to redevelop less contaminated industrial sites. In January 1995, the agency announced a multifaceted action agenda on brownfields, which includes a variety of ongoing, enhanced, and new initiatives. A major element of EPA’s agenda is the demonstration pilots funded under the Brownfields Economic Redevelopment Initiative. The main intent of these demonstrations, according to EPA, is to learn how environmental hurdles can be overcome and urban communities restored. The first major project started with the State of Ohio and Cuyahoga County (Cleveland) in November 1993. 
EPA contributed $200,000, which the county used to identify contaminated areas for cleanup and redevelopment. According to the Cuyahoga County Planning Commission, the project has generated $625,000 in new tax revenues and resulted in 100 new jobs. The project also includes plans to consult with communities surrounding these sites to help decide on future uses. Two more cities, Richmond, Virginia, and Bridgeport, Connecticut, were selected as demonstration projects in 1994, and EPA expects to select 47 more locations by 1996. EPA plans to work closely with EDA to make the transition from the cleanup to the redevelopment stage of its demonstration projects. Another item on EPA’s agenda was its announcement that it has removed from its data base of potentially contaminated sites about 25,000 sites where the agency planned to take no further remedial action. According to EPA, many of these sites either were not contaminated, had already been cleaned up under state programs, or were being cleaned up; still, potential developers were reluctant to get involved with them because they remained on EPA’s list. To further reduce the stigma associated with these sites, EPA officials planned an outreach program to inform interested parties about the true status of a purchaser’s federal liability in each case. To assist in removing liability barriers, the action agenda calls for EPA to develop a package of reforms to limit liability for brownfield sites. As part of this package, EPA is developing guidance that is intended to expand the circumstances under which the agency will agree not to hold prospective purchasers liable for preexisting contamination on a property. In addition, EPA plans to issue guidance explaining its policy of not pursuing lenders for cleanup costs. EPA is also working to clarify municipal liability so that local governments will be encouraged to start the cleanup process without concern for liability under Superfund. 
Aside from the brownfield activities led by EPA’s headquarters offices, several regional offices have formed partnerships with local governments to work on industrial site redevelopment issues. EPA’s Region 5 office in Chicago, for example, has developed a strategy aimed at developing partnerships with key stakeholders, encouraging voluntary cleanups, promoting broad community participation in the cleanup processes, and disseminating information to prospective purchasers and lenders involved in brownfield sites. EPA has also loaned staff to local governments to further assist efforts to redevelop brownfields. EDA’s involvement in industrial sites’ redevelopment has two primary aspects: The agency, according to its environmental officer, has had direct experience in cleaning up and developing its own properties, and it has sponsored projects to educate and inform state and local entities about redevelopment issues. The agency’s direct experience stems largely from loans that EDA guaranteed in the 1970s and early 1980s to improve industrial facilities. When several borrowers defaulted on the loans, EDA acquired title to the sites and was thus faced with the responsibility for cleaning them up before they could be sold and redeveloped. The sites, which include a 176-acre steel mill in southeast Chicago and a 22-acre foundry in Two Harbors, Minnesota, have undergone environmental assessments and are now in the cleanup phase. EDA officials have used this practical experience to help communities as they attempt to redevelop their industrial sites. The agency has provided, among other things, funds for independent research into the issues related to reusing industrial buildings. EDA has awarded a grant to develop and publish a booklet aimed at helping communities deal with their abandoned industrial sites. 
In addition, EDA has developed a cooperative relationship with EPA on its pilot initiative concerning brownfields, which has included providing help in selecting projects and assisting EPA on technical matters. While HUD has become active in brownfield issues relatively recently, it has developed a strategy with several ongoing and planned components. The Department’s Empowerment Zone and Enterprise Community program may provide, among other things, opportunities for the agency to learn and disseminate information on how selected communities deal with issues related to reusing industrial sites. And in addition to its own initiatives, HUD has formed a cooperative relationship with EPA to pursue research and other mutually beneficial objectives. One of HUD’s first major activities in brownfield issues was a December 1994 conference on “The Relationship Between Environmental Protection and Opportunities for Inner-City Economic Development.” The meeting, attended by a wide variety of federal, state, and local officials, researchers, and community development practitioners, was aimed at advising and informing HUD on program obstacles and policy options associated with reusing industrial sites. In 1994, almost 300 communities applied for six federal Urban Empowerment Zone and 65 Enterprise Community designations that provide tax incentives. Empowerment Zones also provide other benefits to businesses that locate in these economically distressed communities. Several cities that received designations in late 1994 included industrial and commercial sites’ redevelopment as part of their Empowerment Zone strategies: Chicago cited its own brownfield program as an element of its revitalization plan and listed several “environmental waivers” that could speed the cleanup and redevelopment of sites in the zone. 
Boston, which contains an Enhanced Enterprise Community, proposed a strategy including plans to redevelop a 175-acre former hospital site and create a center for emerging industries at the site of a former computer-manufacturing facility. The two-state Empowerment Zone in Philadelphia/Camden includes a plan to clean up and redevelop a former oil company site with help from Pennsylvania’s program to clean up industrial sites. Another important brownfield project, according to HUD officials, is a research project sponsored jointly with EPA. Although the project started out with HUD, the two agencies have since combined resources and plan to contract for a study that will explore the reasons why businesses locate in certain areas. The study is designed to provide knowledge that will be useful to both agencies as they look for ways to help communities redevelop industrial sites. HUD officials also told us that brownfield issues are mentioned specifically in two major initiatives: HUD’s own plan to transform or reinvent itself and a strategy announced in March 1995 targeted to achieving environmental justice. In the reinvention plan, HUD proposes to consolidate its grants for community economic development into a single Community Opportunity Fund. A bonus pool in this program would be used to give good performers the opportunity to compete for additional funds for large-scale job creation projects and environmental cleanup of brownfield sites. HUD’s environmental justice plan, which is part of a larger strategy approved by the President, designates brownfields’ redevelopment as one of four priority initiatives. We requested comments on a draft of this report from EPA, the Department of Commerce, and HUD. 
We met with the Director for Outreach and Special Projects Staff in the Office of Solid Waste and Emergency Response, EPA; and the Director of the Building and Technology Division in the Office of Policy Development and Research, HUD, to discuss their agencies’ comments on our report. EPA and HUD generally agreed with the information provided in the report; however, both agencies said that they had made substantial recent progress on brownfield issues. We incorporated information that EPA and HUD provided us about their new initiatives into the report where appropriate. The Department of Commerce, in written comments that are contained in appendix I of this report, suggested that we include additional information on EDA’s initiatives. In response, we added to our report information about EDA’s current activities and partnership with EPA. We did not address several other issues raised in the comments—such as rural brownfields and existing businesses’ relocations—because the issues were beyond the scope of this assignment. To determine what is known about the extent and nature of abandoned industrial sites in distressed urban communities and the barriers that brownfields present to redevelopment efforts, we reviewed previous GAO reports on Superfund issues and other reports on the subject, such as the Northeast-Midwest Institute’s report entitled New Life For Old Buildings and Resources for the Future’s report entitled The Impact of Uncertain Environmental Liability on Industrial Real Estate Development. To find out about state and local initiatives, we visited Boston, Massachusetts; Union County, New Jersey; Chicago, Illinois; and Pittsburgh, Pennsylvania, because they were identified by EPA and brownfield researchers as having active site reuse programs. 
While there, we obtained information from directors of state and local government environmental and community development efforts, environmental attorneys, developers, and community development practitioners, such as those at the Jamaica Plain Neighborhood Development Corporation in Boston and Bethel New Life, Inc., in Chicago. We also interviewed public interest group officials, including the Directors of the Coalition for Low Income Community Development, the National Council for Urban Economic Development, and the Urban Land Institute, as well as researchers and analysts at the Northeast-Midwest Institute, the Environmental Defense Fund, and Resources for the Future, to obtain their perspectives on the issue. To provide information on federal initiatives aimed at helping communities overcome obstacles to reusing brownfield sites, we discussed brownfield programs and issues at three federal agencies—EPA, HUD, and the Department of Commerce—that were identified by public interest group, state government, or local government officials as having brownfield programs. We interviewed EPA’s Director of Outreach and Special Projects, Office of Solid Waste and Emergency Response, and her staff; HUD’s Director of the Building and Technology Division in the Office of Policy Development and Research and the Director, Office of Block Grant Assistance, and their staffs; and the environmental officer and staff in the Office of Research and Technical Assistance of the Department of Commerce’s EDA. We also reviewed programs’ guidance, policy statements, and reports on the programs at these agencies. Finally, we contacted officials at other federal agencies, such as the Small Business Administration, the Department of Agriculture’s Farmers Home Administration, and the Department of Transportation, to determine whether they had any initiatives under way. We conducted our review between November 1994 and May 1995 in accordance with generally accepted government auditing standards. 
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from the date of this letter. At that time, we will send copies to the appropriate congressional committees and subcommittees, the Secretaries of HUD and Commerce, the Administrator of EPA, and the Director of the Office of Management and Budget. We will also make copies available to others on request. If you would like additional information on this report, please call me at (202) 512-7631. Erin Bozik, Assistant Director; Wendy Bakal; Susan Beekman; Frank Putallaz; Tom Repasch.

Pursuant to a congressional request, GAO provided information on brownfields, focusing on: (1) the extent and nature of abandoned industrial sites in distressed urban communities and the barriers brownfields present to redevelopment efforts; and (2) federal initiatives aimed at helping communities overcome obstacles to reusing brownfield sites. 
GAO found that: (1) while no national inventory of brownfield sites exists, states have identified thousands of former industrial sites that are abandoned and possibly contaminated; (2) although brownfield sites are usually not contaminated enough to qualify for the Superfund Program, many offer great potential for redevelopment; (3) although developers and lenders have been reluctant to get involved with brownfields due to uncertain liability, governments have created initiatives, such as offering loans and liability protection, to speed up redevelopment efforts; (4) brownfield redevelopment has remained state and local in nature, but federal agencies have begun assisting local governments to reclaim sites; (5) the Environmental Protection Agency has provided demonstration grants to help redevelop industrial properties that were not contaminated or had been cleaned up; (6) the Economic Development Administration has provided financial support for brownfield research and has also acquired practical experience from cleaning up properties it acquired through loan defaults; and (7) the Department of Housing and Urban Development is implementing several brownfield projects through its Empowerment Zone and Enterprise Community program.
Under our voluntary tax system, taxpayers are responsible for filing tax returns that report the full amount of taxes owed (referred to as self-assessment of taxes) as well as paying any taxes that are due. IRS has established eight major compliance and collection programs to check on taxpayer compliance with these responsibilities and to initiate collection action if payment is not received. A descriptive overview of these compliance and collection programs is shown in figure 1. (A detailed description appears in table 4 in app. I.) In general, the compliance programs were designed to assure that taxpayers fully and accurately report and pay the amount of taxes that they owe to IRS. As shown in figure 1, IRS’s compliance checks begin when taxpayers file their tax returns. As returns are received and processed, they are checked for errors (e.g., math errors and omitted schedules) and unpaid balances. After processing, a tax return may also be selected for review by other compliance programs. Two of these compliance programs use computers to analyze information available to IRS (e.g., earnings on bank deposits) to detect taxpayers who have not filed tax returns or taxpayers who have underreported the amount of taxes owed. IRS may also audit the tax returns filed by individuals, corporations, and others, such as estates, to determine whether the correct tax has been reported and paid. At this point in the compliance process, taxpayers may be asked for records to substantiate their returns. If the compliance programs identify unpaid taxes, IRS makes the tax assessments and requests the taxpayers to make the appropriate payment. If payment is not received, IRS sends a series of collection notices to taxpayers demanding payment of the assessment. If taxpayers become delinquent—if they do not pay their taxes after being sent collection notices—IRS may initiate collection action through its telephone and field collection programs. 
In addition to requesting payment from delinquent taxpayers, these programs research the taxpayers’ ability to pay their tax debts and may use sanctions, including levies, liens, and seizures, to obtain payment. More-complex unpaid assessments are referred from telephone collection to field collection. Beginning in fiscal year 2001, IRS reorganized into four operating divisions, each responsible for administering tax law for a set of taxpayers with similar needs. By reorganizing in this manner, IRS sought to establish clearer lines of responsibility and accountability for improving service to taxpayers and resolving their tax problems. Through such improvements, IRS expected to better enable taxpayers to comply with the tax laws. The two largest divisions in terms of staff and number of taxpayers covered, and the primary focus of this report, are the small business division and the wage and investment division. The small business division is responsible for individuals who are fully or partially self-employed and for businesses with assets up to $10 million. The wage and investment division is responsible for individuals who are not self-employed (e.g., wage earners). The other two divisions are responsible for large and midsized businesses and for tax-exempt and government entities. In general, the IRS operating divisions are responsible for managing the daily operations of the eight major compliance and collection programs, as appropriate for their taxpayers. In some instances, however, the programs are consolidated in one division or two divisions. For example, the field collection program is housed within the small business division and the telephone collection program is split between the small business division and the wage and investment division. The operations of the compliance and collection programs differ from each other in many respects. 
Some of the programs (e.g., returns processing and underreporter programs) rely on automation and deal with millions of taxpayers. Some (e.g., corporate audit) are highly labor-intensive and deal with far fewer taxpayers. Others (e.g., the nonfiler program) are a combination of automated programs and labor-intensive investigations. Although day-to-day management of IRS’s compliance and collection programs is the responsibility of the operating divisions, the commissioner and his senior management team maintain responsibility for making decisions on major operational changes, allocating resources within IRS, and developing agencywide strategic plans. The process for making these decisions starts with the operating divisions’ preparing strategic assessments that report on major trends, issues, and problems facing the divisions and proposals for dealing with them. These decisions are subject to public oversight. The IRS Restructuring and Reform Act of 1998 (IRS Restructuring Act) established an IRS oversight board, in part to assist Congress in reviewing and approving IRS’s budget and strategic planning decisions. Overall, our analysis showed significant and pervasive declines in IRS’s compliance and collection programs, as measured by indicators such as those covering staffing, work completed, and work outcomes from fiscal year 1996 to fiscal year 2001. Moreover, an increasing gap between collection workload, stemming from assessments made by compliance programs, and collection case closures has led IRS to defer taking action to collect on billions of dollars of tax delinquencies. A number of factors have contributed to these declines. 
These factors include decreases in overall staffing, decreases in compliance and collection staffing, decreased productivity of the remaining compliance and collection staff, increased compliance and collection procedural controls to better safeguard taxpayer interests, temporary details of compliance and collection staff to taxpayer assistance work, and constraints imposed by the need to process returns and issue refunds. From fiscal year 1996 through fiscal year 2001, most compliance programs showed significant declines in the amount of staff time expended on compliance work, in the number of compliance cases closed, and in the proportion of the workload reviewed to determine whether additional tax assessments were warranted (i.e., coverage). About half of the programs also saw declines in the productivity of the compliance staff (i.e., case closures per hour of staff time), in the amount of unpaid taxes identified, and in the percentage of unpaid taxes resolved (i.e., the proportion of the unpaid taxes collected without involving the two collection programs). While the declines were not universal, they were pervasive, as illustrated by the shaded areas in table 1. The declines occurred over a period when the programs’ workload (e.g., the number of returns filed, apparent nonfilers, or apparent underreporters) was increasing, as also shown in table 1. Compliance coverage fell notably for all compliance programs except returns processing. The declines ranged from about 29 percent to about 69 percent in the five audit and matching compliance programs. Further, the number of cases closed by these programs declined by about 55 percent or more, with the exception of the underreporter program, which declined by about 10 percent. Also, these five compliance programs generally experienced marked declines in the staff time committed to compliance work and, with one exception, the productivity of staff in closing cases. 
According to the underreporter program staff, the increased use of automation enabled the program to increase productivity but not sufficiently to maintain coverage. In general, the amount of unpaid taxes identified by these compliance programs did not decline as much as the number of cases closed. In two of the six compliance programs, the amount of unpaid taxes identified increased. The data available to us do not make clear the extent to which this increase may represent a change in the type of cases worked, increased levels of noncompliance by taxpayers, or other factors, including inflation. For this period, the data also show a mixed picture with respect to the percentage of unpaid taxes resolved—that is, the percentage of the compliance assessments that the compliance programs collect at the conclusion of their work, without referral to telephone or field collection. The individual audit and corporate audit programs tended to collect a greater proportion of the tax assessments in fiscal year 2001 than in fiscal year 1996, while the nonfiler, underreporter, and other audit programs collected a somewhat reduced proportion of the assessments. In general, the table indicates that the programs that showed the biggest gains in the proportion of unpaid taxes collected also showed the largest declines in the amount of unpaid taxes identified. For example, while the individual audit program showed a 68 percent increase in its collection rate, it also experienced a 43 percent decline in the amount of unpaid taxes identified. Overall, there were almost universal declines in the two collection programs’ performance between fiscal years 1996 and 2001, as indicated by the shaded areas in table 2. 
While collection workload (i.e., the number of delinquencies assigned to collection) declined somewhat as a result of the reduced levels of compliance work, the programs’ capacity to close collection cases—such as by securing payment or completing sufficient analysis to determine that payment cannot be made at that time—declined much more. Another indicator of the change in the telephone and field collection programs is IRS’s decreasing use of enforcement sanctions, both in absolute numbers and as a proportion of closed collection cases. The number of liens, levies, and seizures dropped precipitously between fiscal years 1996 and 2000 and then increased somewhat during fiscal year 2001. Even with this change, however, table 3 shows that the number of levies and seizures remained 78 and 98 percent below 1996 levels, respectively. Also, when considered as a proportion of closed collection cases, the use of levy and seizure sanctions declined by 64 and 96 percent between fiscal years 1996 and 2001, as shown in table 3. The use of liens showed the most significant turnaround, but as of 2001, the number of lien filings was down 43 percent and as a percentage of case closures was down 6 percent. By March 1999, collection officials recognized that changes were needed. Their case inventory of delinquent accounts was growing and aging, and the gap between their workload and their capacity to complete work was increasing. They recognized that they could not close all collection cases, and they believed that they needed to be able to deal with taxpayers more quickly, particularly taxpayers who were still in business and owed employment taxes. The officials believed that getting to these delinquencies quickly, before they became unmanageable to the taxpayers, would make collection easier and faster. In response, collection managers introduced a new collection case selection system. 
The selection system delivered to collection staff delinquencies that met newly established collection priorities based on delinquency amount and recency, with priority given to employment tax over income tax delinquencies and to taxpayers who contacted IRS to resolve their delinquencies. The system also periodically reviewed cases in the telephone collection and field collection backlogs and automatically purged those that met certain aging criteria as a result of having been passed over for more recent delinquencies. The automatic purging was accomplished by closing the collection cases as not collectible. This had the effect of deferring collection action, in that IRS maintained the right to reinitiate collection action. Once collection action has been deferred, however, two conditions must be met before IRS will consider reopening a collection case, according to IRS officials. The two conditions are that (1) the taxpayer becomes delinquent again or IRS receives information indicating that the taxpayer has additional assets that could help pay off the delinquency and (2) IRS finds the resources to work the collection case. The taxpayers will, however, be sent annual notices of taxes due and will be subject to having any refunds from subsequently filed tax returns offset by IRS to cover unpaid taxes. Also, IRS will continue to monitor the deferred collection accounts for possible collection action until IRS’s statutory right to collect the taxes expires, generally 10 years after taxes are assessed. Even though IRS has systems for monitoring deferred collection cases, the senior IRS officials responsible for managing collection programs indicated that, absent significant operational changes, they had little expectation that a telephone or field collection case would be reopened for these tax debts alone once collection action had been deferred. 
On the basis of our random sample of unpaid tax accounts, we estimate that by the end of fiscal year 2001, after the deferral policy had been in place for about two and one-half years, IRS had deferred collection action on the tax debts of an estimated 1.3 million taxpayers. We also estimate that these 1.3 million taxpayers owed about $16.1 billion in unpaid taxes, interest, and penalties that originated from assessments by all six compliance programs. By fiscal year 2001, IRS was deferring collection action on tax debts at a rate of one deferral for every three new delinquencies assigned to the collection programs. While the amounts owed by these taxpayers were not inconsequential, we found that, consistent with IRS’s stated collection deferral priorities, these taxpayers owed less and had been delinquent longer than other delinquent taxpayers. We estimate that the median amount owed by the taxpayers for whom collection action was deferred was about $4,500, compared with $5,500 for other delinquent taxpayers in the collection population. Also, the taxpayers for whom collection action was deferred tended to have been delinquent for a longer period of time—about an estimated 5.6 years versus an estimated 3.9 years. A number of factors contributed to the decline in compliance and collection programs. Generally, IRS faced overall staffing declines while it confronted several competing and growing workload demands. Overall, aggregate staffing, measured by full-time equivalents, was about 107,000 in fiscal year 1996 and about 98,000 in fiscal year 2001—about an 8 percent decline; individual income tax returns filed increased from about 119 million in fiscal year 1996 to about 130 million in fiscal year 2001—about a 9 percent increase; and business income returns (corporate and partnership), which are filed by taxpayers that have more complex dealings with IRS, increased by 17 percent, from 6.5 million returns in 1996 to 7.6 million returns in 2001.
While overall staffing declined about 8 percent, the impacts on almost all of the compliance and collection programs were generally much larger, as shown in tables 1 and 2. According to IRS senior officials, to assure that the tax returns filed by taxpayers are processed timely and that timely payments are made to taxpayers owed refunds, IRS first allocated its resources to meet the returns processing program’s increasing workload before it funded the other compliance and collection programs. Also, the officials provided data that showed that at the beginning of the six-year period, IRS was adjusting down from compliance and collection staffing increases during the late 1980s and early 1990s. Comparing IRS data on professional staff levels for audit and field collection in fiscal year 2001 with data on the pre-1987 buildup shows a decline of about 21 percent. Also during this period, the IRS Restructuring Act, enacted in 1998, provided additional rights for taxpayers and imposed additional administrative responsibilities on IRS’s compliance and collection programs. For example, prior to IRS’s using enforcement sanctions to collect unpaid taxes, additional notifications and opportunities for appeals were required to be provided to taxpayers. Also, compliance and collection staff were required to keep records of contacts with third parties and to make taxpayers aware of such contacts. Further, collection staff were required to prepare additional documentation, such as certifications that they had verified that the taxes were past due and that the sanctions were appropriate given the taxpayers’ circumstances, and to submit that documentation to a higher-level manager for review and approval. 
Deviations from this and other requirements of the act may subject compliance and collection staff to disciplinary action, including mandatory termination of employment, for actions such as willfully not obtaining certain required approval signatures or for actions constituting taxpayer harassment. According to some senior officials, the potential for disciplinary action has resulted in IRS compliance and collection staff’s working more slowly and hesitantly, spending much more time documenting their actions. In addition, the act mandated that IRS improve service to taxpayers, such as telephone assistance; following this mandate, IRS undertook a major organizational restructuring and modernization effort. In response to these demands, and with a declining pool of staff resources, IRS reallocated staff from compliance (other than returns processing) and collection programs to provide additional support to taxpayer assistance services. Some of the reallocation was accomplished by allowing attrition to occur without hiring replacement staff for compliance and collection programs, and some was accomplished by temporarily detailing compliance and collection staff to other IRS programs. For example, the percentage of field collection professional staff time detailed to supplement taxpayer assistance staff during the tax filing season—in large part assisting taxpayers who requested assistance at IRS offices—grew from about 4 percent of collection time in fiscal year 1996 to about 14 percent in fiscal year 2000 before dropping to 5 percent in 2001. Staff time charged to compliance and collection programs between fiscal years 1996 and 2001 declined in all but one program (i.e., returns processing) and in several instances by 20 percent or more, as shown in tables 1 and 2.
According to IRS officials, the demands on resources also affected productivity as indicated by the number of cases closed per staff hour of compliance and collections staff time. The officials said that some of the IRS Restructuring Act requirements, such as suspending collection action to provide time for additional notifications and appeal hearings, increased the amount of staff time and calendar time required to close a case. They also noted that some of the potentially available staff time was consumed in training for the new requirements. In addition, according to IRS officials, from 1996 through 2001 the complexity of cases worked by compliance and collection staff had changed, requiring more time to complete cases. For example, many erroneous claims for tax credits that had been handled by audit prior to fiscal year 1997 were reassigned to returns processing, which could handle the claims largely on an automated basis. IRS officials did not provide us with any quantitative analysis that distinguished between the effects of the IRS Restructuring Act and those of other factors influencing productivity. On the basis of the data available to us, we could not discern the extent to which the changes in productivity were attributable to the act or to other factors. The declines in IRS’s compliance and collection programs affected taxpayers in several ways. Our analysis showed that noncompliance was less likely to be detected by compliance programs and pursued or sanctioned by collection programs. Also, the length of time that taxpayers had owed back taxes when they were assigned to collection increased between fiscal years 1996 and 2001, although IRS intended that by deferring collection action on some older collection cases, it could get to newly assigned cases quicker. For the deferred cases, penalties and interest continue to accumulate, making future payment of those assessments increasingly demanding. 
Taken together, these changes have reduced the incentives for voluntary compliance, a concern of IRS senior managers. Some available, but very limited, data suggest that voluntary compliance may have begun to deteriorate. The data presented earlier on the changes in IRS’s compliance and collection programs showed that the likelihood that taxpayer noncompliance would be detected and pursued by IRS declined between fiscal years 1996 and 2001. For example, in situations where IRS had information that a tax return was due but not filed, the rate of IRS compliance follow-up declined about 69 percent. In situations where IRS had information that a tax return understated the amount of taxes owed, the decline in follow-up was about 29 percent. Moreover, even when compliance follow-up took place and the taxpayers were found to owe back taxes, because of IRS’s practice of deferring collection action, the taxpayers had about a one in three chance of not being pursued by IRS collection staff. And, if pursued, the delinquent taxpayers were about 64 percent less likely to experience an enforced collection action such as the levying of their assets. These changes reduced the incentives to comply with the tax laws. Although IRS intended that by deferring collection action on some tax debts it would be able to initiate collection action for some higher-priority cases sooner, our random samples showed that the median length of time that taxpayers had owed back taxes at the time they were assigned to collection increased between 1996 and 2001. We estimate these increases as follows: Taxpayers who were assigned to collection as of the end of fiscal year 1996 had owed back taxes for about 1.2 years when they were assigned to telephone or field collections. 
Taxpayers who were assigned to collection as of the end of fiscal year 2000 had owed back taxes for about 1.3 years when they were assigned to telephone or field collections. Taxpayers who were assigned to collection as of the end of fiscal year 2001 had owed back taxes for about 1.6 years when they were assigned to telephone or field collections. On the basis of our analysis of randomly sampled collection case files related to taxpayers who were delinquent at the end of fiscal year 2000 (2001 case files were not available at the time of our field work), we noted that much of this timeframe was attributable to the concluding of interim matters—for example, resolving questions on the amount of the taxpayers’ tax liability or providing time for the taxpayers to make periodic payments. When we factored in the time taken to conclude these interim matters, our sample showed that, on average, the taxpayers had been potentially eligible for collection actions for about 6 months when they were contacted by collection staff. Also, on comparing the sampled collection cases that were initiated before and after IRS started deferring collection cases, we found no statistically significant difference in the timing of the collection action. Accordingly, as shown by our samples, deferring some collection cases helped IRS to keep its collection caseload from ballooning but did not improve collection timeliness. According to IRS senior officials, some recent procedural changes, designed to speed up the assignment of priority cases to collection staff, should improve the timing of the collection actions that are initiated. As expected, our random sample of unpaid tax assessments as of the end of fiscal year 2001 showed that taxpayers for whom collection action was deferred were statistically different from taxpayers who were assigned to telephone and field collection.
We estimate these differences as follows: Taxpayers who were assigned to telephone or field collection were about three times as likely to have made payments on their delinquencies during the previous year as those for whom collection action was deferred. Taxpayers for whom collection action was deferred owed about seven times as much in penalties and interest as a percentage of their income (or of payroll for businesses) as the taxpayers who were assigned to collection. Not surprisingly, these differences indicate that follow-up by telephone or field collections may have a strong impact on generating payment on tax liabilities and preventing a buildup of penalties and interest. In turn, deferring collection action to a later date would make resolution of the delinquencies more demanding on affected taxpayers. Improving voluntary compliance—the percentage of the taxes owed that taxpayers voluntarily report and pay—is a major goal of IRS’s compliance and collection programs. Although the compliance and collection programs may focus on noncompliant taxpayers, IRS believes that the deterrent effect of the programs influences the compliance of all taxpayers. Currently, IRS does not have a measure of voluntary compliance. The declines in IRS’s compliance and collection programs that occurred from fiscal year 1996 through fiscal year 2001 have reduced some of the incentives useful for (1) inducing noncompliant taxpayers to become compliant and (2) reassuring compliant taxpayers that they are not being disadvantaged by voluntarily reporting and paying the full amount of taxes that they owe. Because only a little more than two years of data were available for analyzing taxpayers for whom collection action was deferred, we were not able to determine whether the deferral will have any long-term effects on the taxpayers’ future payment compliance and on the amount of interest and penalties owed.
If no action is taken to collect the delinquent tax, however, the motivation to pay the taxes owed is reduced. Available, but very limited, data suggest that voluntary compliance may have deteriorated. For example, over fiscal years 1996 to 2001, the number of apparent nonfilers (i.e., individuals who, according to IRS document matching, had not filed tax returns) grew about three and one-half times faster than the tax filing population. Similarly, the number of apparent underreporters grew about one and one-half times faster. As discussed in the following section, compliance trends are a concern of IRS senior managers. The strategic assessments prepared by the wage and investment and small business operating divisions identified the risk of declining compliance as a major issue for IRS. These assessments, part of IRS’s new strategic planning, budgeting, and performance management process, also proposed a number of compliance and collection initiatives to address noncompliance. The operating divisions could not quantify the impact that their initiatives are expected to have on compliance, because IRS is several years away from finishing a system for making compliance estimates. However, as a partial substitute for such information, the assessments could have provided quantitative information on the expected impacts of the initiatives on compliance and collection programs. To make decisions for fiscal year 2002 and subsequent year operations, IRS implemented a new strategic planning, budgeting, and performance management process during fiscal year 2000. The process begins, as outlined in figure 2, with the operating divisions’ preparing strategic assessments. After receipt and review of the strategic assessments, the commissioner provides detailed guidance (step 2) to the operating divisions for developing their strategy and program plans (step 3).
These plans are then incorporated (step 4) into an IRS-wide performance plan (which sets out measurable objectives such as the number of audits to be done). These plans are, in turn, incorporated into IRS’s budget justification (which sets out its resource requests to Congress). The remaining steps (5 and 6) involve allocating resources across IRS divisions and programs and monitoring division adherence to the planning and budgeting decisions. According to IRS senior management, the strategic assessments are intended to provide “big picture” information for making decisions on significant operational changes. To obtain that decision-making information, senior management instructed the operating divisions to prepare brief strategic assessment documents that summarize important trends, issues, and problems facing the operating divisions and IRS and proposals for dealing with those trends, issues, and problems. The operating divisions were instructed to describe the trends, issues, and problems, using quantifiable, measurable data when possible. Also, in proposing changes, the operating divisions were to determine the most critical trends requiring attention by considering their impact on the achievement of IRS’s goals. These goals included increasing taxpayer compliance and increasing the fairness of the compliance programs. The planning process helps IRS to implement the Government Performance and Results Act (Results Act). The act’s goal was to improve the management of federal programs by having federal agency decision making focus on impacts (i.e., the measurable results achieved by their programs). The agencies were required to periodically develop strategic plans, identify measures for assessing progress in achieving plan goals, and use the measures to report on the progress in meeting plan goals. Operationalizing the act’s mandate was left to the agencies. 
We have reported in the past that IRS’s approach, designed to reconcile competing priorities and initiatives with the realities of available resources, has helped it to make progress in defining its strategic direction. In addition, IRS’s strategic plans and budgets are reviewed by an oversight board before they are submitted to the Congress. The board was established by the IRS Restructuring Act as a means of providing Congress with advice on IRS’s strategic plans and budget. In the strategic assessments that we reviewed, both the wage and investment and small business operating divisions recognized that declines in their compliance and collection activities created a risk that taxpayer compliance could be negatively affected. Some of the identified risks included the potential for decreased tax collections, potential for increased numbers of nonfilers, and potential for increased underreporting of taxes owed. To counter declines in both compliance and collection activities and to deal with the potential risks, the IRS operating divisions identified a number of changes warranting priority attention, including the need to reengineer the audit process, reengineer the collection process, reevaluate the telephone collection selection criteria for individuals, use more document matching to identify underreporters, and increase audit and underreporter program resources. The proposed initiatives have the potential both to increase compliance and collection activities and to rebalance those activities. For example, if productivity gains result from collection process reengineering, the collection staff will be able to close additional delinquency cases. Also, according to IRS officials, additional delinquencies could possibly be closed by outsourcing some collection activities. The officials indicated that outsourcing is an issue being studied by the collection reengineering team. 
Because of a lack of information, the operating divisions’ strategic assessments could not quantify the impact that their changes may have on taxpayer compliance. Currently, IRS lacks reliable data on the voluntary compliance rate and information on how IRS’s compliance and collection programs influence that rate. In May 2000, IRS established a research office to develop a new approach for measuring compliance. However, IRS will not have new data on the compliance rate for individuals and businesses for several years, and it could take even longer to develop estimates of how compliance and collection programs influence the rate. The assessments that we reviewed, however, missed opportunities to at least partially compensate for the lack of quantitative estimates of the impact that the proposed changes would have on compliance. Some examples of the type of quantitative estimates that could be provided are suggested by the information that we presented in the first two sections of this report. Such quantitative estimates might include the impact on compliance and collection workload, coverage, cases closed, staffing, productivity, dollars of unpaid taxes identified, and percentage of taxes resolved. Other quantitative estimates could address the benefits and costs associated with the proposed changes. Although such estimates would be only a partial substitute for an estimate of the impact on the ultimate result, compliance, they would provide quantitative information about the expected impact on the declines in compliance and collection programs and the growing gap between them. Senior managers told us that the strategic assessments were important, providing the starting point in the management decision-making chain for rationing IRS’s limited resources to the most important priorities.
The officials also said that even if the basis for some aspects of strategic decision making, such as the balance between compliance and collection programs, were not explicitly addressed in the strategic assessments, they believed that sound decisions had been made as a result of IRS’s implementing the new strategic planning process. For example, they indicated that the collection reengineering effort had the potential to affect the balance between collection and compliance activities. They said that as managers gained more experience with the process, their strategic assessment reports would improve. Estimating the impacts of proposed initiatives would have some costs and could lengthen a document intended to be “strategic” and therefore brief. However, both IRS guidance and the Results Act emphasize the value of quantitative information related to performance, especially the impact or results that programs were achieving. Some quantitative information about the expected impacts of proposed program changes could provide IRS senior managers a fuller understanding of the trade-offs involved in planning the allocation of IRS resources to compliance and collections programs. In addition, quantitative information might have other benefits. Internally, it might provide lower-level managers not directly involved in strategic decision making a better understanding of the reasons for decisions and expected results. Externally, quantitative information from the strategic assessments might facilitate decision making by Congress, the oversight board, and others. For example, quantitative information from the strategic assessments could be incorporated into documents going to Congress, such as the annual budget request. The commissioner and senior managers recognize that the declines in IRS’s compliance and collection programs are a strategic problem that puts a major part of the agency’s mission, ensuring compliance with the tax laws, at risk. 
Problems of this magnitude, involving the level of IRS resources and the allocation of those resources within the agency, must be dealt with by top management and external stakeholders including Congress and the oversight board. To facilitate such decision making, IRS has implemented its new strategic planning, budgeting, and performance management process. Strategic assessments are the basis for the process and, by extension, are also part of the basis for decisions about IRS’s budget and strategy made by Congress and others. We support IRS’s new approach to strategic planning, an approach that seeks to integrate planning and budgeting based on quantifiable information with management decision making. We also recognize that IRS’s strategic assessments, and thus strategic planning, are constrained by the absence of data on the impact that IRS operations have on taxpayer compliance. Nonetheless, opportunities exist to make the strategic assessments more informative. How much quantitative information should be provided in strategic assessments is a decision for the primary users of the assessments, IRS’s top managers. Making the strategic assessments more quantitative will not resolve IRS’s strategic problems but could contribute more information to the decision-making process. Based on experience to date using strategic assessments, we recommend that the commissioner of internal revenue reexamine the extent to which some quantitative information on the impact of proposed program changes should be included in strategic assessments. The commissioner of internal revenue provided written comments on a draft of this report in a May 13, 2002, letter, which is reprinted in appendix II. The commissioner agreed with our findings and recommendation and described some ongoing efforts to improve productivity and to reverse declines in compliance and collection programs. 
He said that steps were being taken to increase the use of quantitative data in strategic decision making, including the development of a methodology for assessing costs and benefits that will be refined as IRS proceeds through future planning cycles. As arranged with your office, we plan no further distribution of this report until 30 days from its date of issue, unless you publicly announce its contents earlier. After that period, we will send copies to Representative William M. Thomas, chairman, and Representative Charles B. Rangel, ranking minority member, House Committee on Ways and Means; Representative William J. Coyne, ranking minority member, Subcommittee on Oversight, House Committee on Ways and Means; and Senator Max Baucus, chairman, and Senator Charles E. Grassley, ranking Republican member, Senate Committee on Finance. We will also send copies to the Honorable Paul H. O’Neill, secretary of the treasury; the Honorable Charles O. Rossotti, commissioner of internal revenue; the Honorable Mitchell E. Daniels, Jr., director, Office of Management and Budget; and other interested parties. Copies of this report will be made available to others on request. In addition, the report will be available at no charge on the GAO Web site (www.gao.gov). If you have any questions, please contact me or Thomas Richards at (202) 512-9110. Key contributors to this report are acknowledged in appendix III. As requested, the objectives of GAO’s review were to describe the changes since 1996 in IRS’s compliance and collection programs, including the extent of collection deferrals, and the factors contributing to the program changes; determine how the program changes have affected taxpayers, including their compliance with tax laws, the buildup of penalties and interest, and the length of time before collection actions are initiated; and determine how IRS addressed the program changes, including their effect on taxpayers, in its strategic assessments.
We first identified IRS’s major compliance and collection programs. On the basis of our analysis of IRS data system reports and discussions with IRS officials, we identified eight such programs. In general, compliance programs are designed to assure that taxpayers fairly and accurately report and pay the amount of taxes that they owe. Collection programs are to follow up with taxpayers to obtain payment and initiate enforcement action if taxpayers become delinquent by not paying their tax after being sent notices. Descriptions of the compliance and collection programs appear in table 4. As indicated by table 4, the separation point between compliance and collection programs is the point at which a taxpayer is determined to be delinquent. At that point, the taxpayer has not paid taxes after being notified by the compliance component of the amount due and after being sent collection notices—usually three or more to individuals and two to businesses. To measure changes across the eight programs, now managed by four divisions, we identified performance indicators that would be common to their operations and that, in the aggregate, would provide an overview of the long-term direction of IRS’s compliance and collection programs. The indicators were not intended to provide a comprehensive evaluation of program performance. Rather, the indicators were to provide a general overview of changes in compliance and collection workload, staff resources committed to dealing with the workload, productivity of the resources, work completed, extent of the workload addressed, and work outcomes in terms of unpaid taxes identified and unpaid taxes resolved. Table 5 provides a general description of the seven performance indicators, together with a more detailed description of the indicators as they apply to the eight compliance or collection programs.
To compile the performance data related to these performance indicators, we substantially relied on the data output from various IRS information systems. For example, IRS uniformly collects data on resources, typically staff hours or staff years used, as program input. We did not evaluate the internal controls over the collection and processing of this IRS information system data. Because IRS managers routinely use the information from these systems to manage program operations, we believe that the information is appropriate for use in compiling an overview of program changes. The routine output from the IRS data systems alone, however, was not sufficient to provide data on the following: 1. The nonfiler program. Because IRS’s nonfiler program involved cases managed by several IRS divisions, we worked with IRS managers to consolidate data from a number of different sources to compile the trend data. In doing this, we obtained data from IRS’s Audit Management Information System, collection reports prepared through IRS’s Integrated Data Retrieval System, and supplemental data prepared by IRS’s Nonfiler Program office in response to internal IRS requirements. 2. The amount of additional tax assessments made by the returns processing program. To develop data on compliance assessments made by returns processing, we obtained a data extract from IRS’s Enforcement Revenue Information System (ERIS). That system was designed by IRS to accumulate collection data for assessments that result from IRS’s compliance work. From the ERIS data extract, we first derived data to indicate the amount of additional tax assessments made by returns processing compliance work, such as identifying returns with a balance due or making assessments to correct errors or omissions identified on the tax returns. 
To do this, given the manner in which IRS accounts for the assessments, we identified the total assessments made by returns processing, other than returns that were filed with full payment or with no errors requiring an IRS notice; identified the amount of payment made on the accounts prior to IRS’s notifying the taxpayer of the amount due; and subtracted the prenotification payments from the total assessments. We subtracted the prenotification payments from the total assessments in order to eliminate taxes that were voluntarily reported and paid by taxpayers. 3. The amount of unpaid tax assessments resolved by the five compliance programs. From the ERIS data, we also derived data to compute the percentage of unpaid tax assessments that were collected by the five compliance programs. We included as collections any payments made by the taxpayers in response to written notices sent to the taxpayers. Data were not available to extend the analysis period for collections for fiscal years beyond 2001. Therefore, to ensure comparable collection data on the proportion of fiscal year 1996 and 2001 assessments collected by the assessing compliance program, we limited the collection period to the fiscal year of the assessment. We also interviewed officials from IRS’s operational divisions responsible for wage earners, small businesses, and large and midsized businesses to obtain an understanding of compliance and collection programs and to discuss reasons for compliance and collection trends. We provided our performance trends and supporting computations to IRS staff, who reviewed and commented on our analyses. To determine how these changes have affected taxpayers, we examined two samples of IRS data. 
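The step 2 derivation described above—total returns processing assessments, excluding full-paid or no-error returns, minus prenotification payments—amounts to simple arithmetic over the extract records. The sketch below is illustrative only; the record layout and field names are hypothetical, not the actual ERIS extract format.

```python
# Hypothetical sketch of the step 2 derivation: additional tax assessments
# made by returns processing equal total assessments (excluding returns
# filed with full payment or with no errors requiring a notice) minus
# payments made before IRS notified the taxpayer of the amount due.
# Record fields are illustrative, not the actual ERIS layout.

def additional_assessments(records):
    total = 0.0
    for rec in records:
        # Exclude returns filed with full payment or with no errors
        # requiring an IRS notice.
        if rec["full_paid_or_no_error"]:
            continue
        # Subtract prenotification payments to eliminate taxes that were
        # voluntarily reported and paid by the taxpayer.
        total += rec["assessment"] - rec["prenotification_payment"]
    return total

sample_extract = [
    {"assessment": 1000.0, "prenotification_payment": 400.0, "full_paid_or_no_error": False},
    {"assessment": 500.0, "prenotification_payment": 0.0, "full_paid_or_no_error": False},
    {"assessment": 800.0, "prenotification_payment": 800.0, "full_paid_or_no_error": True},
]
```

For the three illustrative records, the derivation attributes $1,100 (that is, $600 plus $500) to returns processing compliance work; the third record is excluded entirely.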
The first sample from IRS’s automated masterfile records of unpaid tax accounts provided the data to examine overall changes in the number of accounts with unpaid balances; changes in the characteristics of delinquent taxpayers, such as the amounts of interest and penalties owed; and the age of the accounts. In order to examine contacts between taxpayers and the IRS and events affecting the timeliness of resolution, we examined a sample of taxpayer collection case files. To analyze the delinquent taxpayer characteristics, we selected a random sample of taxpayers who had an unpaid tax assessment outstanding at the end of fiscal years 1996, 1998, 2000, and 2001. In developing this sample, we partitioned the population into different groups, or strata, based on the collection status of their modules on September 30 of each year and by type of taxpayer (i.e., individuals and businesses). We stratified this sample to ensure that taxpayers at different stages in IRS’s collection process were represented. Once we had selected a sample of taxpayers, we also obtained information on recent payments made to IRS by those taxpayers and information from recent tax returns filed by the taxpayers and posted to IRS data systems. To review this sample, we used analysis software to produce statistically reliable estimates of the characteristics of the population of taxpayers whom IRS had identified as not having paid their taxes as of the end of the four fiscal years. To examine data on collection actions taken with respect to taxpayers— such as the length of time from when the delinquencies became available for assignment to telephone or field collection staff to when the taxpayers were contacted by the collection staff—we used a random sample that was taken as part of our audit of IRS’s financial statement for fiscal year 2000. This sample consisted of randomly selected unpaid tax assessments that were owed by 520 taxpayers. 
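The stratified design described above—partitioning taxpayers by collection status and taxpayer type before drawing random samples—can be sketched as follows. The stratum labels, record fields, and per-stratum sample size are illustrative assumptions, not the actual sampling plan.

```python
# Hypothetical sketch of stratified random sampling as described above:
# partition the population into strata by collection status and taxpayer
# type, then draw a random sample within each stratum so that taxpayers
# at different stages of the collection process are represented.
# Stratum labels and sample sizes are illustrative.
import random
from collections import defaultdict


def stratified_sample(population, n_per_stratum, seed=0):
    """population: list of dicts with 'collection_status' and 'taxpayer_type' keys."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for taxpayer in population:
        key = (taxpayer["collection_status"], taxpayer["taxpayer_type"])
        strata[key].append(taxpayer)
    sample = []
    for members in strata.values():
        # Sample without replacement, capped at the stratum size.
        sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
    return sample
```

Because each stratum is sampled separately, even a small stratum (for example, deferred business taxpayers) is guaranteed representation that a simple random sample of the whole population might miss.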
From this sample, we reviewed IRS collection case file documentation on 108 taxpayers, that is, those taxpayers who had been assigned to either telephone or field collection and about whom IRS had sufficient collection case file documentation for us to analyze. IRS had limited the accumulation of case files to those relevant to estimation of the collectibility of its accounts receivable. We analyzed these cases with two different data collection instruments. The first captured (1) dates that distinct field collection phases started and ended, (2) occurrences of IRS collection contacts or attempted contacts with taxpayers, and (3) the disposition of cases when the collection phases ended. The second data collection instrument captured information from IRS’s masterfile records on these taxpayers. This information described the number of delinquencies, type of taxes owed, and dollar amounts. It also provided a history of the collection-related transactions (i.e., payments, defaults on installment agreements, or litigation pending) for the taxpayer. Because our estimates come from random samples, there is some sampling error associated with them. We express our confidence in the precision of our results as a 95 percent confidence interval around the estimate. For example, for the estimate of 1.3 million taxpayers, we are 95 percent confident that the actual value falls between 1.25 million and 1.35 million taxpayers. All percentage estimates from the samples have sampling errors of ±5 percentage points or less, unless otherwise shown in footnotes to the report text. All numerical estimates other than percentages have sampling errors of ±5 percent or less of the value of those numerical estimates, unless otherwise shown in footnotes to the report text.
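The 95 percent confidence interval quoted above follows the usual normal-approximation form, point estimate ± 1.96 standard errors. A minimal sketch is below; the standard error is a hypothetical value chosen only so that the interval matches the one quoted in the text.

```python
# 95 percent confidence interval around a point estimate, using the
# normal approximation. The standard error here is illustrative, chosen
# to reproduce the interval cited in the text (1.25M to 1.35M).
Z_95 = 1.96  # critical value for a two-sided 95 percent interval

def confidence_interval(estimate: float, standard_error: float, z: float = Z_95):
    half_width = z * standard_error
    return estimate - half_width, estimate + half_width

low, high = confidence_interval(1_300_000, 25_510)  # roughly 1.25M to 1.35M
```

The same form underlies the ±5 percentage-point and ±5 percent bounds stated for the other estimates.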
With respect to objective 3, we reviewed the strategic assessments made by the IRS operating division responsible for individual taxpayers other than the self-employed and by the operating division responsible for small businesses and self-employed individuals. The assessments were made during the first half of fiscal year 2001 for consideration by senior management in developing strategy and program plans for fiscal years 2002 and 2003. We reviewed the strategic assessments along with IRS instructions for preparation of the assessments. Our review of these documents focused on identifying how the strategic assessments addressed the compliance and collection trends and taxpayer impacts that we identified in response to objectives 1 and 2. We also interviewed IRS strategic planning, small business, and wage and investment officials responsible for developing and monitoring strategic plans. We did not evaluate the strategic assessments’ selection of IRS’s priorities, proposed improvement projects, and resources needed to implement the projects. Evaluating the operating divisions’ plans would require an assessment of IRS’s entire strategic planning process, which was outside the scope of this assignment. We performed our work at IRS’s national office in Washington, D.C.; IRS’s Kansas City Submissions Processing Center, Missouri; and IRS’s Oakland, California, area office between October 2000 and April 2002 and in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the commissioner of internal revenue. We received written comments from the commissioner on May 14, 2002. The comments are reprinted in appendix II and discussed in our report. In addition to those named above, Leon Green, Mary Jankowski, John Mingus, Sam Scrutchins, Anjali Tekchandani, Thom Venezia, Wendy Ahmed, Cheryl Peterson, Susan Baker, Anne Laffoon, Kristina Boughton, and Avram Ashery made key contributions to this product.
For several years, Congress and others have been concerned about declines in the Internal Revenue Service's (IRS) compliance and collection programs. Taxpayers' willingness to voluntarily comply with the tax laws depends in part on their confidence that their friends, neighbors, and business competitors are paying their share of taxes. GAO found large and pervasive declines in five of the six compliance programs and in both collection programs between fiscal years 1996 and 2001. Factors contributing to the declines in the programs and in collection coverage include declines in IRS staffing, increased workloads, and increased procedural controls mandated by Congress to better safeguard taxpayer interests. The declines in IRS's compliance and collection programs had several impacts. The likelihood that taxpayer noncompliance would be detected and pursued by IRS declined, and the length of time that taxpayers owed back taxes at the time that they were assigned to collection increased between 1996 and 2001. The amount of penalties and interest continued to accumulate on deferred collection cases, making future payment increasingly demanding if subsequently pursued by IRS. Strategic assessments, which were prepared to provide a basis for decisions on significant program changes in IRS dealings with individual and small business taxpayers, identified the risk of declining compliance as a major trend, issue, or problem for IRS. The assessments could not quantify the impact that the initiatives may have on taxpayer compliance because IRS has yet to implement a system to measure taxpayer compliance.
The Homeland Security Act of 2002 created DHS and brought together the workforces of 22 distinct agencies governed by multiple legacy rules, regulations, and laws for hundreds of occupations. The department’s 216,000 employees include a mix of civilian and military personnel in occupations ranging from law enforcement, science, and technology to professional, administrative, clerical, trade, and craft positions. DHS has a vital role in preventing terrorist attacks, reducing our vulnerability to terrorism, and minimizing the damage and facilitating the recovery from attacks that do occur. The National Strategy for Combating Terrorism calls on all government agencies to review their foreign language programs. Further, the National Strategy for Homeland Security articulates activities to enhance government capabilities, including prioritizing the recruitment and retention of those having relevant language skills at all levels of government. The 9/11 Commission, a statutory bipartisan commission created in 2002, concluded in 2004 that significant changes were needed in the organization of government, including acquiring personnel with language skills and developing a stronger language program. DHS has a variety of law enforcement and intelligence responsibilities that utilize foreign language capabilities. For example, DHS undertakes immigration enforcement actions involving thousands of non-English-speaking foreign nationals and conducts criminal investigations that cross national borders, among other things. Foreign language capabilities help DHS to identify and effectively analyze terrorist intent in operations such as conducting investigations and dismantling criminal organizations that illegally transport persons and goods across the borders. DHS also reports that foreign language capabilities enhance its ability to more effectively communicate with persons who do not speak English to collect and translate intelligence information related to suspected illegal activity.
At the component level, Coast Guard, CBP, and ICE are among DHS’s largest components with law enforcement and intelligence responsibilities that can make use of foreign language capabilities. Table 1 briefly describes the law enforcement and intelligence roles and responsibilities of these components. OCHCO is responsible for departmentwide human capital policy and development, planning, and implementation. In this role, OCHCO works with the components to ensure the best approach for the department’s human capital initiatives. Specifically, OCHCO establishes DHS-wide policies and processes and works with components to ensure that the policies and processes are followed in support of mission success. Additionally, OCHCO provides strategic human capital direction to and certification of departmental programs and initiatives, such as DHS’s foreign language capabilities. The Coast Guard is a multi-mission agency, the only military agency within DHS, and serves as the lead agency for maritime homeland security, enforcing immigration laws at sea. In support of DHS’s mission to control U.S. borders, the Coast Guard’s Ports, Waterways, and Coastal Security mission goal is to manage terror-related risk in the maritime domain. Additionally, its responsibilities include (1) interdicting undocumented persons attempting to illegally enter the United States via the maritime sector and (2) boarding vessels to conduct inspections and screenings of crew and passengers in its attempt to reduce the number of illegal passenger vessels entering the United States, among other things. For example, Coast Guard Maritime Safety and Security Teams conduct patrols and monitor migration flow from countries neighboring the Caribbean Basin, including Colombia, Venezuela, Haiti, and the Dominican Republic.
In fiscal year 2009, the Coast Guard increased its presence in the vicinity of Haiti to deter mass migration and interdicted nearly 3,700 undocumented persons attempting to illegally enter the United States. Additionally, during fiscal year 2009, the Coast Guard reported screening over 248,000 commercial vessels and 62 million crew and passengers for terrorist and criminal associations prior to arrival in U.S. ports, identifying 400 individuals with terrorism associations. The Coast Guard conducts approximately 10,000 law enforcement boardings each year while interdicting drugs in the southern Caribbean, where it is likely to encounter non-English speakers. CBP is the federal agency in charge of securing U.S. borders, and three of its offices—the Offices of U.S. Border Patrol, Air and Marine, and Field Operations—share a mission of keeping terrorists and their weapons from entering the United States while carrying out their other responsibilities, including interdicting illegal contraband and persons seeking to enter at and between U.S. ports of entry while facilitating the movement of legitimate travelers and trade. CBP regularly engages with foreign nationals in carrying out its missions and is DHS’s only component authorized to make final admissibility determinations regarding arrivals of cargo and passengers. CBP reports that it has direct contact with approximately 1 million people crossing through ports of entry each day. It is through these contacts that CBP is likely to encounter non-English speakers. As a result, foreign language skills are needed to assist CBP federal law enforcement officers in enforcing a wide range of U.S. laws. In 2009, CBP encountered over 224,000 undocumented immigrants and persons not admissible at the ports of entry.
CBP employs over 45,000 people, including border patrol agents stationed at 142 stations with 35 permanent checkpoints, Air and Marine agents and officers, and CBP officers and agriculture specialists stationed at over 326 ports of entry located at airports, seaports, and land borders along more than 5,000 miles of land border with Canada, 1,900 miles of border with Mexico, and 95,000 miles of U.S. coastline. Border patrol agents work between the ports of entry to interdict people and contraband illegally entering the United States. CBP’s Office of Air and Marine manages boats and aircraft to support all operations to interdict drugs and terrorists before they enter the United States. CBP officers work at foreign and domestic ports of entry to prevent cross-border smuggling of contraband, such as controlled substances, weapons of mass destruction, and illegal goods. ICE is the largest investigative arm of DHS, with more than 20,000 employees worldwide. ICE has immigration and customs authorities to prevent terrorism and criminal activity by targeting people, money, and materials that support terrorist and criminal organizations. ICE and three of its offices—the Offices of Detention and Removal Operations, Investigations, and Intelligence—are responsible for identifying, apprehending, and investigating threats arising from the movement of people and goods into and out of the United States. In fiscal year 2009, the Office of Detention and Removal Operations completed 387,790 removals, 18,569 more than in fiscal year 2008. ICE’s Office of Investigations investigates a broad range of domestic and international activities arising from illicit movement of people that violates immigration laws and threatens national security. For example, investigations where there is a potential use of foreign language capabilities include those for human trafficking and drug smuggling, illegal arms trafficking, and financial crimes. In 2009, ICE initiated 6,444 investigations along U.S. borders.
ICE’s Office of Intelligence is responsible for collecting operational and tactical intelligence that directly supports law enforcement and homeland security missions. Strategic workforce planning helps ensure that an organization has the staff with the necessary skills and competencies to accomplish strategic goals. We and OPM have developed guidance for managing human capital and developing strategic workforce planning strategies. Since 2001, we have reported strategic human capital management as an area with a high risk of vulnerability to fraud, waste, abuse, and mismanagement. In January 2009, we reported that while progress has been made in the last few years to address human capital challenges, ample opportunities exist for agencies to improve in several areas. For example, we reported that making sure that strategic human capital planning is integrated with broader organizational strategic planning is critical to ensuring that agencies have the talent and skill mix they need to address their current and emerging human capital challenges. Our and OPM’s workforce planning guidance recommends, among other things, that agencies (1) assess their workforce needs, such as their foreign language needs; (2) assess current competency skills, such as foreign language capabilities; and (3) compare workforce needs against available skills to identify any shortfalls, such as those related to foreign language capabilities. DHS has taken limited actions to assess its foreign language needs and capabilities and to identify potential shortfalls. DHS’s efforts could be strengthened if the department conducted a comprehensive assessment of its foreign language needs and capabilities and used the results of this assessment to identify any potential shortfalls. By doing so, DHS could better position itself to manage its foreign language workforce needs to help fulfill its organizational missions.
DHS has not comprehensively assessed its foreign language needs because, according to DHS senior officials, there is no legislative directive for the department to assess its needs for foreign languages. As a result, DHS lacks a complete understanding of the extent of its foreign language needs. According to DHS officials, the department relies on the individual components to address their foreign language needs. However, while some DHS components have conducted various foreign language assessments, these assessments are not comprehensive and do not fully address DHS’s foreign language needs for select offices or programs consistent with strategic workforce planning. Specifically, the components’ foreign language assessments address primarily Spanish language needs rather than comprehensively covering other potential foreign language needs their workforces are most likely to encounter in fulfilling their missions. While DHS’s Human Capital Strategic Plan discusses efforts to better position the department to have the right people in the right jobs at the right time, DHS has not linked these efforts to addressing its workforce’s foreign language needs. DHS’s strategic plan acknowledges the department’s multifaceted workforce and the complexity of DHS operations, and envisions “a department-wide approach that enables its workforce to achieve its mission,” but it does not discuss how its planned efforts will help ensure that the workforce’s foreign language needs are met. Further, the DHS Quadrennial Homeland Security Review, which was completed in February 2010, does not address foreign language capabilities and needs. The Implementing Recommendations of the 9/11 Commission Act of 2007 called for each quadrennial review to be a comprehensive examination of the homeland security strategy of the nation, including recommendations regarding the long-term strategy and priorities of the assets, capabilities, budget, policies, and authorities of the department.
As we previously reported, strategic human capital planning that is integrated with broader organizational strategic planning is critical to ensuring that agencies have the talent and skill mix they need to address their human capital challenges. While the department states that there is no legislative directive for it to assess its foreign language capabilities and relies on the individual components, considering foreign language capabilities when setting its strategic future direction would help DHS to more effectively guide its efforts and those of its components in determining the foreign language needs necessary to achieve mission goals and address its needs and any potential shortfalls. The extent to which components have conducted assessments of their foreign language needs varies. These assessments were limited primarily to Spanish as well as the needs of the workforce in certain offices, locations, and positions rather than comprehensive assessments addressing multiple languages and needs of the workforce as a whole. Table 2 shows the various assessments that were conducted at the component level and in certain offices. Coast Guard. Since 1999, the Coast Guard has conducted three assessments that identified the need for certain foreign language capabilities, which have resulted in the Coast Guard establishing requirements for certain foreign language skills related to 12 mission-critical languages and foreign language positions for the foreign language award program. Additionally, according to the Coast Guard’s Foreign Language Program Manager, by obtaining information from Coast Guard leadership and operational units, the Coast Guard determines what languages are encountered most in the field. Further, the official stated that annual reviews are conducted to determine how best to allocate the Coast Guard’s foreign language linguist and interpreter positions.
A “linguist” is expected to use his or her foreign language skills on an almost daily basis in support of a specific function within his or her unit, while interpreting is a collateral duty that can be filled by any qualified personnel. According to Coast Guard officials, they face difficulty in meeting their foreign language needs because personnel have difficulty obtaining qualifying proficiency scores on the Defense Language Proficiency Test (DLPT). To meet foreign language program requirements, the Coast Guard uses DLPT testing results to make allocation decisions for foreign language speakers. For example, according to the Foreign Language Program Manager, at one of its offices near Brownsville, Texas, the Coast Guard has native Spanish-speaking personnel who successfully use Spanish during operations but do not score high enough on the DLPT and thus are not considered during allocation decisions for foreign language needs. CBP. CBP has conducted two assessments since 2004 that have primarily focused on Spanish language needs. CBP’s needs assessments are based on a task-based analysis. For example, CBP assessed critical tasks necessary to carry out certain operations, such as its officers requesting and analyzing biographical information from persons entering the United States and addressing suspects attempting to smuggle people, weapons, drugs, or other contraband across borders. These encounters may require foreign language skills, primarily Spanish, for offices such as the U.S. Border Patrol, the Office of Air and Marine, and the Office of Field Operations. However, CBP’s foreign language assessment for its Office of Field Operations included only those CBP officers located along the southwest border, in Miami, and in Puerto Rico, and this assessment did not include its foreign language needs in other field offices around the country. CBP’s U.S.
Border Patrol conducted similar assessments, which focused on assessing its foreign language training program, while the Office of Air and Marine’s foreign language assessment determined the extent of its Spanish language needs and, as a result, established its Spanish language training program. ICE. According to ICE officials, rather than conducting foreign language needs assessments, ICE primarily identifies its needs based on daily activities. That is, ICE relies on its agents’ knowledge of foreign languages they have encountered most frequently during their daily law enforcement and intelligence operations. However, ICE has not collected data on what those daily needs are. Without such data, ICE is not in a position to comprehensively assess its language needs. According to ICE officials, in 2007, ICE reinstated the Spanish language requirements that were in place prior to the formation of DHS for its Office of Detention and Removal Operations. Further, for its Offices of Investigations and Intelligence, it obtains foreign language interpreter services by contract for the languages needed, including Spanish. The components’ efforts to assess their foreign language needs are varied and not comprehensive. Specifically, the assessments have been limited to certain languages, locations, programs, and offices. As a result, component officials we spoke with identified foreign language needs that are not captured in these assessments, such as the following: In the five CBP and ICE offices we visited near the Mexican border, we were told that they have encountered foreign language needs for variations of Spanish language skills, such as Castilian, border, and slang Spanish (that is, Spanish dialects in certain geographic regions that use words and phrases that are not part of the official language).
According to ICE officials, in 2009, its Office of Detention and Removal Operations experienced a need for Mandarin Chinese language skills because of an influx of encounters with Chinese speakers near the Mexican border. However, CBP and ICE have not assessed their needs for Chinese speakers. In the three CBP and ICE offices we visited near the Canadian border, we were told that their encounters primarily involve Spanish, Arabic, and Quebecois French speakers. However, CBP and ICE have not assessed their needs for Arabic and Quebecois French speakers. In the seven Coast Guard, CBP, and ICE offices we visited in the Caribbean region, we were told that they primarily encounter Puerto Rican and slang Spanish, Haitian-Creole, and Patois. Although the Coast Guard has assessed its need for some of these languages, CBP and ICE have not assessed their needs in these languages. Coast Guard, CBP, and ICE offices in New York report that their primary language needs include Colombian Spanish, Arabic, Chinese, Urdu, and Fulani. Although the Coast Guard has assessed its need for these languages, CBP and ICE have not assessed their needs for Arabic, Chinese, Urdu, and Fulani. According to DHS officials, foreign language skills are an integral part of the department’s operations. Coast Guard, CBP, and ICE officials at the seven locations we visited generally agreed that a comprehensive approach to conducting a foreign language needs assessment would be beneficial. By conducting a comprehensive assessment, DHS would be in a better position to address its foreign language needs. In addition, this assessment would enable the Coast Guard, CBP, and ICE to comprehensively assess their component-level foreign language needs. DHS, including the Coast Guard, CBP, and ICE, has not comprehensively assessed its existing foreign language capabilities. However, components have various lists of staff with foreign language capabilities, as shown in table 3.
Although DHS and its components maintain these lists that identify some of their staff with foreign language capabilities, these lists generally capture capabilities for personnel in certain components or offices, primarily those that include a foreign language award program for qualified employees. These include the Coast Guard, CBP’s Office of Field Operations, and ICE’s Office of Investigations. Coast Guard. The Coast Guard, through its foreign language award program, has developed a list that identifies personnel who have certain proficiencies in one or more authorized foreign languages and meet program requirements. For example, the list identifies a Coast Guard member with a certain proficiency level in Spanish at the Miami Sector office. However, these lists contain only those personnel who have voluntarily identified themselves as speaking an authorized foreign language, have successfully met the program’s requirements, and are receiving award payments. While this list identifies some personnel who speak at least one of the 12 authorized languages, it does not account for personnel who successfully carry out an operation utilizing their foreign language skills but are unable to meet the proficiency requirements per the DLPT. According to the Foreign Language Program Manager, a challenge exists in assigning foreign language speakers while aligning their foreign language proficiencies per the DLPT to the operational needs in the field. As a result, personnel who speak a foreign language are being utilized but are not considered part of Coast Guard’s foreign language capabilities and are unable to receive foreign language award payments. In May 2010, the Coast Guard made some changes to its foreign language program and expanded compensation requirements to include other proficiency levels and award payments, which could improve its ability to identify foreign language resources that were unaccounted for prior to this change and to meet its foreign language needs. CBP.
CBP, through its foreign language award program in its Office of Field Operations, has developed a list that identifies CBP officers and agriculture specialists with a certain proficiency level in a foreign language. Additionally, it identifies those officers and agriculture specialists who (1) have received Spanish instruction through its academy, and (2) speak Spanish in certain field office locations. ICE. ICE, through its foreign language award program in its Office of Investigations, has developed a list that identifies certain agents with a certain proficiency level in a foreign language. For example, the list includes an agent with a certain proficiency level in Jamaican Patois at the New York field office. Further, although its Offices of Detention and Removal Operations and Intelligence do not have foreign language award programs, they have developed lists in their individual offices of employees with foreign language capabilities. For example, one list identifies an intelligence research specialist at the Office of Intelligence in Miami who speaks Haitian-Creole, but does not include his proficiency level. Across all three components, while certain offices have developed lists of staff with foreign language capabilities, component officials told us that their knowledge of foreign language capabilities is generally obtained in an ad hoc manner. For example, at each of the seven locations we visited, Coast Guard, CBP, and ICE officials told us that they generally do not use the lists described above to learn of their colleagues’ foreign language capabilities, but rather know of those capabilities through their current or past interactions.
For example, according to ICE intelligence analysts, existing foreign language capabilities in ICE’s Office of Intelligence are not systematically identified in the lists, but the specialists are aware of colleagues who have proficiencies in Spanish, French, Portuguese, and Haitian-Creole. Component officials stated that the inability to identify all existing capabilities may result in intelligence information potentially not being collected, properly translated, or analyzed in its proper context for additional foreign languages and thus affect the timeliness and accuracy of information. Moreover, they said that this information may be vital in tactical and operational intelligence to direct law enforcement operations and develop investigative leads. Coast Guard, CBP, and ICE staff at each of the seven locations we visited generally agreed that more detailed information on existing capabilities could help them to better manage their resources. These officials told us that while Spanish language proficiency may be identified as an existing capability, it may not always be available and generally the levels of proficiencies vary. For example, according to one ICE immigration enforcement agent in the Office of Detention and Removal Operations’ fugitive operations program, he speaks Spanish but is not proficient. He told us that there have been cases in which he needed assistance from an agent who was proficient in Spanish to converse with Spanish speakers. As the agent was not proficient in Spanish, he said he did not apprehend certain individuals because he could not communicate with them and thus was unable to verify their immigration status.
Although DHS has some knowledge of its existing capabilities in certain components and offices, conducting an assessment of foreign language capabilities consistent with strategic workforce planning—that is, collecting data in a systematic manner that includes all of DHS’s existing foreign language capabilities—would better position DHS to manage its resources. DHS, including the Coast Guard, CBP, and ICE, has not taken actions to identify potential foreign language shortfalls. Moreover, DHS’s Human Capital Strategic Plan does not include details on assessing potential shortfalls, as called for by best strategic workforce planning practices. DHS officials in OCHCO told us that in response to our review, they had canvassed the components to assess DHS’s foreign language shortfalls and that the components’ response was that they address shortfalls through contracts with foreign language interpreter and translation services. This canvassing was not based on a comprehensive assessment of needs and capabilities, which calls into question the extent to which it could comprehensively identify shortfalls. According to OCHCO officials, OCHCO plans to conduct a review and realignment of the DHS Human Capital Strategic Plan, and officials said that the plan will include more specific direction to the components on workforce planning guidance. We also found that the Coast Guard, CBP, and ICE have not taken actions to identify foreign language shortfalls. According to component officials, they face foreign language capability shortfalls that affect their ability to meet their missions. At the Coast Guard, CBP, and ICE locations we visited, 238 of over 430 staff we interviewed identified ways that foreign language shortfalls can increase the potential for miscommunication, affect the ability to develop criminal cases and support criminal charges, increase the risk of loss or delay of intelligence, and have a negative impact on officer safety.
For example, according to the Border Patrol Academy’s Spanish Language Program officials, as part of the Spanish language training, a video is shown of an actual incident in which a Texas law enforcement officer begins interviewing four Spanish-speaking individuals during a routine traffic stop. The video was recorded by the law enforcement officer’s dashboard video camera. In the video, the four suspects exit the car and begin conversing in Spanish among themselves while the officer appears to have difficulty understanding what the individuals are saying. Seconds later, the four individuals attack the officer, take his gun, and shoot him to death. As another example, an ICE special agent told us that in the course of conducting a drug bust in 1991, he had been accidentally shot by a fellow agent because of, among other things, foreign language miscommunications. According to the agent and other sources familiar with the incident, he was working as the principal undercover agent in a drug sting operation in Newark, New Jersey. At the time of the incident, prior to the formation of DHS, he was working as a U.S. Customs Service agent. The undercover operation involved meeting and communicating in Spanish with two Colombian drug dealers as part of a cocaine bust. According to the agent, there were up to 18 other federal agents involved in the operation, at least two of whom were fluent in Spanish. Further, agents were videotaping and monitoring the conversation between the federal agent and the drug dealers from a nearby command post. However, the agent told us that none of the law enforcement officers in the command post who were covertly monitoring his dialogue with the drug dealers spoke or understood Spanish. The agent stated that as a result, law enforcement officers were signaled to rush in prematurely to make the arrests. In the chaos that ensued, the agent was accidentally shot by a fellow agent and paralyzed from the chest down.
According to the agent, as well as other agents familiar with the incident, had there been Spanish-speaking officers in the command post to interpret the audio transmissions from the agent, the accidental shooting might have been avoided. By conducting an assessment of needs and capabilities, and using the results of these assessments to identify shortfalls, DHS would be better positioned to take action to mitigate these shortfalls, which will help to ensure the safety of its officers and agents as they fulfill the department’s mission. DHS has established a variety of foreign language programs; however, officials stated that they have not assessed the extent to which these programs address existing shortfalls. According to DHS officials in OCHCO, DHS’s foreign language programs are managed at the component level and are based on component operational capabilities and mission requirements. The components have established programs and activities, which consist of foreign language training, proficiency testing, foreign language award programs, contract services, and interagency agreements. Table 4 summarizes the extent to which foreign language programs and activities have been established in select Coast Guard, CBP, and ICE offices. According to DHS officials in OCHCO, decisions on whether to establish programs and activities to develop foreign language capabilities are left to the discretion of individual components and are based on component operational capabilities and mission requirements. As shown in table 4, foreign language programs and activities varied across DHS and within select DHS components. For example, four of the seven component offices we reviewed maintain Spanish language training programs, and some of these offices require that officers complete Spanish language training before they are assigned to their duty stations.
The five types of foreign language programs and activities used within and among the components are language training, proficiency testing, foreign language award programs, contract services, and interagency agreements. Spanish language training. Before officers can be assigned to their duty stations, some components require that they complete a Spanish language training program. Specifically, U.S. Border Patrol requires the completion of an 8-week task-based Spanish language training program. The Office of Field Operations has a 6-week basic Spanish training program requirement, and the Office of Air and Marine requires 6 weeks of task-based Spanish language training. The Office of Detention and Removal Operations has a requirement for a 6-week basic Spanish training program. These programs are designed to provide officers with a basic Spanish language competency. U.S. Border Patrol and Office of Air and Marine agents and officers are required to attend Spanish language training only if they do not pass a Spanish language proficiency exam. Foreign language proficiency tests. Several proficiency tests are used by different components, and the type of test that is used depends on the foreign language for which proficiency is being assessed. The Coast Guard’s proficiency test is produced by the Defense Language Institute and consists of a set of tests that include an oral interview to assess language proficiency in the skills of reading and listening. ICE’s proficiency test consists of an oral interview for all foreign languages assessed, while CBP uses a combination of both oral and automated telephone tests for assessing proficiency in similar foreign languages, such as the Spanish language. Contract services. Contract services consist of contracts held by individual components and offices for interpreter and translation services. The use of language contract services depends on the unique requirements of the operation in individual offices. For example, the U.S. 
Border Patrol provides funding for translation services and the Coast Guard contracts annually for Haitian-Creole interpreter services. Select components utilize over-the-phone language contract services, while other components also utilize in-person translation and transcription contract services. Additionally, DHS’s U.S. Citizenship and Immigration Services operates and manages the Language Services Section, comprising both intermittent and full-time language specialists who may provide assistance to some offices in CBP and ICE in certain cases. Interagency agreements. Interagency agreements consist of individual component offices establishing professional relationships with other federal, state, and local law enforcement agencies as a result of carrying out joint operations. Additionally, these agreements vary by component, office, and location, and may often depend on the extent to which other agencies in those areas work closely with DHS. The interagency cooperation we observed during our site visits largely occurs on an ad hoc basis. For example, component officials in Miami told us that local, state, and federal government officials provide translation assistance as needed without any written agreement between agencies. Foreign language award programs. The foreign language award program consists of certain DHS personnel voluntarily identified as being proficient in an authorized foreign language and meeting program requirements, including certain proficiency levels and minimum usage requirements. As shown in table 5, the usage requirement and award payment vary by component. Specifically, the Coast Guard does not have a usage requirement, while CBP and ICE offices require that certain DHS staff use the language 10 percent of the time, or 208 hours each year. The usage requirement for special interest languages is only twice per 6-month increment. 
Further, Coast Guard interpreters receive up to $200 each month and linguists receive up to $300 each month, while CBP and ICE employees can receive up to 5 percent of basic pay as an award payment. Components have established some language award programs as an incentive for certain DHS employees to develop foreign language capabilities to address components’ language needs. According to ICE officials, statutory language providing authorization for their foreign language award program is limited to those employees who meet a statutory definition of the term law enforcement officer. For example, intelligence research specialists in ICE have not been determined to meet that definition and thus are not eligible to receive award payments for their use of foreign language skills. In addition, component requirements may also affect eligibility for foreign language awards. For example, according to CBP, although U.S. Border Patrol agents are law enforcement officers, Spanish language skills are a requirement for employment in that position; therefore, agents do not receive award payments for their use of Spanish or other foreign language skills. Additionally, CBP told us that it is not opposed to assessing its options regarding foreign language needs. While DHS components have a variety of foreign language programs and activities, DHS has not assessed the extent to which these programs and activities address potential shortfalls at the department or component levels. OPM’s strategic workforce planning guidance recommends that agencies assess potential shortfalls in human capital resources, such as foreign language capability, by comparing needs against available skills. OCHCO officials told us that DHS has not performed a department-level assessment of the extent to which the programs address potential shortfalls because DHS has delegated responsibility for foreign language programs to the components.
However, we found that the Coast Guard, CBP, and ICE also have not assessed the extent to which their programs address potential shortfalls. Although foreign language programs and activities at select components contribute to the development of DHS’s foreign language capabilities, DHS’s ability to use them to address potential foreign language shortfalls varies. For example, the foreign language training programs generally do not include languages other than Spanish, nor do they include various Spanish dialects. According to several Coast Guard, CBP, and ICE officials we spoke with, their foreign language programs and activities were established to develop specific foreign language capabilities, primarily in Spanish. Officers we interviewed noted that these programs and activities generally do not account for variations of the Spanish language spoken in certain regions of the country, which can potentially have fatal consequences, particularly during undercover operations. For example, according to agents we interviewed in Puerto Rico, both agents and criminals in the Caribbean region understand the Spanish phrase “tumbarlo” to mean “kill him,” while agents from the southern border understand this phrase to mean “arrest him.” As another example of the vital role of foreign language proficiency in certain operations, we were told that foreign language capabilities in one operation enabled an agent to infiltrate a prolific drug trafficking organization. While working in a long-term drug smuggling investigation, the agent came under suspicion by members of the trafficking organization. However, the agent was able to utilize Spanish language skills and dialect to avoid being discovered as a U.S. federal agent and escape execution by his captors. Further, in certain cases, according to component officials, the programs and activities are not well suited for some operational needs.
CBP and ICE officials noted that although their foreign language training programs and activities are used for the Spanish language, they maintain a language service contract for an over-the-phone, 24-hour translation service in over 150 languages. However, according to component officials we spoke with in the Coast Guard, CBP, and ICE, this resource is limited depending on the unique requirements of operations within and among components. Specifically, the component officials said that this resource is limited because of (1) the time it can take to obtain an interpreter over the phone, (2) difficulty in relying on over-the-phone interpretation while conducting operations at sea, and (3) the inability to use an interpreter who is over the phone for an on-the-spot discussion and resolution of an issue or problem encountered in the field. For example, officials stated that during an operation in which they entered a house suspected of harboring individuals trafficked into the United States, an officer intercepted a phone call, conducted in Russian, from one of the individuals involved in this illegal activity. In other operations, according to intelligence analysts we spoke with, it is difficult or impossible to develop detainees’ trust during phone interviews to obtain intelligence. For example, according to all of the agents we interviewed, potential informants are difficult or impossible to recruit when the discussion is occurring through a third-party interpreter on the phone. Because the components have not assessed the programs and activities, they have not addressed this limitation. Furthermore, these programs and activities are managed by individual components or offices within components. According to several Coast Guard, CBP, and ICE officials, they manage their foreign language programs and activities as they did prior to the formation of DHS.
At the department level and within the components, many of the officials we spoke with were generally unaware of the foreign language programs or activities maintained by other DHS components. In addition, many of the Coast Guard, CBP, and ICE officials at all seven locations we visited stated that they relied on colleagues from current or past interactions to interpret or identify other foreign language resources. Given this decentralization, conducting an assessment of the extent to which its programs and activities address shortfalls could strengthen DHS’s ability to manage its foreign language programs and activities and to adjust them, if necessary, to address shortfalls. Since its formation in the aftermath of the September 11, 2001, terrorist attacks, DHS and three of its largest components—the Coast Guard, CBP, and ICE—have performed vital roles in carrying out a range of law enforcement and intelligence activities to help protect the United States against potential terrorist actions and other threats. To achieve its mission, it is important that DHS and its components manage their human capital resources in a way that ensures that fundamental capabilities, such as foreign language capabilities, are available when needed. Foreign language capabilities are especially important for DHS, as its employees frequently encounter foreign languages while carrying out their daily responsibilities. While DHS has taken limited actions to assess its foreign language needs and capabilities, it has not conducted a comprehensive assessment of the department’s and its components’ foreign language needs and capabilities, nor has it fully identified potential shortfalls. Further, although the Coast Guard, CBP, and ICE have a variety of foreign language programs and activities in place, they have not assessed the extent to which the programs and activities they have established address foreign language shortfalls.
As a result, DHS lacks reasonable assurance that its varied and decentralized foreign language programs and activities are meeting its needs. We have recommended that other federal agencies, including the Departments of Defense and State and the FBI, take actions to help ensure that their foreign language capabilities are available when needed. Similar opportunities exist for DHS to help ensure that foreign language capabilities are available to effectively communicate and overcome language barriers encountered during critical operations, such as interdicting the transport of contraband and other illegal activities. Comprehensively assessing its foreign language needs and capabilities and identifying any potential shortfalls and the extent to which its programs and activities are addressing these shortfalls would better position DHS to ensure that foreign language capabilities are available when needed. Further, considering the important role foreign language plays in DHS’s missions, incorporating the results of foreign language assessments into the department’s future strategic and workforce planning documents would help DHS ensure that it addresses its current and future foreign language needs. To help ensure that DHS can identify the foreign language capabilities it needs and pursue strategies that will help its workforce effectively communicate to achieve agency goals, we recommend that the Secretary of Homeland Security (1) comprehensively assess DHS’s foreign language needs and capabilities and identify potential shortfalls, (2) assess the extent to which existing foreign language programs and activities address foreign language shortfalls, and (3) ensure that the results of these foreign language assessments are incorporated into the department’s future strategic and workforce planning documents. We provided a draft of our report to the Secretary of Homeland Security for review and comment on June 9, 2010.
On June 14, 2010, DHS provided written comments, which are reprinted in appendix IV. In commenting on our report, DHS stated that it concurred with our recommendations and identified actions planned or under way to implement them. Regarding our first recommendation that DHS comprehensively assess its foreign language needs and capabilities and identify potential shortfalls, DHS concurred and stated that OCHCO will work with the Office of Civil Rights and Civil Liberties to establish the DHS Joint Task Force consisting of those components and offices that have language needs in order to identify requirements and assess the necessary skills. DHS also concurred with our second recommendation to assess the extent to which existing foreign language programs and activities address foreign language shortfalls, and stated that the DHS Joint Task Force will work to recommend a system for the department to track, monitor, record, and report language capabilities. DHS also stated that with respect to the foreign language skills required by DHS personnel stationed abroad, this task force will include the Office of International Affairs. DHS also agreed with our third recommendation to ensure that the results of these foreign language assessments are incorporated into the department’s future strategic and workforce planning documents and stated that OCHCO will ensure that DHS-wide language policies and processes are incorporated into the DHS Human Capital Strategic Plan. DHS also provided written technical comments, which we considered and incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and interested congressional committees. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-9627 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To address our first and second objectives, we reviewed operations in three Department of Homeland Security (DHS) components and seven offices. We selected the U.S. Coast Guard, U.S. Customs and Border Protection (CBP), and Immigration and Customs Enforcement (ICE) because they constitute a broad representation of program areas whose missions include law enforcement and intelligence responsibilities. We selected the Coast Guard’s Foreign Language Program Office; CBP’s Office of U.S. Border Patrol, Office of Air and Marine, and Office of Field Operations; and ICE’s Office of Detention and Removal Operations, Office of Investigations, and Office of Intelligence to ensure that we had a mix of different program sizes and a broad representation of program areas whose missions include law enforcement and intelligence responsibilities and are most likely to involve foreign nationals, foreign language documents, or both. We then selected a nonprobability sample of seven site visit locations—San Antonio and Laredo, Texas; Artesia, New Mexico; New York and Buffalo, New York; Miami, Florida; and San Juan, Puerto Rico—to identify and observe foreign language use at select DHS components. We selected these locations based on geographic regions, border locations, and language use. Although the results are not projectable, they provided us with valuable insights. 
During our site visits, we spoke to over 430 DHS staff in law enforcement and intelligence units, and observed the use of foreign language skills where foreign language capabilities are deemed vital to meeting mission requirements, including the following: We interviewed Coast Guard officials at the Command, Sector, District, and Stations and Intelligence and Enforcement representatives of the Coast Guard in New York, Miami, and San Juan. During an operational boat ride tour at Station Miami Beach, we observed an encounter involving Spanish-speaking individuals. We spoke with officials in ICE’s Detention, Fugitive, Intelligence and Criminal Alien Operations units. We also observed interviewing and processing at five detention facilities and processing centers. We interviewed ICE intelligence research specialists who were sent to the southern border and Mexico City in support of operations, including Armas Cruzadas, in 2009, and obtained information on arrests, seizures, and significant events. We also interviewed an intelligence research specialist who provided foreign language support in Spanish for ICE’s 2009 gang surge operation and an analyst who was sent to Haiti to conduct law enforcement training in the Haitian-Creole language, and obtained copies of reports needing translation. We spoke with ICE officials in the Drug Smuggling, Human Trafficking and Smuggling, Worksite Enforcement, and Immigration and Customs Fraud units. We interviewed four Title III wiretap transcription monitor linguists in San Antonio and observed a targeted area of responsibility for surveillance composed of Spanish-speaking populations that select DHS components encounter while carrying out operations in New York City. We observed “Operation-Cooperation” at the Lincoln Juarez Bridge Number 2 at the Service Port of Entry in Laredo. The operation consisted of CBP border patrol agents and customs officers conducting outbound vehicle inspections to confiscate illegal weapons and cash.
We also observed interviews and inspections, fingerprinting, and the permit/visa issuance process. We observed passenger processing and interviews conducted by a passenger analysis unit and tactical group (PAU/TAG) and passenger Enforcement Roving and Counter-Terrorist Response (CTR) teams at the Miami and San Juan international airports. We observed the Border Patrol Laredo Sector’s initial processing of illegal immigrants at the Laredo North Station by 14 Border Patrol interns (referred to as interns by the U.S. Border Patrol while receiving post-academy training in the field). In addition, we interviewed members of the Border Patrol’s International Liaison Unit, Border Intelligence Center, and Joint Terrorism Task Force in Laredo, Buffalo, Miami, and San Juan. We also interviewed officials in the Swanton Sector located on the northern border and reviewed documents on its Québécois French training initiatives. During our site visit to Artesia, New Mexico, we observed the Spanish Language Program at U.S. Border Patrol’s Law Enforcement Academy at the Federal Law Enforcement Training Center. While conducting this site visit, we interviewed officers in training and program officials and examined documentation, such as training manuals, lessons, and videos on foreign language training development. We also examined documentation on foreign language needs and capabilities, including DHS’s strategic plans for fiscal years 2004 through 2008 and 2008 through 2013, human capital plans for fiscal years 2004 through 2008 and 2009 through 2013, and the Quadrennial Homeland Security Review Report and Work Force Planning Guidance to determine whether DHS’s plans provide details on how to address actual workforce needs, such as foreign language capabilities.
Further, we interviewed knowledgeable officials in DHS’s Office of the Chief Human Capital Officer and conducted over 430 interviews with component officials (component officials consist of Coast Guard members; Border Patrol agents; Air and Marine agents and officers; CBP officers and agriculture specialists; and ICE officers, special agents, and intelligence research specialists) for all the locations we visited to determine the extent to which they have assessed their foreign language needs and existing capabilities and identified any potential shortfalls. We also interviewed these component officials and other DHS staff to determine the extent to which they have foreign language programs in place to develop operational foreign language capabilities. We compared DHS activities to our and the Office of Personnel Management’s (OPM) workforce planning criteria. We also examined and analyzed relevant studies and observed the use of foreign language proficiencies in a number of law enforcement operations. Finally, we considered our prior work on human capital strategic workforce planning related to foreign language needs and capabilities for the Departments of Defense and State and the Federal Bureau of Investigation. We conducted this performance audit from December 2008 through June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We and OPM have developed guidance for managing human capital and developing workforce planning strategies. Strategic workforce planning helps ensure that an organization has staff with the necessary skills and competencies to accomplish its strategic goals. 
Since 2001, we have reported strategic human capital management as an area with a high risk of vulnerability to fraud, waste, abuse, and mismanagement. In January 2009, we reported that while progress has been made in the last few years to address human capital challenges, ample opportunities exist for agencies to improve in several areas. For example, we reported that making sure that strategic human capital planning is integrated with broader organizational strategic planning is critical to ensuring that agencies have the talent and skill mix they need to address their current and emerging human capital challenges. We have also issued various policy statements and guidance reinforcing the importance of sound human capital management and workforce planning. Our 2004 human capital guidance states that the success of the workforce planning process that an agency uses can be judged by its results—how well it helps the agency attain its mission and strategic goals—not by the type of process used. Our 2002 strategic human capital guidance also highlights eight critical success factors in strategic human capital management, including making data-driven human capital decisions and targeted investments in people. To make data-driven human capital decisions, the guidance states that staffing decisions, including needs assessments and deployment decisions, should be based on valid and reliable data. Furthermore, the guidance states that to make targeted investments in people, organizations should clearly document the methodology underlying their human capital approaches. We have identified these factors, among others, as critical to managing human capital approaches that facilitate sustained workforce contributions. Our 2004 guidance on strategic workforce planning outlines key principles for effective workforce planning.
These principles include (1) involving management, employees, and other stakeholders in the workforce planning process; (2) determining critical skills and competencies needed to achieve results; (3) developing workforce strategies to address shortfalls and the deployment of staff; (4) building the capabilities needed to address administrative and other requirements important in supporting workforce strategies; and (5) evaluating and monitoring human capital goals. OPM has also issued strategic workforce planning guidance to help agencies manage their human capital resources more strategically. The guidance recommends that agencies analyze their workforce needs, conduct competency assessments and analysis, and compare workforce needs against available skills. Along with OPM, we have encouraged agencies to consider all available flexibilities under current authorities in pursuing solutions to long- standing human capital problems. In addition, our guidance outlines strategies for deploying staff in the face of finite resources. Federal agencies use the foreign language proficiency scale established by the federal Interagency Language Roundtable to rank an individual’s language skills. The scale has six levels from 0 to 5—with 5 being the most proficient—for assessing an individual’s ability to speak, read, listen, and write in another language. Proficiency requirements vary by agency and position but tend to congregate at the second and third levels of the scale. (See table 6.) In addition to the contact named above, William W. Crocker III, Assistant Director; Yvette Gutierrez-Thomas, Analyst-In-Charge; Stephen L. Caldwell; Wendy Dye; Rachel Beers; Virginia Chanley; Geoffrey R. Hamilton; Lara Kaskie; Adam Vogt; Robert Lowthian; Candice Wright; Mona Nichols Blake; and Minty Abraham made key contributions to this report. Language Access: Selected Agencies Can Improve Services to Limited English Proficient Persons. GAO-10-91. Washington, D.C.: April 26, 2010. 
Iraq: Iraqi Refugees and Special Immigrant Visa Holders Face Challenges Resettling in the United States and Obtaining U.S. Government Employment. GAO-10-274. Washington, D.C.: March 9, 2010. State Department: Challenges Facing the Bureau of Diplomatic Security. GAO-10-290T. Washington, D.C.: December 9, 2009. State Department: Diplomatic Security’s Recent Growth Warrants Strategic Review. GAO-10-156. Washington, D.C.: November 12, 2009. Department of State: Persistent Staffing and Foreign Language Gaps Compromise Diplomatic Readiness. GAO-09-1046T. Washington, D.C.: September 24, 2009. Department of State: Comprehensive Plan Needed to Address Persistent Foreign Language Shortfalls. GAO-09-955. Washington, D.C.: September 17, 2009. Department of State: Additional Steps Needed to Address Continuing Staffing and Experience Gaps at Hardship Posts. GAO-09-874. Washington, D.C.: September 17, 2009. Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009. Defense Management: Preliminary Observations on DOD’s Plans for Developing Language and Cultural Awareness Capabilities. GAO-09-176R. Washington, D.C.: November 25, 2008. State Department: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-07-1154T. Washington, D.C.: August 1, 2007. U.S. Public Diplomacy: Strategic Planning Efforts Have Improved, but Agencies Face Significant Implementation Challenges. GAO-07-795T. Washington, D.C.: April 26, 2007. Department of State: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-06-894. Washington, D.C.: August 4, 2006. Overseas Staffing: Rightsizing Approaches Slowly Taking Hold but More Action Needed to Coordinate and Carry Out Efforts. GAO-06-737. Washington, D.C.: June 30, 2006. U.S. 
Public Diplomacy: State Department Efforts to Engage Muslim Audiences Lack Certain Communication Elements and Face Significant Challenges. GAO-06-535. Washington, D.C.: May 3, 2006. Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: September 13, 2005. State Department: Targets for Hiring, Filling Vacancies Overseas Being Met, but Gaps Remain in Hard-to-Learn Languages. GAO-04-139. Washington, D.C.: November 19, 2003. Foreign Affairs: Effective Stewardship of Resources Essential to Efficient Operations at State Department, USAID. GAO-03-1009T. Washington, D.C.: September 4, 2003. State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. Washington, D.C.: June 18, 2002. Foreign Languages: Workforce Planning Could Help Address Staffing and Proficiency Shortfalls. GAO-02-514T. Washington, D.C.: March 12, 2002. Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002. | The Department of Homeland Security (DHS) has a variety of responsibilities that utilize foreign language capabilities, including investigating transnational criminal activity and staffing ports of entry into the United States. GAO was asked to study foreign language capabilities at DHS. GAO's analysis focused on actions taken by DHS in three of its largest components--the U.S. Coast Guard, U.S. Customs and Border Protection (CBP), and Immigration and Customs Enforcement (ICE). Specifically, this report addresses the extent to which DHS has (1) assessed its foreign language needs and existing capabilities and identified any potential shortfalls and (2) developed foreign language programs and activities to address potential foreign language shortfalls. 
GAO analyzed DHS documentation on foreign language capabilities, interviewed DHS officials, and assessed workforce planning in three components that were selected to ensure broad representation of law enforcement and intelligence operations. While the results are not projectable, they provide valuable insights. DHS has taken limited actions to assess its foreign language needs and existing capabilities and to identify potential shortfalls. GAO and the Office of Personnel Management have developed strategic workforce guidance that recommends, among other things, that agencies (1) assess workforce needs, such as foreign language needs; (2) assess current competency skills; and (3) compare workforce needs against available skills. However, DHS has done little at the department level, and individual components' approaches to addressing foreign language needs and capabilities and assessing potential shortfalls have not been comprehensive. Specifically: (1) DHS has no systematic method for assessing its foreign language needs and does not address foreign language needs in its Human Capital Strategic Plan. DHS components' efforts to assess foreign language needs vary. For example, the Coast Guard has conducted multiple assessments, CBP's assessments have primarily focused on Spanish language needs, and ICE has not conducted any assessments. By conducting a comprehensive assessment, DHS would be better positioned to capture information on all of its needs and could use this information to inform future strategic planning. (2) DHS has no systematic method for assessing its existing foreign language capabilities and has not conducted a comprehensive capabilities assessment. DHS components have developed various lists of foreign language capable staff that are available in some offices, primarily those that include a foreign language award program for qualified employees. Conducting an assessment of all of its capabilities would better position DHS to manage its resources.
(3) DHS and its components have not taken actions to identify potential foreign language shortfalls. DHS officials stated that shortfalls can affect mission goals and officer safety. By using the results of needs and capabilities assessments to identify foreign language shortfalls, DHS would be better positioned to develop actions to mitigate shortfalls, execute its various missions that involve foreign language speakers, and enhance the safety of its officers and agents. DHS and its components have established a variety of foreign language programs and activities but have not assessed the extent to which they address potential shortfalls. Coast Guard, CBP, and ICE have established foreign language programs and activities, which include foreign language training and award payments. These programs and activities vary, as does DHS's ability to use them to address shortfalls. For example, foreign language training programs generally do not include languages other than Spanish, and DHS officials were generally unaware of the foreign language programs in DHS's components. Given this variation and decentralization, conducting a comprehensive assessment of the extent to which its programs and activities address shortfalls could strengthen DHS's ability to manage its foreign language programs and activities and to adjust them, if necessary. GAO recommends that DHS comprehensively assess its foreign language needs and capabilities and identify potential shortfalls, assess the extent to which existing foreign language programs are addressing foreign language shortfalls, and ensure that these assessments are incorporated into future strategic planning. DHS generally concurs with the recommendations.
The United States began providing limited assistance to the Soviet Union in December 1990 to support the reform effort and then increased assistance after the Soviet Union dissolved in December 1991. In October 1992, the Freedom for Russia and Emerging Eurasian Democracies and Open Markets Support Act of 1992 (P.L. 102-511), commonly known as the Freedom Support Act, was enacted. It further increased assistance to the former Soviet Union and established a multiagency approach for providing assistance. It also called for the designation of a coordinator within the Department of State whose responsibilities would include designing an assistance and economic strategy and ensuring program and policy coordination among federal agencies in carrying out the act’s policies. The Freedom Support Act sets forth the broad policy outline for helping former Soviet Union countries implement both political and economic reforms. It also authorized a bilateral assistance program that is being implemented primarily by USAID. In January 1994, the State Department Coordinator approved the first overall U.S. assistance strategy for the former Soviet Union, and in May 1994, the Coordinator approved the strategy specifically for Russia. This strategy has three core objectives: (1) help the transition to a market economy, (2) support the transition to a democratic political system, and (3) ease the human cost associated with the transition. As of December 1994, USAID had obligated $1.4 billion and spent $539 million for programs and projects in Russia since fiscal year 1990. (See app. I.) USAID’s assistance to Russia has focused on 13 sectors, such as health care and housing, that support the three U.S. objectives. Hundreds of U.S. contractors and grantees are responsible for implementing individual projects in the 13 sectors. The 10 projects we reviewed showed mixed results in meeting their objectives. 
Two projects—coal industry restructuring and housing sector reform—met or exceeded their objectives. Five projects—voucher privatization, officer resettlement, small business development, district heating, and agribusiness partnerships—met some but not other objectives. Three projects—health care, commercial real estate, and environmental policy—met few or none of their objectives. Three of the 10 projects we examined were contributing significantly to systemic reform—that is, they were making fundamental structural changes. These projects were effecting change because they had sustainability—benefits that extend beyond the project’s life span—built into their design and they focused on issues on a national or regional scale. The housing sector project helped Russian ministries and agencies implement 38 laws, regulations, and decrees to reform housing policies and practices. The Urban Institute, which implemented the project, also completed a series of pilot projects related to housing maintenance, mortgage lending, rent reform, and property rights. Many of the activities affect the entire country or could be replicated in additional cities. The project helped create new institutions, strengthened existing ones, and distributed procedural guides and manuals to local governments as a way to sustain the reforms. The contract for implementing the voucher privatization project called for Deloitte & Touche to establish 35 voucher clearing centers in cities throughout Russia. This project encountered some difficulty in meeting its initial time frames and establishing all the centers, but overall the project was successful. The active centers handled 70 million voucher transactions as part of Russia’s unprecedented privatization program, and over half the centers participate in ongoing capital market activities. 
Partners in Economic Reform (PIER), which implemented the coal restructuring project, has facilitated movement toward the transformation of the entire coal sector. PIER helped build a consensus for reform among the Russian government, mine labor unions, and mine management. It was also instrumental in facilitating a World Bank review that could lead to a restructuring loan. To sustain the project’s contribution to systemic reform, PIER helped establish long-term business relations between the Russian and U.S. coal industries, formed a consortium of U.S. coal-related businesses, and is involved in social safety net and new job-creation activities. Finally, PIER helped facilitate the sale of U.S. equipment in Russia. Seven projects we examined did not contribute significantly to systemic reform because they either did not meet their objectives, were narrowly focused, or lacked sustainability. The University of Alaska sought to help develop small businesses by establishing American Russian Centers in four cities across the Russian Far East. The centers’ purpose was to help train entrepreneurs, help form new businesses, and build lasting business ties between the region and the United States. To become self-supporting after USAID stopped funding, the centers planned to develop partnerships with counterpart institutions. However, the centers have so far been unable to attract alternative funding. CH2M Hill International Services, Inc., signed a contract in September 1993 for an environmental policy development and technology project. The contractor had difficulty filling critical staff positions in Russia and providing required work plans for the activities. Of the work plans due in November 1994, one was approved in May 1995, while the other was still being revised as of June 1995.
The project to provide health care financing training in the United States to Russian health professionals was implemented by Partners for International Education and Training (PIET), several training institutions, and USAID. Although PIET and the institutions provided the training as required, the Russian participants did not have the authority, expertise, or resources to make systemic changes. In addition, changes in Russia’s health reform plans have made the training irrelevant. A commercial real estate project, implemented by International Business & Technical Consultants, Inc. (IBTCI), was intended to create a standard approach for increasing the availability of commercial real estate in six Russian cities. The project design called for a pilot project/roll-out concept, but IBTCI did not roll out the pilot in any of the cities and used a different approach in each city. Also, Russian officials said the project had little or no effect on the availability of commercial real estate in their cities. The district heating project, which USAID recommended we review, was implemented by RCG/Haggler Bailly and met its objectives primarily by conducting energy audits and training as well as providing energy efficiency equipment to two Russian cities. However, as of February 1995, some of the equipment in one of the cities had not been installed. Russian officials said the equipment may never be installed because Russian authorities never certified it. USAID had not monitored the use of the equipment or followed up on the impact of the studies produced for the project. Consequently, we found no indication that the project contributed to systemic reform in the energy efficiency area. The agribusiness partnerships project, implemented by Tri Valley Growers, helped two U.S. companies establish joint ventures in two Russian cities. Although the involvement of U.S. 
companies increased the probability that the business ventures would be sustained, the limited scope of the partnerships makes it unlikely that they will have a significant effect on reforming Russia’s agricultural sector. USAID has discontinued the entire agribusiness partnerships project in Russia. The Russian officer resettlement pilot project was not intended to be sustainable after its completion, but instead was motivated by the United States’ desire to encourage the withdrawal of Russian troops from the Baltic countries. The $6-million pilot project objective was to construct 450 housing units to resettle demobilized officers by July 1994. As of February 1995, 422 units were either occupied or available for use, so in that sense the project was successful. Successful projects (1) had strong support and involvement at all levels of the Russian government, (2) had a long-term physical presence by U.S. contractors in Russia, and (3) were designed to achieve maximum results by supporting Russian initiatives, having a broad scope, and including elements that made them sustainable. A critical element to a project’s success was the degree to which Russian officials were committed to reform in the particular sector. Russians at both the federal and local levels demonstrated a strong commitment to the projects that were contributing to systemic reform. The Russian government also provided financial or in-kind support, and Russian nationals held leadership roles in the projects. For example, PIER’s approach to implementing the coal project included working with officials in the Ministry of Fuel and Energy, Fund of Social Guarantees, and the federal coal company; academic institutions; oblast’ and city officials in the two targeted regions; local mine management; and representatives of two labor unions. Russian nationals served as codirectors, and PIER staff received free apartments and office space. 
To accomplish the housing sector reform project, the Urban Institute worked closely with officials in the Ministries of Finance and Economy and the State Committee on Architecture and Construction at the federal level, the Moscow city government, various maintenance firms, banks, and grass-roots condominium associations. Although office space in Moscow is expensive and scarce, the Institute received free office space. In addition, Russian nationals played a key role on the Institute’s staff. In contrast, many less successful projects lacked the buy-in of Russians at either the local or federal level and had little Russian involvement or contribution. For example, the State Committee of the Russian Federation for the Management of State Property (GKI), Russia’s federal agency overseeing the privatization effort, was instrumental in designing the voucher clearing and commercial real estate projects. However, in some cities, local officials were not involved in designing the projects and had little interest in them; as a result, these projects were not fully successful. The officer resettlement project established housing in several cities, but not in Novosibirsk, where city officials reneged on a previous administration’s commitment to provide needed infrastructure support. Because officials at the federal and oblast’ levels were not involved in the initial agreements, they had no authority to require the new city administration to fulfill the contract, nor were they willing to provide additional funding for the project. The district heating project was not completed in Yekaterinburg because local officials did not allow monitoring equipment to be installed. They said the proper Russian authorities had not certified the equipment. The successful projects usually had long-term advisers living in Russia, which enabled the advisers to build trust, learn about local conditions and plan accordingly, monitor progress closely, and correct problems as they occurred. 
In addition, successful projects involved contractors that had appropriate experience to carry out the project. For example, the Urban Institute has had two long-term advisers living in Moscow since 1992 who maintained close contact with Russians involved in housing reforms. PIER’s project director had lived in Moscow for 3 years. Other members of its American staff had lived in Kemerovo and Vorkuta, the key cities of the major coal mining oblasts, since 1993 and 1994, respectively. The two field staff have years of experience as coal mine engineers. Russian officials at all levels (1) praised PIER’s staff; (2) described PIER’s assistance as timely, well-targeted, and beneficial; and (3) wanted the project to continue and expand. Contractors implementing many of the less successful projects did not have staff living in the Russian cities being assisted. For example, neither IBTCI nor RCG/Haggler Bailly had permanently assigned American staff in the cities being assisted. IBTCI’s consultants would fly in, make rapid diagnoses, deal with problems quickly, and then leave. Many U.S. officials, Russians, and contractors said that relying on “fly-through” consultants rather than permanent staff was an ineffective approach. Successful projects—the housing reform, voucher privatization, and coal industry restructuring—were designed to be sustainable, have a widespread effect, and support existing initiatives. Each project supported ongoing Russian efforts at widespread reform, considered local conditions, and contained elements to sustain the effects of the project beyond its life span. In contrast, some projects were not designed to maximize their potential impact. For example, the project design required RCG/Haggler Bailly to provide energy efficiency equipment and audits but did not include methods to replicate the project in other cities, or extend monitoring efforts to determine how the equipment or studies were used. 
The USAID Inspector General reported that other projects did not include any follow-up steps to ensure that the assistance provided was used. In addition, projects focusing on health care training and commercial real estate leasing did not consider local needs and conditions and thus had limited impact. Several projects did not adequately identify outcomes or measurable results. For example, the Tri Valley Growers’ contract with USAID did not stipulate how many agribusiness partnerships were to be established. The design of the coal project also did not adequately identify outcomes or measurable results, but PIER developed an effective project nonetheless. The USAID Inspector General found similar problems when reviewing many projects in the region. It is widely acknowledged that the Russian people themselves will determine the ultimate success or failure of political and economic reforms. Without their involvement and commitment to change, outside assistance will have a limited effect. For example, the support and involvement of Russian federal agencies, such as GKI in the privatization effort and the ministries related to housing, ensure that projects in those sectors are likely to have a wide and sustained effect. The coal project’s impact depends on Russia’s commitment to restructure the coal industry. In several sectors, a Russian commitment to reform remains elusive. Powerful factions in the Russian legislative branch strongly oppose land reform, and the Ministry of Health has not demonstrated a commitment to health care reform. This lack of commitment raises concerns that projects in the agriculture and health sectors will not have widespread benefits. USAID is now working with the Ministry of Environmental Protection and Natural Resources, but the level of support from other important federal ministries, including the Ministry of Finance, is still questionable. Other domestic conditions will also influence a project’s success. 
Russia’s commitment to breaking up monopolies and its ability to attract capital for modernizing outdated equipment, restructuring existing state enterprises, and starting new businesses will affect the pace and scope of Russia’s transformation to a market economy. Moreover, projects such as introducing mortgage lending will depend on macroeconomic policy and land reforms. Russia is counting on foreign capital to help move the transition process forward, but such factors as the unstable economic situation, a poor and uncertain tax structure, an undeveloped financial market infrastructure, and an increased crime rate make foreign investors hesitant to invest. USAID responded quickly to assist Russia in undertaking its political and economic reforms, as called for in the Freedom Support Act. However, to respond quickly, USAID made certain exceptions to its normal procedures and processes. Although USAID provided a quick and flexible response to a fluid, unpredictable situation, we identified several management problems in addition to design problems that occurred, in part, because of the quick response. USAID officials agreed that management problems occurred, but they said the risks associated with not responding quickly were high. The large size of USAID’s program, the vast geographic area receiving assistance, and staff limitations have prevented adequate monitoring in some cases. We found that USAID officials were unaware of positive and negative aspects of the projects implemented by IBTCI, RCG/Haggler Bailly, and PIER. USAID officials had not visited some projects, and USAID did not have representatives located outside Moscow. USAID expected its Russian staff to conduct field monitoring, but the Russian nationals lacked the necessary training. USAID officials said they considered but rejected the idea of establishing field offices outside Moscow. 
Without adequate staff, USAID relied mainly on contractors’ written and oral reports to monitor the projects, but some contractors did not report all problems. The USAID Inspector General also found shortcomings in the reporting process: contractors were not required to report on their progress toward specific objectives or indicators. Moreover, USAID did not enforce some of its reporting requirements. For example, Deloitte & Touche did not provide the required lists of equipment purchased with USAID funds and brought into the country, and USAID did not enforce the requirement. Although the State Department allowed USAID/Moscow to increase U.S. direct-hire personnel and personal services contractors from 27 in fiscal year 1993 to 66 in fiscal year 1995, USAID officials said even more staff were needed to adequately monitor the program. However, State would not allow the USAID mission to grow further because, among other reasons, the USAID assistance program is scheduled to end by the end of the decade. In some cases, USAID had not determined the relative success or failure of projects so that it could apply lessons learned to other efforts. It did not conduct the required periodic assessments/evaluations of the coal and agribusiness projects. The omnibus contracts do not require an evaluation of the individual tasks, but instead evaluations are to be done at the end of the contracts, according to USAID officials. The omnibus contracts for USAID’s private sector initiatives alone have obligated approximately $200 million and are not scheduled to terminate until 1996, too late to apply lessons learned. In addition, an evaluation that was conducted was not accurate. A contractor evaluated the district heating project in June 1993 and gave it high marks. Our 1995 review of the project found major shortcomings, such as equipment still in boxes after being delivered in 1993, even though the evaluation report said the equipment had been installed and was being used. 
The USAID Inspector General also found that evaluations had not been conducted and that the quality and impact of some project evaluations were questionable. The devolution of management and monitoring responsibility from USAID’s Washington office to a rapidly growing Moscow office has not been smooth, and several problems have developed as a result. First, as USAID’s Moscow office assumed more management responsibility, contractors had to begin dealing with another layer of management review. This caused delays and confusion among some contractors. Second, there were tensions between the Washington and Moscow offices because of differences regarding program implementation. For example, the offices disagreed about which reformers and Russian government agencies to work with. Third, the USAID/Moscow office lacked some essential documents to enable officials to carry out their duties. We found that key contract and financial documents were not available in Moscow, a problem also reported by the USAID Inspector General. The State Department Coordinator opposed giving greater authority to USAID/Moscow because he believed USAID/Washington needed to maintain a more prominent role. He said that because assistance to Russia is an important foreign policy issue, key decisions should not be delegated to the field. State and USAID/Washington officials said they needed quick access to important project data for reporting purposes, but quick access to data could not be ensured when projects were managed by the USAID/Moscow office. USAID has not yet developed a good management information system for its Russia program. The USAID Inspector General reported that USAID lacked an information system with baseline data, targets, time frames, and quantifiable indicators by which to measure program progress and results. 
USAID’s Bureau for Europe and the New Independent States was exempted from a new agencywide management system because the program was intended to be short term and regional rather than long term and country-specific. USAID officials said the pressure to provide assistance quickly meant forgoing the traditional project design process, which included developing progress indicators. Part of USAID’s assistance strategy was to focus on areas where reformers were willing to make changes. USAID believed this would help create a synergy that could stimulate the overall impact of the projects. Some contractors were not aware of each other’s activities. USAID’s management information system did not list contractors by region, and USAID sometimes did not tell new contractors about other contractors’ activities. In some cities, contractors contacted each other on their own and started coordinating their efforts. However, this was being done on an ad hoc basis. In Vladivostok and Yekaterinburg, U.S. Consuls General facilitated contractor coordination. The USAID Inspector General found that many projects with similar goals were not linked to one another. Poor coordination reduced the opportunity to achieve synergy and targeted impact and gave some Russians the impression U.S. assistance was fragmented and uncoordinated. We recommend that the USAID Administrator focus assistance efforts on projects that (1) will contribute to systemic reforms; (2) are designed to be sustainable; (3) are supported by all levels of Russian government; and (4) whenever possible, use American contractors with an in-country presence. In commenting on a draft of this report, USAID said the three projects that we had deemed to have not met their primary objectives did produce some positive benefits or it was too early to tell the impact the projects would have.
USAID also said it was aware of the problems that have occurred and has taken steps to correct them or terminate activities that could not be fixed. USAID pointed to a new computerized monitoring system that is expected to produce its first report in November 1995. USAID agreed with our recommendation regarding the focus of its assistance projects and said it was taking steps to implement it. In addition, USAID said it was taking corrective action to address the management problems we identified. However, it stressed that its assistance has had a positive impact and occurred in a difficult operating environment. USAID indicated that it had made progress in setting up its own monitoring, reporting, and evaluation system. It should be pointed out, however, that in November 1994, the USAID Inspector General reported that the system was still far from able to measure program results. USAID said that our report would have provided a more balanced and accurate view of the systemic impact and sustainability of a project’s activities if we had considered the activity in the context of the whole program. USAID stated that, in nearly every case, the individual projects we focused on were part of a larger project or program that would have substantial impact on reforming Russia’s economy. USAID is correct that the projects we examined were usually one component of a larger sector program; however, USAID is incorrect in its assertion that we evaluated projects in isolation and without considering the context of the whole program. The overriding objective of USAID’s program in Russia is to contribute to reforming both the political and economic systems. This is also the objective of the assistance program for each sector and, with few exceptions, of each project that supports a sector program. Our examination focused on the individual building blocks that support sector programs and ultimately support the reform effort in Russia. 
In some cases, we found that the individual building blocks will not contribute to systemic reform in the sector or in Russia overall. Even though this does not mean that an entire program, of which a less-than-successful project is a part, will fail in its systemic reform objective, it does mean that an unsuccessful project is not contributing to a program’s success. We also disagree with USAID’s assertion that significant systemic reform has resulted from USAID activities in all sectors. For example, the agribusiness partnerships project, including components reviewed by the USAID Inspector General, comprises most of the USAID funding going to the sector but is not expected to contribute significantly to systemic reforms. Only a limited degree of systemic reform has occurred in other sectors as well, including the health care and the environmental sectors. We believe that a sector evaluation, although useful in its own right, would not have allowed us to draw conclusions about the role and contribution of individual projects. USAID provided other comments that we incorporated into the report where appropriate. The full text of USAID’s comments is reprinted in appendix IV. We judgmentally selected 10 individual projects from 6 sectors to review as case studies. We selected projects based on their geographic distribution, focusing on regions where several projects were concentrated. We also considered the level of obligations and expenditures; the type of assistance provided (e.g., training, technical assistance, and product delivery); and the type of contracting vehicle (e.g., cooperative agreements, grants, and contracts). We generally did not review projects examined by the USAID Inspector General, although we analyzed its work to assess whether common themes emerged. (See app. II for a list of the 10 projects we studied and USAID Inspector General reports we reviewed.) We analyzed USAID and project documents and interviewed USAID and other U.S. 
government officials, U.S. contractors, Russian counterparts, and beneficiaries. We visited project sites in Western Russia, Siberia, and the Russian Far East in November 1994 and February 1995. Appendix III provides a detailed analysis of the 10 projects in our case study. We conducted our work from September 1994 to April 1995 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce this report’s contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretary of State, the Administrator of USAID, and other interested congressional committees. Copies will also be made available to others on request. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix V. In his January 1995 annual report, the State Department Coordinator reported about $2.9 billion in obligations and $1.8 billion in expenditures for Russia through December 1994. (See table I.1.) Between fiscal years 1990 and 1994, the U.S. Agency for International Development (USAID) allocated assistance to the New Independent States (NIS) as a region. During that time, most projects spanned different countries and USAID did not track how much money was obligated or expended by country. Thus, USAID country attributions are estimates and should be treated as such. In fiscal year 1995, USAID began keeping country accounts. We reviewed 10 projects in depth as part of our review. In addition, we reviewed various reports that USAID’s Inspector General has issued on management issues and projects in Central and East Europe and the former Soviet Union. The 10 projects and the USAID Inspector General reports we reviewed are listed in table II.1. 
Coal project/Partners in Economic Reform
Small business development/American Russian Center, University of Alaska
Environmental policy & technology/CH2M Hill International Services
Health care training/Partners for International Education and Training
Commercial real estate/International Business & Technical Consultants, Inc.

The following are the USAID Inspector General reports GAO reviewed.

Audit of the Bureau for Europe’s Technical Assistance Contracts (Report No. 8-180-93-05, June 30, 1993).
Audit of the ENI Bureau’s Monitoring, Reporting and Evaluation System (Report No. 8-000-95-002, Nov. 28, 1994).
Audit of ENI’s Strategy for Managing Its Privatization and Restructuring Activities in Russia (Report No. 8-118-95-009, Mar. 17, 1995).
Audit of Selected Privatization and Restructuring Activities in Russia (Report No. 8-118-95-007, Mar. 10, 1995).
Audit of the Reestablishment of Vaccine Production Activity Under the New Independent States Health Care Improvement Project No. 110-0004 (Report No. 8-110-94-004, Feb. 25, 1994).
Audit of the Medical Partnerships in Russia and Health Information Clearing House Activities Under the New Independent States Health Care Improvement Project (Report No. 8-110-94-005, Feb. 28, 1994).
Audit of the Distribution of Emergency Medical Supplies to the New Independent States Under USAID Cooperative Agreement With the People-to-People Health Foundation “Project Hope” (Report No. 8-110-94-006, Mar. 17, 1994).
Audit of the Vulnerable Groups Assistance Program in Russia Under Project No. 8-110-0001 (Report No. 8-110-93-08, Sept. 24, 1993).
Audit of Activities to Improve Crop Storage Systems in the New Independent States (Report No. 8-110-94-014, Aug. 31, 1994).
Audit of the ENI Bureau’s Cooperative Agreement With World Learning, Inc., for Support to Non-Governmental Organizations in the New Independent States of the Former Soviet Union (Report No. 8-110-95-008, Mar. 10, 1995).
Audit of the Department of Commerce’s Special American Business Internship Training Program in the New Independent States (Report No. 8-110-93-10, Sept. 24, 1993).
Audit of the Department of Commerce’s Consortia of American Businesses in the New Independent States Program (Report No. 8-110-93-11, Sept. 24, 1993).
Audit of the Nuclear Regulatory Commission’s Technical Assistance Activities in Russia (Report No. 8-118-94-012, June 28, 1994).
Audit of the Department of Energy’s Nuclear Safety Technical Assistance Activities in Russia and Ukraine (Report No. 8-110-95-001, Oct. 7, 1994).

The following provides a detailed analysis of each project that we reviewed. We selected one project in each of the following areas: (1) housing policy reform, (2) voucher privatization, (3) coal, (4) small business development, (5) environmental policy and technology, (6) heating, (7) health care, (8) commercial real estate, (9) agribusiness partnerships, and (10) officer resettlement. Each summary contains information on the problems in the sector, USAID’s project objectives for the selected contract, and the project approach used by USAID or the contractor. We also provide our assessment of the contractor’s performance, the impact on systemic reform, and USAID’s management of the contract. The projects are presented on the basis of their capacity to contribute to systemic reform. The Urban Institute housing project was successful. It supported reforms already underway, used an experienced contractor with staff in country, installed local nationals in high-level positions, focused its efforts on both the federal and local levels, and contained elements that made it sustainable. Therefore, this project will likely have sustained benefits as legislation is implemented and new Russian institutions expand the pilot projects into other areas. Russia’s housing sector has been beset by housing shortages, production inefficiencies, maintenance problems, and deterioration.
This situation occurred primarily because the state had a monopoly on housing. For example, it (1) used standardized apartment buildings constructed by state-owned companies, (2) controlled apartment construction and maintenance, (3) financed all state housing from state assets, (4) almost totally subsidized housing and maintenance, (5) guaranteed low-cost housing, and (6) distributed housing through waiting lists. In addition, because the Soviet government had not raised rents since 1928, rents covered less than 5 percent of the cost of operating the apartments in 1990. The problem was exacerbated when the Russian Federation government stopped paying for the maintenance costs of apartments, and the buildings fell into disrepair. In addition, the Federation devolved the housing assets and responsibilities to municipalities as a way of relieving itself of the burdens associated with managing the apartments. Russia initiated housing reforms in 1991 when it allowed its citizens to privatize their apartments at little to no cost. This action set the stage for establishing a private housing sector. USAID signed its first contract for housing sector reform with the Urban Institute in September 1992 for $5.8 million. This 2-year contract required the Institute to provide legislative and financial advisers to help Russia develop market-oriented housing programs and draft legislation. Other Institute advisers were expected to conduct pilot projects on (1) rent reforms and housing allowances for the poor, (2) privatized housing maintenance, (3) condominium associations, and (4) mortgage lending. It was also expected to provide targeted training to those implementing reforms and develop local institutions to sustain and expand the reforms. Specific objectives and milestones were incorporated into the project design. A USAID team that included Institute representatives met with their Russian counterparts in early 1992 to determine their reform priorities.
From 12 to 15 meetings at both the national and municipal levels in Moscow were needed to clarify Russian reform priorities. To help focus Russian priorities, the team used a “menu” of reforms based on experience in housing reforms in Hungary and developing countries, and then focused on one or two priorities to demonstrate results quickly and build confidence. The Russian priorities were formalized through agreements signed in March 1992 with USAID, the City of Moscow, the State Committee on Architecture and Construction, the Ministry of Finance, and the Ministry of Economy. The team sought joint agreements with the three national agencies to (1) ensure it would not become captive to any one ministry, (2) ensure broad-based agreement on reform priorities, and (3) reduce governmental impediments to reform. The Russian counterparts showed their support for the project by providing the Institute with free office space, which is highly unusual due to the scarcity of office space in Moscow. The team’s strategy was to work at the national level to help draft legislation that would shape and codify reforms. In addition, it planned a series of local demonstration projects to determine the effectiveness of the designs in the Russian context. The team augmented these efforts by providing training in Russia and the United States. A key strategy was to take advantage of the Russian reforms already underway and try to create “win-win” situations for the government and its citizens. The Institute’s staffing policies were also important to its approach. It provided two long-term resident advisers, including the Program Director, who were located in Moscow. The Director said using advisers who were permanently located in Russia rather than “fly-through” consultants helped establish trust with their Russian counterparts and enabled them to respond immediately. The Institute also employed five Russian housing experts. Short-term U.S. advisers were used on an as-needed basis.
The Director said that using local Russians in key positions was critical to establishing trust with the Institute’s Russian counterparts. The large Russian staff also was less expensive than U.S. consultants. The Institute achieved its objectives of helping to develop housing legislation. According to the Russian Federation Housing Director, the Institute’s assistance was critical in drafting the 38 laws, decrees, and regulations that have been implemented. These included laws and regulations on property rights, housing finance, rent reform, housing allowances for the poor, privatized maintenance, condominiums, and mortgage lending. The Institute is now the government’s principal housing adviser. The Institute also achieved its objectives of establishing pilot projects in four areas: rent reform and housing allowances for the poor, privatized maintenance, condominiums, and mortgage lending. The Institute helped the City of Moscow develop a program that would raise rents over a 5-year period until they covered all the costs of operating the apartments. To reduce resistance to rent increases, it tied maintenance improvements to the increases so citizens would see an immediate improvement in their housing conditions. In 1994, the Federation initiated a national 5-year program to increase tenant payments to cover the full operating costs. The Institute also helped the Federation structure a program in which the municipalities began providing housing allowances to the poor. The Institute helped introduce competitive private maintenance for municipal housing. It conducted training sessions, organized the competition to select private firms, and conducted a study tour to the United States so officials could see private maintenance activities. In March 1993, three private maintenance firms assumed management of 2,000 apartments in Moscow, and in October 1993, Moscow’s mayor extended the program to all areas of the city. 
By 1994, over 60,000 apartments were under privatized maintenance, far surpassing the project’s goal of 2,000. The Institute’s goal for the condominium pilot was to lay the legal and procedural groundwork by 1994. However, it surpassed this goal and helped to create 24 functioning condominium associations in Ryazan’. The regulations it helped develop were instrumental in registering the first condominium association in Moscow. To address mortgage-lending problems, the Institute developed mortgage-lending facilities at several banks; limited lending has begun. For example, the Institute helped Mosbusinessbank, Russia’s third largest commercial bank, to make home mortgage loans and provided assistance in all phases of operations, including legal documentation, underwriting, loan servicing, mortgage loan instrument development, and risk management. The Institute then expanded its work to eight other banks and provided the necessary materials to other banks to expand and sustain mortgage lending. However, hyperinflation has precluded lending to most Russians. The Institute’s critical assistance helped transform Russian priorities into workable legislation and pilot projects. Although the Russians are responsible for the pace of reforms, the Institute has helped effect systemic changes in Russia’s housing sector. It helped pass far-ranging laws that have codified reforms. The program to raise rents and provide subsidies for the poor, which is being implemented across the country, is a fundamental change for the government and its citizens. The project has a strong sustainability component. Over the next several years, it plans to institutionalize the reforms by expanding the number of demonstration projects and developing private maintenance organizations, condominium associations, and mortgage banks. 
In addition, it created procedures manuals, explanatory guides, and other necessary documentation on implementing rent increases, beginning privatized maintenance, creating condominium associations, and developing mortgage lending. The Institute has distributed more than 25,000 of these documents, mostly to local governments. The project has won high praise from USAID and Russian officials. The USAID Mission Director in Moscow called the project one of the most successful ones he had ever seen. A USAID official in Washington said that, for the money, no USAID project has had more macroeconomic impact. The Russian Federation Housing Director noted the Urban Institute’s tremendous influence on the government, and Russian citizens working in maintenance, condominium associations, and mortgage lending also praised the project. Despite the program’s progress, most Russians have yet to benefit from the reforms. This is because the reforms are relatively recent, are tremendously complex, and face opposition by antireformists; they are also being implemented in a country with no tradition of market-based decisions. Private land ownership rights are still generally uncertain, housing and construction mortgages are generally unavailable, additional laws and regulations are needed, and most apartment buildings are still maintained by state organizations. In addition, factors beyond the housing sector, such as macroeconomic and political instability, slow the transformation to a fully developed privatized housing sector. USAID successfully managed the contract. It determined the Russians’ reform priorities, incorporated these into its reform plan, and listed these in its contract. USAID selected a contractor with experience in both the sector and region and is effectively monitoring the reforms through regular contacts with the contractor and Russian counterparts. Both USAID/Washington and USAID/Moscow agreed on the housing strategy. 
USAID also had the contractor develop measurable goals in its annual work plan. USAID then measured the contractor’s progress by comparing its task orders to the deliverables. As part of Russia’s privatization effort, Deloitte & Touche established a national system of centers to process millions of vouchers that Russians received and used in the privatization process. Overall, the Deloitte & Touche voucher privatization project was successful, with a few exceptions. The project focused on national reforms, but some areas had lower Russian participation than expected. Deloitte & Touche kept USAID and the State Committee of the Russian Federation for the Management of State Property (GKI) informed of project progress but did not meet some of its reporting requirements. Deloitte & Touche met its amended objective of setting up 30 centers, but many were underused. Several factors contributed to the overall success of the project. The Russian GKI helped focus assistance efforts and identified problems when USAID had minimal field presence. Further, the omnibus contract system allowed the contractor to institute a rapid roll-out as well as adjust the scope of work when warranted. In addition, using existing Russian agencies and using staff and equipment for follow-on activities increased the project’s effects and sustainability. Because the state controlled Russian enterprises, which were generally large and monopolistic, the private sector was virtually nonexistent. The legal and regulatory framework to create the new system was not in place; few citizens had entrepreneurial experience or exposure to western management, accounting, and marketing concepts; and no capital market infrastructure existed. In August 1992, President Yeltsin announced plans to privatize Russia’s large and medium-size state-owned enterprises. Within weeks, distribution of privatization vouchers began, with each Russian citizen eligible to receive one voucher. 
The sale of the enterprises was expected to reduce the need for massive state subsidies, begin to reduce inefficiencies, and eventually lead to higher productivity and innovation as shareholders demand profits. Voucher privatization was the initial step in the overall privatization process and was used to transfer ownership from the state to private individuals. Unlike the approaches used in some Central European countries, Russia chose to privatize enterprises before restructuring them. The process is therefore not complete: restructuring must still take place before the enterprises can function in a market economy. This may be difficult because management and workers received a majority of shares and can resist taking the painful steps necessary for restructuring. The voucher-clearing centers allowed individual Russians and investment fund managers to more easily buy shares in enterprises located in remote areas via electronic transfers. Without the centers, people would have had to physically transport vouchers to other parts of the country. There was also a fear that regions would not let outsiders, including foreigners, buy shares in highly visible enterprises, thereby allowing insiders and local bureaucrats to control the process. The specific objectives of the project required Deloitte & Touche to establish 35 functioning centers in various Russian cities to verify, process, and cancel voucher receipts. The project was carried out under two separate contracts at a total cost of $4.1 million. The initial contract (as amended) required the contractor to establish 20 centers by the end of 1993, and a task order under the omnibus contract required 15 more before March 1994. This would give citizens enough time to process their vouchers before the privatization program ended in July 1994.
USAID and GKI, the Russian agency overseeing the national privatization effort, hoped that many of the centers would develop into institutions, such as registrars and depositories, in the capital market infrastructure. The number of vouchers the centers were to process was not defined. USAID worked closely with GKI on project design, which called for Deloitte & Touche to develop 4 pilot centers and then establish 16 more after successfully setting up the pilot sites. To provide broader geographic coverage, USAID and GKI decided to extend the project and have Deloitte & Touche set up 15 more centers. Consultants from another USAID contractor, the Harvard Institute for International Development, worked with GKI to design and monitor the project. Deloitte & Touche established a permanent office in Moscow in June 1990 and opened a separate office for this project in early 1993. It worked closely with GKI in Moscow and GKI’s local offices in various Russian cities to identify appropriate cities for the centers and suitable partners. Deloitte & Touche then imported computer equipment, established accounting systems, and installed the software and telecommunications systems needed to facilitate voucher transactions. Teams of Deloitte & Touche staff then traveled to the cities to train center staff, install and test the equipment, and test the software and telecommunications systems. The contractor hired Russians to assist with these efforts and usually supplemented the work of Russian agencies, typically banks, already working in related fields. Under the first contract, Deloitte established all 20 centers before its deadline. Under the second contract, USAID and GKI reduced the number of centers from 15 to 10 and extended the deadline by 3-1/2 months because of implementation delays. 
The delays took place because of problems with equipment procurement and Russian government customs clearance; difficulties locating viable agencies to act as centers; and problems at the local level. For example, some centers collapsed when their leadership changed or chose not to participate on a national scale for local political reasons. Both parts of the project were completed under budget. Of the 30 centers Deloitte & Touche set up, only 23 were used, and many of these experienced relatively little activity. The lack of use was attributed to delays in setting up some centers; limited public awareness (centers were not responsible for advertising their services); limited local interest in voucher auctions in other areas; Russian reluctance to use electronic transfers; and a lack of compatibility between the project goals and individual center goals. Deloitte & Touche was responsive to GKI requests for project changes. The task order was revised once it became clear that all 15 centers would not be needed. In some cases, Deloitte & Touche went beyond the required tasks at GKI’s request. Deloitte & Touche generally kept USAID and GKI informed with monthly reports on progress and problems. However, some reports were not filed as required, and Deloitte & Touche did not provide an adequate inventory of the equipment it procured, as its contracts required to ensure accountability and tracking. The project is considered a success, although it was not cost-effective. A functioning national system was created in a short time, and according to GKI, it handled over 70 million vouchers, nearly half the vouchers processed in the program. People were able to buy shares in enterprises located in remote areas. GKI noted that over half of the centers have evolved into institutions that are now active in capital market activities, such as share registrars and depositories.
Our visits to three centers verified that center staff and equipment are being used in follow-on activities. These centers intend to become self-financing on a fee-for-service basis when USAID assistance ends. The scope of voucher privatization in Russia was unprecedented in scale and speed. According to Russian and U.S. officials, USAID’s support of GKI and other Russian institutions involved in privatization activities was crucial to this phase of the program. The Russian Privatization Center estimated that 14,000 large and medium-sized enterprises were privatized by July 1994; they employed over 60 percent of the industrial workforce. Nevertheless, the overall effect of the privatization program on Russia has yet to be determined. Enterprise restructuring has only begun, monopolies still exist, and inadequate tax legislation makes foreigners reluctant to provide badly needed capital investment. USAID used an omnibus contract to plan and implement projects quickly. It allowed USAID to respond quickly to emerging needs through task orders that included specific objectives for narrowly focused, short-term projects. USAID officials said this gave them the flexibility to change directions quickly, move money into areas and projects making rapid progress in reform, and adjust projects to meet emerging needs. Omnibus contracts also allowed USAID to obligate a large amount of funding. Deloitte & Touche has an omnibus contract for $41.5 million, with subcontractors performing some of the work. However, the individual task orders lacked an evaluation requirement. USAID officials said an evaluation is planned only at the end of the omnibus contract, which ends June 30, 1996. We identified several USAID management problems. For example, USAID did not design the project with quantifiable indicators to measure progress. 
Although Deloitte & Touche set up 30 voucher clearing centers, the project design did not specify the amount of activity expected at each center or on a national scale. USAID/Moscow lacked key information on the project, including documents on Deloitte & Touche’s initial contract and task order and accurate financial data for the project. USAID officials said key documents had not been transferred to Moscow when management moved to Moscow from Washington. Also, the physical distances involved, the geographic distribution of project activities, and the lack of staff to visit the sites left USAID uninformed about some Deloitte & Touche activities. USAID officials said they relied heavily on GKI and Harvard consultants to help monitor the project. Finally, USAID did not require Deloitte & Touche to provide adequate inventory data on the $1.1 million worth of equipment purchased with USAID funds. Without this data, USAID could not redirect surplus equipment to other projects as planned. The coal project is achieving its primary objective of facilitating the restructuring of Russia’s coal industry and is opening the industry to American technology and companies. The Russian beneficiaries expressed appreciation for the assistance and found it useful. Due to the size and cost of the restructuring, however, the Russian government must complete the effort. If the World Bank approves a $500-million sector loan, this project will have played a key role in restructuring the coal sector. Although the project is meeting its objective, USAID did not provide adequate oversight, did not fully understand the beneficiaries’ needs or opinions about USAID assistance, and did not know the extent to which the project was meeting its goals. Coal is an important component of Russia’s economy.
However, Russia’s coal sector suffers from declines in production and serious environmental and safety liabilities, in large part because of the centralized structures, subsidized pricing, Soviet-style management, and state allocation system. To solve these problems, the coal industry needs to be restructured. The process of restructuring is both a problem and a solution because it creates new challenges. The major areas that need to be addressed in restructuring Russia’s coal industry and transitioning from a centrally planned to a market economy include reducing the numbers of mines and miners as well as the amounts of coal produced and government subsidies. In addition, the coal monopoly must be broken up, mines must be privatized, and new relationships and agreements must be established between management and labor. Efforts to restructure the coal industry are complicated because the state-subsidized coal mines provide many social services and may be the only source of energy or employment in the areas where they are located. Coal industry restructuring will take a heavy toll on miners and their families as the industry streamlines its operations, mines are closed, and miners lose their jobs. These same miners, who could lose their jobs as a result of the Yeltsin reform program, were instrumental in bringing Yeltsin to power in 1991. The mining community in Russia is still considered a politically powerful force. President Yeltsin took a major step toward restructuring Russia’s coal industry in July 1993 when he freed coal prices. Since that time, the industry has made some progress. For example, approximately 72,000 of the 914,000 coal miners and others employed by the coal sector in 1992 left the industry between January 1993 and June 1994. In addition, coal production decreased by approximately 41 percent from 1988 to 1994. 
The government also reduced subsidies to the coal industry by approximately 20 percent in real terms (i.e., taking inflation into account) between 1993 and 1994. Finally, the Russian coal industry closed 2 of its approximately 273 mines, was in the process of closing 14 more mines in 1994, and is preparing to close 40 more in the future. As part of USAID’s broader effort to assist Russia’s energy sector, USAID signed a cooperative agreement with the Partners In Economic Reform (PIER), a private, nonprofit organization established to assist the coal industries in Russia, Ukraine, and Kazakstan. USAID signed the agreement with PIER for $6.9 million in June 1992 and has increased funding since then to $8 million. The project’s main objective is to facilitate the transformation of the centrally planned and controlled coal mining industry to an industry capable of operating in a market economy. The cooperative agreement did not specify any measurable goals or deliverables. In 1989, U.S. coal representatives visited some of the coal regions of the Soviet Union where miners were starting to form independent unions, and between 1989 and 1991, groups of independent miners met with U.S. coal industry labor and management leaders in the United States. In June 1991, a memorandum of understanding, pertaining to continued assistance, was signed by U.S. and Russian coal industry representatives. During 1991, circumstances changed drastically. Boris Yeltsin was elected President of Russia in June, the communists mounted a failed coup attempt in August, and the Soviet Union dissolved in December. These changes opened the door for a broad U.S. technical assistance program in Russia. As part of this effort, the State Department announced the coal project in a January 23, 1992, press release and signed a cooperative agreement on June 25, 1992.
The coal project gained early acceptance because PIER targeted the project at a problem (i.e., coal industry restructuring) that the Russians had already identified and were struggling to address. In addition, PIER established good working relationships. For example, PIER established a coordination office in Moscow and cooperation and development centers in the Russian cities of Kemerovo and Vorkuta. An American director heads the coordination office, and an American director and a Russian codirector head the cooperation and development centers. In addition, because the American staff lived in Russia, they were able to develop and maintain long-term relationships with the Russian government, coal industry management, and labor unions. The Russians further demonstrated their support by donating rare office space for the coordination office and the cooperation and development centers and donating apartments for the American directors in Kemerovo and Vorkuta. PIER staff worked closely with representatives from the Russian government, coal industry management, and labor to (1) reduce the number of mines and miners, (2) develop new sources of employment in coal-producing regions to absorb displaced laborers, and (3) develop a social safety net for those miners left unemployed during and after the transition. PIER has cultivated cooperative efforts between government, management, and labor to address problems associated with coal industry restructuring. In addition to these efforts, PIER staff has helped build commercial links between the Russian and American coal and coal-related industries. PIER made progress in facilitating the restructuring of Russia’s coal industry and opening the Russian market to U.S. mining technology.
Specifically, PIER
worked closely with the World Bank to evaluate Russia’s coal industry and develop a restructuring plan;
conducted detailed studies of employment, unemployment, and social programs; government subsidies; labor demand; a social safety net, job creation, and mine planning; and enterprise debt in the Russian coal industry;
established a coal-bed methane recovery center;
mediated discussions between U.S. and Russian officials on equipment certification in an effort to open the Russian market to U.S. high-tech safety equipment;
established a program to facilitate U.S. private sector investment;
hosted approximately 150 representatives of the Russian government and coal industry in the United States, where they participated in meetings and negotiations with World Bank officials, training seminars, and meetings with U.S. coal industry representatives;
provided training material and conducted seminars in Russia concerning mine safety, labor-management relations, mining and mine management in a market economy, and small business development in Russia;
implemented a transition assistance program focused on developing a viable social safety net and creating new jobs; and
provided $200,000 worth of U.S.-manufactured mine health and safety equipment to Russian miners.

The Russian beneficiaries (i.e., government, labor, and management) we contacted in Russia stated that the coal project was well-targeted, timely, and beneficial. In addition, they all wanted the project to be continued and expanded. PIER has made several contributions to systemic reform. One of the clearest contributions is its work in facilitating a $500-million World Bank loan. By providing U.S.
coal industry experts, PIER facilitated the World Bank’s work in Russia; contributed extensive analysis of the coal industry’s problems; built consensus among Russian government, management, and labor representatives; and brought representative Russian delegations to the United States to negotiate with the World Bank. The World Bank acknowledged PIER in its 1994 report for contributing to the Bank’s work in Russia. PIER helped establish relationships between Russian and U.S. coal mining and equipment-manufacturing firms. According to the beneficiaries, these relationships will help Russia attract capital investors and gain greater access to U.S. expertise and technology so that it can begin to produce coal efficiently and compete in a market economy. PIER also facilitated the sale of millions of dollars of non-USAID-funded U.S. mining equipment in Russia. PIER formed a consortium of U.S. industry representatives to help create a viable private coal industrial sector. The consortium is to assist coal managers and technicians in operating in a market economy, identify ways that private U.S. firms can participate in restructuring the coal industry, establish NIS-U.S. joint ventures, and promote the consortium’s services so it can become self-sustaining. Finally, PIER worked with Russia’s Fund for Social Guarantees to implement a transition assistance program focused on developing a viable social safety net and creating new jobs. PIER also brought in U.S. experts to provide small business education to miners and helped mining communities develop business proposals that can be presented to the Russian-American Enterprise Fund, Russian banks, and other sources for eventual financing. USAID started to implement the coal project before it had established a USAID mission in Moscow; consequently, the USAID project officer in Washington managed the project. 
Since the coal project was established through a cooperative agreement without quantifiable indicators, PIER designed and implemented the project without direct oversight and control by USAID. PIER provided the required monthly program performance reports, annual work plans, and annual progress reports to USAID, which then reviewed them. PIER’s staff communicated regularly with USAID and felt they had a good reciprocal working relationship. Despite some success with the project, USAID did not meet its monitoring and evaluation requirements. Although the USAID staff should have regularly monitored the project, they visited the Russian project sites only three times between June 1992 and February 1995. Two of the visits occurred after we began our review. USAID officials said a lack of staff prevented more frequent visits. In addition, USAID did not conduct the annual assessments or midterm evaluation as required and thus lacked an objective basis for evaluating PIER’s activities and accomplishments. This, coupled with a lack of quantifiable indicators, hindered USAID’s ability to independently determine the project’s impact on coal sector restructuring. The University of Alaska met most of its project objectives while encouraging systemic reform, but to date the project has not become self-sustaining. The American staff live in Russia and have built trust with Russian officials and institutions, and Russians support the project with personnel and in-kind contributions. New enterprises are a major source of new jobs for most economies. However, the development of new enterprises in Russia has been hampered by years of central control of the economy, excessive rules and procedures for establishing a business, and the lack of entrepreneurial skills. To help promote the growth of small, private businesses and alleviate unemployment, the United States supported the creation of multipurpose business development centers in several Russian cities.
These centers provided training and advisory services to small businesses and worked with local governments to create a hospitable environment for private business growth. USAID’s goal is that the centers eventually be operated by trained Russians on a self-financing, fee-for-service basis. The American Russian Center (ARC), established by the University of Alaska in Anchorage through a USAID cooperative agreement, was one of the first contractors in USAID’s program to establish new businesses. The program’s two phases, conducted over 2 years, cost about $5.1 million. The agreement called for ARC to provide small business training, develop Russian business activities in specific geographic areas, and develop business ties between the Russian Far East and the United States. ARC’s initial objectives were to establish a headquarters at the University of Alaska and two field centers, as well as to train a specific number of people. A subsequent work plan called for ARC to establish two more centers while expanding its program in the two original centers. Specific objectives included increasing (1) the number of Russians trained in modern business methods, (2) the number of viable Russian small businesses, (3) access to both U.S. and Russian technologies, (4) U.S.-Russian business ties through ARC field and business information centers, and (5) U.S. business activity in the Russian Far East. Creating Russian institutions that would be sustainable after USAID assistance ended was also an objective. From its headquarters at the University of Alaska, ARC worked closely with Russian partners to establish business training centers in four Russian cities. In each city, ARC had a local educational or academic institution as a partner. This partnership was reflected in the American and Russian codirectors and staff at each center and in-kind contributions such as free office space from the institutions. ARC’s American staff have had a long-term commitment to Russia.
Full-time staff spoke Russian fluently, lived in the cities where the centers were located, and had business experience in the region. They were complemented by short-term American teachers who taught a 1- or 2-month course as well as by itinerant teachers who taught a 1-week or weekend course in one city and then moved to another city. These courses were taught with interpreters. ARC’s core program was an evening course that taught such skills as accounting, marketing, and management that were necessary to write a business plan. This course lasted 1 or 2 months, depending on the center, the time of year, and the targeted clientele. It was supplemented by short seminars in the host cities and extension seminars in outlying cities and was targeted at specific business sectors, such as bankers lending to small businesses. Russians and Americans, both resident and visiting, taught the courses and seminars. The centers also provided business counseling for Russians trying to set up their own small businesses. The training centers charged a relatively low fee for their courses and seminars. Participants who excelled in the training center programs were invited for advanced training in Anchorage. They were selected, in large part, through the business plans they wrote during their core course. In Anchorage, they attended a 5-week course that explored topics from the earlier training in more depth, and toured stores, manufacturing facilities, and offices in the Anchorage area. The 5-week course was followed by 2 weeks of either internships in local small businesses or more extensive business tours tailored to the participants’ interests. ARC successfully fulfilled its first year’s work plan targets, and then received $3 million for a second year’s activities (fiscal year 1994) from USAID after a March 1994 evaluation of the initial $2.1-million project. USAID also stipulated that, in fiscal year 1995, ARC must match USAID’s funding.
ARC established business training centers in four Russian cities: Yuzhno-Sakhalinsk, Yakutsk, Khabarovsk, and Magadan. It established the Yuzhno-Sakhalinsk and Yakutsk centers in the fall of 1993 and Khabarovsk and Magadan centers in the fall of 1994. In May 1995, USAID agreed to provide ARC with an additional $1.5 million, even though ARC had not raised any matching funds. Between the fall of 1993 and January 1995, ARC’s Yuzhno-Sakhalinsk and Yakutsk Business Training Centers offered four cycles of evening courses, lasting 1 or 2 months, that trained 211 Russians—thereby exceeding the first year’s work plan goal of 200 participants. The two centers also provided individual business counseling to 300 Russians; the work plan’s goal was 200. In addition, the two centers offered 7 extension seminars to 103 Russians in outlying cities. The training centers in Khabarovsk and Magadan had only recently completed their first evening courses. ARC sponsored 19 technical assistance seminars, meeting the first year’s work plan goal of 15 to 20 seminars. Four seminars on banking drew 180 Russian bankers, and 8 seminars on hair salon management drew 250 women from throughout the Russian Far East. Forty construction managers from Yakutsk participated in training on cold weather construction methods. This seminar led to the government of Yakutsk testing American-manufactured plastic piping to replace its existing steel piping. Between the fall of 1993 and January 1995, 71 Russians completed the advanced business training courses at the University of Alaska. This exceeded the first year’s work plan goal of 50. In total, ARC trained 1,646 Russians in its USAID-financed programs through January 1995. On a more systemic level, ARC developed a database of U.S. and Russian businesses in the Russian Far East and provided assistance or information to U.S. and Russian businesses working throughout the region. ARC generally coordinated its activities with other U.S. 
government programs located in cities of the Russian Far East, but there were a few exceptions, particularly when contractors worked in separate sectors. For example, in Khabarovsk, where ARC established a center in late 1994, the local American codirector did not know the local environment project director until we visited. ARC officials in Anchorage were, however, working with CH2M Hill staff to link the projects. The ARC project will contribute to systemic reform on a regional basis if it can become financially self-supporting. USAID recognizes that creating small businesses in the region will push the Russian government to be more responsive and further develop the area’s nascent capitalist economy. The centers help Russians who come to Anchorage from their relatively isolated cities to meet each other and develop business contacts with other Russians as well as Americans. By drawing entrepreneurs from cities throughout the Russian Far East, ARC has helped build a network of private small businesses that can generate business for one another and for the region. Once the USAID funding ends, ARC’s partnerships with Russian institutions will be the key ingredient to sustaining its work. The directors of the Russian institutions plan to continue the program. For example, the Rector of the Far Eastern State Transport Academy, ARC’s partner in Khabarovsk, plans to establish a permanent school based on the activities of the project. Kray and oblast’ government officials are also highly supportive of ARC. However, despite the Russians’ desire to continue the program, most of these institutions currently lack the means to support an entire local ARC operation. Further, in an April 1994 evaluation report, USAID questioned whether ARC would be able to support itself. Other donors have not yet stepped forward to replace USAID. According to ARC’s director, the problem lies in the newness of the project and the project’s focus on Alaska and its businesses. 
The ARC director said the project plans to include more business internships and tours in the rest of the United States. The director believes that this expanded scope will increase ARC funding because large U.S. institutions and enterprises have the funds and business interests in Russia to provide long-term support. If USAID assistance had ended with fiscal year 1994 funds, the U.S. side of ARC would have been curtailed and U.S. personnel in Russia would have been withdrawn. USAID officials played a significant role in designing the project. Because the University of Alaska had no previous USAID contract experience, USAID sent an official from Moscow to Anchorage to help design the project; the resulting design has proven effective. Under a cooperative agreement, USAID has relatively limited management and monitoring responsibility. ARC provided good progress reports to USAID. USAID/Moscow has adequately managed and monitored the project. USAID/Washington has maintained a duplicate document set so that it can respond to U.S. inquiries. When it was considering further funding for ARC, USAID sent an evaluation team from Moscow to Yakutsk, Yuzhno-Sakhalinsk, Khabarovsk, and Anchorage in March 1994. The team’s report became the basis for USAID’s continued funding of ARC. Within the amendment that provided the fiscal year 1994 funding, USAID included a clause stating that it would provide $1.5 million in fiscal year 1995 if ARC raised $1.5 million in matching funds. However, in May 1995, USAID agreed to provide the $1.5 million even though ARC had not raised the matching funds. CH2M Hill is an integral part of USAID’s $35-million environmental policy and technology program. In September 1993, USAID awarded CH2M Hill a contract to serve as the program’s core contractor and provide the technical support for its environmental activities in Russia, Kazakstan, and Ukraine.
In April 1994, an initial delivery order was signed to provide support for activities in Novokuznetsk and the Russian Far East. Detailed delivery orders were signed for these activities in September 1994. CH2M Hill also serves as the contractor or subcontractor on various components of the program. Although the environmental policy and technology project is still ongoing, USAID officials said its progress so far has been disappointing. Progress has been slow because CH2M Hill did not fill critical staff positions in Russia in a timely manner, and it relied on staff located in the United States to manage the projects. The expanded scope of the Far East component further contributed to the delay. Further, USAID field staff lacked authority and information to expedite project implementation. The projects’ expected contributions to systemic reform and long-term benefits are not likely to be significant. Severe environmental degradation threatens the physical health and socioeconomic well-being of people throughout Russia and deters economic and political restructuring efforts. Environmental problems range from nuclear safety issues to pervasive mismanagement of natural resources to some of the worst air, water, and land pollution in the world. The breadth and magnitude of the economic, health, and ecological costs are difficult to quantify, although remediation activities alone are expected to cost billions. Environmental problems are exacerbated by many factors, including inattention to environmental consequences, a lack of economic and political incentives to use resources efficiently, the inability of nongovernmental agencies to participate in environmental decision-making, and the inability of governmental institutions to effectively regulate state-owned monopolies and curb illegal economic activities.
Our analysis focused on CH2M Hill’s performance as the core contractor and two projects where it serves as the primary contractor—the Multiple Pollution Source Management project in Novokuznetsk and the Sustainable Natural Resources Management and Biodiversity Protection project in the Russian Far East. Both projects are to run from September 1994 to September 1997. The objectives of the $7.4-million core contract are to coordinate all activities under the core contract, monitor and evaluate the activities and deliverables, and provide support functions as needed. The objectives of the Multiple Pollution Source Management project are to reduce pollution-related health risks and promote environmentally sustainable economic development; improve public health; reduce pollutant emissions from industries and cities; assist industries in restructuring in an environmentally sound and sustainable manner; and strengthen institutions and train individuals to continue improvements initiated during the project. The $13.4-million delivery order covered the Novokuznetsk project and two other projects; $6.3 million of it was for the Novokuznetsk project. The Sustainable Natural Resources Management project was expanded from a narrow $3-million, 3-year project focused on fire prevention and control to a $16.7-million, 5-year project focused on sustainable forest management and biodiversity protection. This expansion responded to the Gore-Chernomyrdin Commission’s recommendations. Specific project objectives are to promote sustainable forest management in the Khabarovskiy and Primorskiy Territories and protect endangered species and critical habitats in the Khrebet Sikhote-Alin’ mountain region. To address these objectives, the contract specifies 25 tasks for CH2M Hill and multiple subcontractors. USAID approved a $9.4-million delivery order for CH2M Hill to implement and coordinate these activities.
CH2M Hill worked mainly with local and oblast’ government officials to design and manage the programs. CH2M Hill consultants spent short periods of time in Russia to design the project proposals and then returned to the United States to complete the project design. Although USAID and CH2M Hill established rapport with local and oblast’ authorities in the affected cities, the Ministry of Environmental Protection and Natural Resources was involved only in the initial selection of project activities and their locations. Subcontractors, U.S. nongovernmental organizations, and other federal agencies helped implement parts of the project. The project approach includes providing technical assistance, demonstration projects, training seminars, and limited commodities. Several components in both projects continued efforts initiated by the U.S. Environmental Protection Agency, the World Bank, and the City of Pittsburgh. CH2M Hill staff in Washington manage the project, and a regional director and site managers in Russia handle the day-to-day activities and coordinate with other implementers. CH2M Hill plans to hire and train Russian employees who can eventually manage the activities without assistance from its U.S. office. Project progress to date has been mixed. CH2M Hill met the requirements of its core contract by establishing field offices, monitoring project implementation, and providing support functions to its field staff. Even though it has made some progress toward addressing the Novokuznetsk project objectives, it has been slow to implement the Far East project. CH2M Hill has missed critical milestones for both projects. In Novokuznetsk, CH2M Hill established an air pollution database for the 180 heating plants in the city and developed a strategic plan to address particulate pollution from the heating plants.
It also upgraded the city’s air pollution program, trained Russian counterparts in environmental auditing, and completed environmental audits of two large steel mills. CH2M Hill is currently assessing local water monitoring activities and has recommended laboratory improvements to more accurately measure the quality of drinking water. CH2M Hill is working with the Novokuznetsk Development Fund and local government officials to develop a strategic plan. However, CH2M Hill has not provided an acceptable work plan, which was due on November 30, 1994. The current work is based on the delivery order specifications. In the Far East project, CH2M Hill has been even slower getting started and, according to USAID and Russian officials, had produced almost no quantifiable results as of February 1995. Several factors have hindered the project’s implementation, including its increased complexity; the size of the geographic area; and the large numbers of governmental officials, local interest groups, and subcontractors involved. The project covers 2 large regions and will involve at least 16 implementing organizations, including 2 U.S. federal agencies, subcontractors, and U.S. nongovernmental organizations. Several problems have delayed the effective implementation of both CH2M Hill projects. One was that CH2M Hill had difficulty filling critical staff positions in Washington, in Moscow, and at the field office level. Although the core contract was awarded in September 1993, the regional director did not arrive in Moscow until February 1994. Other positions in Moscow funded in the September 1994 delivery orders were still being filled as of January 1995. The contract to implement field support functions in Novokuznetsk and the Far East was awarded in April 1994, but on-site managers did not arrive until September and October 1994, respectively.
The Far East project manager position was authorized in September 1994, but the manager did not move to Russia until February 1995. CH2M Hill officials had difficulty finding qualified staff who were willing to relocate to these areas because of the acute environmental problems and remote locations. USAID and CH2M Hill officials agreed that the on-site presence is essential for making progress. USAID/Moscow officials said staffing delays and delays in producing an acceptable work plan have hurt the credibility of the program in the region. CH2M Hill also had difficulty developing acceptable work plans that define how and when the scope of work will be implemented. CH2M Hill was required to submit the work plans for both the Novokuznetsk and the Far East projects within 60 days after signing the contract on September 30, 1994. USAID approved the work plan for the Far East project on May 8, 1995, but the work plan for the Novokuznetsk project was still being revised as of June 1, 1995. According to a USAID official, the work plans originally submitted were incomplete and lacked specific indicators or other factors necessary to evaluate the activities. Additionally, USAID officials said CH2M Hill had done a poor job of providing them with the appropriate reporting documents for these activities. USAID expressed concern over CH2M Hill’s failure to provide timely delivery of tracking materials, such as monthly summaries of financial status by project, monthly presentations of progress on select tasks, and weekly briefings on overall project progress. According to USAID officials, CH2M Hill addressed their concerns and has recently improved its reporting. As of February 1995, the CH2M Hill projects had contributed little to systemic reforms, and they will not generally be sustainable without outside funding support.
This limited contribution is due largely to the vast environmental needs in Russia and the massive amounts of capital investment needed to modernize or purchase equipment for restructuring Russia’s environmental sector. Also, USAID and CH2M Hill officials said that Russian monitoring and enforcement procedures will be extremely difficult to change and are not addressed in these projects. Finally, the Ministry of Environmental Protection and Natural Resources was not involved in designing the project, thus reducing the likelihood that the project could be replicated on a wider scale. USAID officials said the project will attempt to address systemic reform through efforts to maintain and restock the forestry base. Some components of the Novokuznetsk project are likely to address environmental sector restructuring. CH2M Hill expects to work with Novokuznetsk’s industry, citizens, and local government to develop a strategic plan and provide recommendations for creating an environmentally safe city by 2010. However, these recommendations could require large capital investments. For example, CH2M Hill recently conducted industrial audits for two steel companies. Although the audits took 6 weeks and involved 7 U.S. advisers and 25 Russian counterparts, company officials said the audits did not provide any new information on major pollution sources. Further, the companies do not have the funding to make the recommended improvements and will have difficulty obtaining it. According to one steel mill executive, the environmental audit allowed the mill to fulfill a condition for a World Bank loan. The Novokuznetsk project places a considerable emphasis on the contractor delivering studies and does not establish any indicators to measure progress in reducing actual pollution. Some components of the Far East project are designed to address the region’s need to maintain and restock its important forestry base.
Efforts are planned to (1) strengthen policies and develop an adequate environmental regulatory structure, (2) create economic and political incentives to use resources efficiently, (3) increase the participation of nongovernmental agencies in environmental decision-making, (4) promote U.S.-Russian partnerships, (5) promote the export of timber products made by Russian workers, and (6) conserve biodiversity. USAID’s decision to use a core contract and delivery orders has caused delays and excessive paperwork reviews for both CH2M Hill and USAID staff. Under this system, USAID must prepare delivery orders and CH2M Hill must submit detailed work plans for each project component within 60 days. The decision to expand the Far East program has also delayed project design and implementation. The expansion covers a larger geographic region and greatly increased the scope of work, including the number of activities and subcontractors involved. The division of responsibility between USAID/Washington and USAID/Moscow has affected the agency’s ability to manage the project. USAID/Washington maintains overall management authority, but has given USAID/Moscow increased monitoring and program responsibility. However, USAID/Moscow officials said they still had minimal authority to manage the project or make changes. USAID/Washington must approve all program decisions, including minor ones, such as country clearances for visitors and all purchases exceeding $500. In April 1995, USAID/Moscow submitted an initial request, which remained under review as of June 1, 1995, for delegation of authority to the field. USAID has had difficulty monitoring the projects. USAID staff said they have not regularly visited the project sites because of the difficulty of traveling to the sites and the lack of adequate staff.
The USAID/Moscow project officer keeps apprised of the project activities primarily by talking to project staff over the telephone or in informal meetings and by reviewing reports by the contractor or visiting teams. The district heating project is one component of USAID’s Energy Efficiency and Market Reform Project for the NIS. The project began in January 1992 and is considered the first economic development effort undertaken by the United States in the region. With $5.3 million in funding, the project was designed to improve district heating systems in six countries: Armenia, Belarus, Kazakstan, Kyrgyzstan, Russia, and Ukraine. Although the contractor, RCG/Hagler Bailly (RCG/HB), met most of its objectives, we found no indication that the project was having a significant impact on the sector. Most of the Russian work was concentrated in two Russian cities, Yekaterinburg and Kostroma, and the project was not completed in Yekaterinburg. Because USAID did not adequately monitor the project, it was unaware of (1) problems that prevented the completion of the project and (2) the long-term benefits, if any, to the beneficiaries. An evaluation conducted by a consultant did not identify obvious problems, and USAID did not address the recommendations in this evaluation. Fuel and energy are an important part of Russia’s economy. The subsidies provided by the former Soviet government to Russian energy consumers, both residential and industrial, created artificially low prices and promoted the inefficient use of highly polluting energy. Since the dissolution of the Soviet Union, Russia has implemented several policies, including increasing or freeing coal, oil, and gas prices, to reform its energy sector. Although still below world market levels, the cost of domestic oil and oil products in Russia doubled in 1993 and 1994. Such increases in energy prices have a significant influence on inflation and social conditions.
As energy prices increase, consumers must find ways to use energy more efficiently. In February 1992, RCG/HB was awarded a contract for $550,000 to complete the project in Russia. The project was amended in August 1992, increasing the total cost to $1.3 million. The project had five objectives: (1) foster improved management of energy use in heating plants by identifying and implementing cost-effective “low cost-no cost” energy efficiency improvements; (2) transfer energy auditing and management techniques, including financial and economic analysis techniques; (3) provide equipment support to implement low-cost options, improve monitoring and energy management, and identify additional energy efficiency opportunities; (4) support the World Bank’s efforts to reform Russia’s energy pricing policies; and (5) promote the emergence of an energy efficiency industry in Russia. RCG/HB and USAID worked with representatives from the Russian Ministry of Fuel and Power, the Commission for Humanitarian and Technical Assistance of the Russian Federation, nongovernmental organizations concerned with energy efficiency and conservation, municipal governments, and industrial enterprises. The two primary Russian cities selected for the project were Yekaterinburg and Kostroma. In these cities, extensive energy audits were conducted of the district heating facilities, and three sites (i.e., hospitals, apartment buildings, and heating plants) in each city were selected as demonstration sites for U.S. energy efficiency equipment, including flow meters, temperature sensors, and thermostatic control valves. The value of the equipment supplied to the demonstration sites was approximately $172,000. The project sites were intended to demonstrate the savings from using no-cost or low-cost technologies and to promote American-made equipment.
In addition, RCG/HB conducted energy audit training seminars and provided energy audit equipment to technicians in Yekaterinburg, Kostroma, Irkutsk, Moscow, Murmansk, and St. Petersburg. To complete its work, RCG/HB contacted more than 250 U.S. equipment manufacturers to determine their interest in conducting business in Russia. The 12 companies that responded participated (at their own expense) in “wrap-up” seminars in four Russian cities when the project ended. The information obtained at these seminars was published in a lessons learned document. RCG/HB completed most of the objectives stipulated in its contract. The products delivered to complete the objectives included energy audits in two cities, energy audit training and distribution of energy audit equipment, a study of natural gas pricing in Russia, and an energy efficiency industry development effort. It also produced a video about the project that was shown on Russian television. RCG/HB was also required to identify, purchase, and install low-cost energy efficiency equipment manufactured by U.S. companies. RCG/HB purchased this equipment; however, due to problems with local conditions, some of the equipment was not installed in Yekaterinburg. An RCG/HB official said that in June 1993, a Russian subcontractor assured RCG/HB that it would install the equipment in Yekaterinburg by the end of 1993. We visited three sites in Yekaterinburg in February 1995 and found all the equipment at one site was still in shipping containers. Russian officials said the equipment at the other two sites only began operating in September 1994 and January 1995, respectively. According to an RCG/HB official, the company had not paid the subcontractor and would not pay until the installation was completed. However, USAID had already paid for the equipment, valued at $8,000. Officials in Yekaterinburg stated that the equipment had not been installed in 1993 for two reasons. 
First, in two cases, the sites (a hospital and an apartment building) were under construction and the construction plans had to be altered to accommodate the equipment. Second, at the other installation site (a district heating facility), the equipment had not been installed, and most likely will not be installed, because the proper Russian authorities had not certified it. Officials in Yekaterinburg stated that it would be illegal to install and operate the equipment before it was certified. They explained that although the equipment can be used for demonstration purposes at consumer locations (e.g., apartment buildings), a public utility cannot use the equipment and the information (e.g., energy consumption data) it produces as a basis for charging customers. Similar equipment was installed in Kostroma, according to USAID and RCG/HB officials, even though it had not been certified. USAID officials told us that city officials were willing to install the equipment because they realized the potential benefits. We found no indication that the project had contributed to systemic reform in the area of energy efficiency. Most of the work was concentrated in two cities, and the project was not completed in either city. In addition, USAID did not adequately monitor the project and could not be certain of any long-term benefits. USAID used an independent consultant, Management Systems International, to evaluate the NIS district heating project, including RCG/HB’s work in Russia. The evaluation, published in July 1993, reported no serious problems and declared the project a success. Specifically, the study indicated that total equipment costs for the four cities in Russia amounted to $418,000 and would produce an annual savings of $1.4 million. It also noted that the equipment would reduce pollution. Furthermore, as a result of the energy efficiency industry development effort, 12 U.S. 
companies had sent representatives to the various countries to participate in seminars held at the end of the project. We found that the consultant’s evaluation was deficient. The evaluation did not mention the equipment installation problems in Yekaterinburg or the need to have foreign equipment certified by the Russian government. Instead, the evaluation stated that “by April 1993, all of the equipment was installed and operating.” In addition, USAID did not specifically direct Management Systems International to assess the products RCG/HB was required to produce, such as the natural gas pricing study for Russia or the lessons learned from the energy efficiency industry development effort. The evaluation did not discuss the quality of either of these products. USAID officials stated that the natural gas pricing study had been completed in a collaborative effort with the World Bank, which used it in its work pertaining to loans made to Russia’s natural gas sector. However, the consultant’s report contradicted USAID’s statement by noting that the World Bank did not make a serious attempt to involve RCG/HB in its work in Russia. Finally, the evaluation did not discuss the training seminars conducted by RCG/HB in Irkutsk, Moscow, Murmansk, and St. Petersburg, or the energy audit kit instrumentation supplied to technicians in these cities. The continued use of these deliverables is an important factor to consider when evaluating the success and sustained benefits of this project. USAID officials were aware of neither the problems we identified in Yekaterinburg nor the shortcomings of the evaluation. They stated that in June 1993, an official from USAID/Washington visited all the NIS sites except Yekaterinburg and that equipment had been installed at the sites visited. A local national employee from the USAID mission in Moscow also visited Yekaterinburg in June 1993 but did not report any problems at that site.
USAID officials discovered the problems we found when they accompanied us to Yekaterinburg. USAID said it would take corrective action. Also, USAID has no mechanism to monitor various outcomes of the project, including (1) the success of U.S. industry in entering the NIS market, (2) policy reforms written into law, and (3) the rate of adoption of new technologies. Although USAID said that the installed equipment would produce annual savings at project sites, it did not record these savings during the 1993 or 1994 heating seasons. Furthermore, USAID had not determined the savings generated by either the energy audit kits provided to technicians in six cities or the energy audits conducted in Yekaterinburg or Kostroma. USAID initiated the NIS Exchange and Training program in the spring of 1993 to train NIS leaders about free-market economies and democratic governance. USAID hoped that training the participants in the United States would provide them with the technical skills and attitudes required to create similar policies, programs, and institutions in their own nations. We reviewed the health care training provided to Russians in late 1993. Our analysis indicated that the health care training had little likelihood of contributing to systemic reform and that USAID now considers the training to be irrelevant after Russia changed its direction for health care reform. The training’s primary objective—to facilitate Russia’s transformation to a democratic free-market system—was unrealistic for a 2-week training course. USAID did not follow up with participants to determine the training’s impact on systemic reforms. Although USAID officials said that most participants have been involved in follow-on projects, only 25 percent are slated for follow-on activities planned in 1995 and 1996. According to USAID, Russia’s health care industry has a number of problems. 
These problems include the virtual collapse of the pharmaceutical and medical supply industry, poor quality of care due to training and technical gaps, serious funding shortfalls, and a centralized system devoid of incentives for efficiency and cost control. Although Russian policies have produced an educated workforce with more doctors per capita than the United States, the workforce lacks many of the basic skills and institutions necessary to function in a democratic, free-market context. USAID contracted in June 1993 with its worldwide training support contractor, Partners for International Education and Training (PIET), to conduct training in the United States for 200 NIS leaders and professionals at a cost of $2.6 million. The training objectives were to facilitate the region’s rapid and sustainable transformation from authoritarian, centrally controlled regimes to pluralistic, democratic countries with free-market economies; provide participants with new skills and knowledge to contribute to economic and social development; promote the value of democratic decision-making; provide an understanding of U.S. programs; and lead to long-term relationships with U.S. institutions. USAID also hoped that participants would share their new skills and perceptions with their counterparts. Under the PIET contract, USAID missions identified the training topics and selected the participants. USAID/Moscow selected the participants based on their positions in oblast’ health care systems and their planned inclusion in follow-on projects. According to USAID, participants went to the United States before participating in follow-on projects so they would be more receptive to reforms. The training project encouraged missions to link training, if appropriate, to ongoing or planned developmental assistance by USAID and others. 
After course topics and participants were selected, PIET was expected to arrange training courses in the United States and provide administrative and logistical support for international travel, living expenses, medical insurance, tuition, books, and other needs. PIET was also expected to (1) ensure that training programs at U.S. training institutions were functioning properly, (2) monitor the participants’ progress, (3) provide USAID status reports, and (4) evaluate each training program. PIET subcontracted with Management Sciences for Health to provide the training in the United States; 42 Russians were trained in health finance and 20 were trained in pharmaceutical management. USAID subsequently contracted with the Academy for Educational Development to make training arrangements. PIET met its contractual requirements by providing training, transportation, and logistics, according to USAID officials. The participants we spoke with in Russia praised PIET’s support and assistance as well as the quality of the training they received in the United States. Our review of sample course assessments showed that other participants generally gave high marks to the training. For example, in the evaluation conducted by the USAID mission, most participants were satisfied with the course and believed it was applicable to their work conditions. PIET also met its monitoring and reporting requirements. PIET maintained contact with the training institutions, called a random sample of participants once a week, contacted the trainers on an as needed basis, and helped participants with general adjustment problems. PIET also provided all the required reports, including regular status reports and course assessments. USAID/Washington officials were satisfied with the quality of PIET’s support and monitoring during the training. USAID was unable to provide any evidence that the training will help Russia’s democratic or economic transformation. 
Although the training may have met some secondary goals, without follow-on activities, fulfilling these objectives will not likely result in systemic reform. USAID/Washington officials agreed that the training could not meet all of the contract’s objectives. They said that a 2-week training course could only “facilitate” reaching these objectives but not actually attain them. Further, they provided the training quickly as a political imperative to respond to the opening in the NIS, and they recognized training alone has limited usefulness. USAID was unable to substantiate that any of the 62 participants contributed to any reforms, partly because the participants lacked the authority, expertise, or resources to influence reforms. However, the participants who had taken PIET health-related courses said the training helped them understand U.S. programs and they had shared their training with others. USAID officials in Moscow and Washington said that training alone would not influence systemic change and that subsequent training was better integrated into follow-on activities. They said that the main purpose of the PIET training was to make participants receptive to follow-on reform projects, which USAID thought would occur. However, USAID later dropped plans for follow-on activities in Central Russia because the oblasts were not reform-minded and the contractor reported that only 25 percent of the Siberian participants would participate in follow-on activities. The Russians did not see health reform as a priority when the early training took place, and Russia has only recently begun to consider the direction of reforms, according to USAID/Washington officials. Further, this early training is now irrelevant because it was based on Russian policy directions that were later discarded as unworkable. 
The mission was forced to move much more quickly than it desired because it was under extreme congressional pressure to quickly establish the training program, according to USAID officials. As a result, the health care training was initiated before it could be integrated into follow-on projects more likely to facilitate systemic change. Further, because the Russians were unclear about what reforms they wanted, the mission had trouble targeting the training. The Russian officials began exploring reform options with the mission in December 1994. USAID/Moscow officials assessed the training after participants returned to Moscow; however, they have had no contact with the participants since then. They did not know which participants, if any, would be involved in any of the follow-on activities planned in Siberia. The International Business & Technical Consultants, Inc. (IBTCI) project did not achieve its goal to increase the availability of commercial real estate to small enterprises, although it did provide potentially useful technical assistance in three cities. The project did not contribute to systemic reform and was not sustainable. IBTCI did not replicate the pilot project—a project objective—in part because the roll-out cities were poorly chosen. IBTCI was responsible for choosing appropriate cities, but its short-term consultants lacked sufficient knowledge of Russia and local conditions to determine what cities would have cooperative officials and could benefit from the project. Much of Russia’s commercial real estate is still owned by the government. Rather than divesting its ownership rights, the central government has decentralized those rights to local governments, both regional and municipal. Although this practice is quite common among other countries in transition, Russia is different because local governments (1) have a virtual monopoly on commercial real estate and (2) have not moved toward commercial real estate leasing using market mechanisms. 
Highly inefficient users occupy valuable commercial space, contributing very little to local budgets, while private sector development is blocked by the unavailability of property. USAID and GKI recognized this problem and signed a task order with IBTCI to develop a solution. The $2-million task order for the rapid diagnosis pilot project and roll-out project was part of IBTCI’s $13.3-million omnibus contract. The initial deadline of May 1994 was extended to December 1994, but without any increase in the cost or level of effort required. The general purpose of the task order was to significantly increase the availability of commercial property in Russian cities. The specific goals of the task order were to examine the causes of limited access to retail space, implement a pilot project in a selected city, and then replicate the pilot project in other oblasts. IBTCI was to (1) deliver a report on the root causes of and solutions to the problem of commercial property access for one city; (2) design an implementation plan to address these issues, including procedural, legal, administrative, financial, policy, and other measures; (3) replicate the pilot project in at least five other oblasts; and (4) produce and nationally distribute publicity and instructional materials for local state property committees, local authorities, and entrepreneurs on how to increase the availability of retail property. IBTCI used a subcontractor, Boston Consulting Group (BCG), to perform the rapid diagnosis phase and conduct the pilot project in the City of Perm’. The goals of the pilot project were to (1) design and test a method for increasing the amount of commercial real estate available to small and start-up businesses and (2) identify any constraints or impediments that might exist. The pilot was intended to serve as a model for instituting the program in five other Russian cities. IBTCI instituted the roll-out in Irkutsk, Tver’, Novgorod, Yekaterinburg, and Vladivostok.
Because it had previously worked in Perm’, BCG used staff who already had a relationship with municipal officials when it began the diagnosis and pilot phases of the project. In contrast, IBTCI relied on consultants who made short visits to the other cities to research, plan, and implement the roll-out. During the rapid diagnosis phase in Perm’, BCG identified three feasible ways of improving access to commercial space: convert residential premises to commercial use, develop a secondary real estate market, and optimize the leasing process. BCG, IBTCI, USAID, and GKI selected the leasing optimization method because they thought some concrete results were possible during the study period, even though it was predicted that this alternative would have limited support and low potential impact. Lease optimization means, among other things, (1) moving toward market-determined rents, (2) removing bureaucratic discretion in space allocation, and (3) creating incentives to sublease unused space. BCG conducted the rapid diagnosis and pilot phase of the project from November 1993 to March 1994 in Perm’. BCG devised a two-track auction system for making municipality-controlled real estate available to private businesses. The first track was an auction for the right to lease specific commercial real estate properties (i.e., a one-time premium). The second track was an auction for the rental rate at which a property would be leased. The purpose of this system was to introduce market mechanisms into the allocation and pricing of commercial real estate. Under this system, bidding for the right to lease and the rent to be paid replaced government bureaucrats with market mechanisms. The results of the first auction, which occurred on March 1, 1994, were not promising. In the first track auction, three properties were available. The right to lease them was sold for each property. In the second track auction, 15 properties, all basements, were available. 
Bids were made on only 3 of the 15, and each received only one bid. The rental rate for the three properties did not exceed the rent that started the bidding. The results of a second round of auctions, which occurred in May 1994, were also disappointing. In the first track auction, 10 properties were available, but the right to lease was sold for only 4 properties, although several parties bid on them. In the second track auction, three properties were available, but only one received a bid, and that was the starting bid. These results were not perceived to have significantly increased the amount of commercial real estate available in Perm’. IBTCI started work on the roll-out in mid-February, before the Perm’ pilot was completed or its results evaluated. IBTCI soon found that none of the five cities chosen for the roll-out had conditions that approximated, let alone duplicated, those in Perm’. The roll-out cities seem to have been chosen more for their geographic and population distributions than for any existing economic, political, and regulatory conditions that might make the Perm’ model replicable. Because of these differences, IBTCI had to deviate from the Perm’ model and basically develop five new pilot projects; nonetheless, it still experienced problems. Irkutsk officials were not cooperative with IBTCI and declared the information needed to assess the commercial real estate situation a state secret. Local officials were not ready to participate in the project. In Tver’, an auction system had been functioning between June 1992 and December 1993. The original investment tender process used in the auction was challenged in court and hopelessly compromised. IBTCI introduced a new tender process in Tver’, but the new system’s effectiveness has not yet been demonstrated. The Novgorod officials opposed conducting right-to-lease auctions because they feared losing future revenue and the city had experienced poor results from a similar auction in November 1993. 
IBTCI focused on establishing a market for municipal, oblast’, and private commercial space by creating a real estate listing center, developing a secondary market, and encouraging officials to allow increased and legalized subletting. The listing center’s effectiveness has not yet been demonstrated. In Yekaterinburg, an effective auction system has been in place since 1992. City officials were not interested in IBTCI’s original task of increasing the use of commercial leases. Instead, they wanted assistance in using retail and commercial space efficiently and in increasing the city’s revenues from property leases. Although IBTCI submitted some analyses and recommendations addressing their concerns, city officials told us that IBTCI came to town on different occasions, spent little time there, did not speak to the appropriate local officials, and presented an academic report that was of little use to them. Vladivostok city officials were interested in privatizing commercial real estate, but were unable to devise a method that would use mortgages to provide substantial revenue for the city. IBTCI devised a mortgage instrument that allowed the city to continue receiving income by holding the mortgages and allowed small business owners to bid for a property and provide as little as 5 percent of the final cost as a down payment. The city auctioned one property in August 1994 for under $7,000, but local officials doubted they would use an auction again because the city did not have any more excess property. By the time IBTCI had completed its work in the five cities, none of the cities had participated in any activities that remotely resembled the Perm’ model. As a result, the objective of replicating the pilot project in other cities was not achieved. There were various reasons for IBTCI’s inability to replicate the Perm’ model. First, tensions between IBTCI and BCG caused some problems.
BCG performed both the rapid diagnosis and the pilot phases, but IBTCI determined that BCG’s approach was not adequate. Russian officials monitoring the project were aware of tensions between IBTCI and BCG early in the project, but IBTCI was obligated to fulfill the contract and replicate the model. The tensions between IBTCI and BCG resulted in little continuity of personnel from the pilot to the roll-out phases. Second, a provision of Russia’s 1994 State Privatization Program Act and its implementing regulations caused problems in the Perm’ pilot. The provision gave lessees who obtained their leases competitively (i.e., at an auction) the right to buy the property at the end of the leases. The implementing regulation set an extremely low selling price for such privatized properties. IBTCI said that the act’s provision and the implementing regulation stopped the Perm’ model because city officials did not want to lose revenue from leases and did not want to be forced to sell leased property for extremely low prices. Even though reports identified the problem as early as January 1994, USAID, GKI, and the Russian Privatization Center took no effective action to address the issue. Third, although officials at the federal level agreed earlier that the project should be done and that the Perm’ model was viable, local officials in the roll-out cities did not agree with the Perm’ model or its usefulness in their cities. Fourth, although the consultants used by IBTCI for this project had some experience in Russia and some spoke Russian, Russian authorities questioned the level of some IBTCI consultants’ professional experience. In addition, the consultants did not have enough knowledge of the Russian localities and local politics to choose roll-out cities well. IBTCI staff did not reside in the cities during the roll-out. Instead, they would fly in, do a few days work, then leave. 
Thus, they were unable to identify what cities would be the best candidates for replicating the Perm’ model. Even BCG had problems carrying out a successful pilot, despite its knowledge of and relationships in Perm’. The project was not sustainable and did not contribute to systemic reform. Although IBTCI’s final report provided solutions to specific problems, the project did not implement the pilot or develop a method that could be replicated in other cities. USAID officials and the Russians who were in charge of disseminating the report did not know whether or where the IBTCI “solutions” had been applied in any but the six cities. City officials we interviewed in Yekaterinburg and Vladivostok were not using the concepts of the project. GKI and the Russian Privatization Center had originally proposed the project, which supported a federal initiative. However, an existing GKI act and its implementing regulation potentially forced local governments to sell leased property at low prices to anyone who bought the lease at an auction. This regulation contributed to the poor results of the project. USAID managed this project—from Washington with limited help from USAID/Moscow—jointly with GKI and the Russian Privatization Center; it relied on consultants from the Harvard Institute for International Development and the Center to help monitor the project. Nonetheless, USAID did not monitor the project adequately. Even though IBTCI filed the required reports, these reports failed to describe how much the roll-out deviated from the Perm’ model. Center officials said they first became aware that the pilot was not being implemented in other cities in late May 1994, long after the roll-out could have been redirected to other cities. In some cases, USAID did not take corrective action even when problems were known. For example, a Harvard consultant who visited some sites raised questions about the cities selected for the roll-out, but USAID did not act on these concerns.
Similarly, the problem with the State Privatization Program Act and its implementing regulations was mentioned in reports in January 1994, but no action was taken to resolve it. Finally, USAID officials said they were aware of the tensions between BCG and IBTCI, but simply told IBTCI to work the problem out itself. USAID/Moscow officials said they did not have enough staff to intervene when problems arose, visit the project sites, and talk with beneficiaries about how the project was progressing. The lack of quantifiable objectives or time frames in the Tri Valley Growers’ (TVG) project design makes it difficult to measure the project’s success. TVG helped to facilitate the work of two agribusiness partnerships in Russia; nevertheless, USAID concluded that TVG did not perform adequately. It is too early to determine the long-term economic viability of the partnerships; however, the involvement of U.S. companies increases the likelihood that the partnerships will be maintained. The partnerships will probably not have any measurable effect on Russia’s agricultural sector because of their limited size and number. Agriculture plays an important role in the Russian economy. Although estimates vary, Russia has approximately 27,000 large state and collective farms, which cultivate approximately 90 percent of Russia’s arable land. Approximately 270,000 private farms cultivate 5 percent of the arable land. The remaining 5 percent is made up of private garden plots. The total farming population comprises about 26 percent of the country’s population. Subsidies and income transfers to the agricultural sector represent 25 percent of Russia’s public expenditures. Some of these subsidies could be expected to be eliminated if the agricultural sector were privatized. Russian agriculture is a low-productivity sector. For example, milk cows and potato and grain crops yield about half of western levels, and labor productivity is probably as low as one-tenth.
In addition to low productivity, Russia has been plagued by losses of up to 50 percent in its storage and handling systems. Finally, Russia’s food processing system suffers from poor management and a lack of quality produce, additives, ingredients, and packaging materials. Although the Russian government has begun reforming the agricultural sector, the actual transformation of farms and agribusiness enterprises into market-oriented, productive entities is moving slowly. In 1992, it reorganized state and collective farms and agricultural input and output distribution enterprises into joint stock companies. However, most farms have not altered their operations to increase productivity and competitiveness. In August 1992, USAID developed the agribusiness partnerships project as the cornerstone of its Food System Restructuring Project. The agribusiness partnerships project was designed to create efficient systems for providing inputs to agriculture and for processing and distributing agricultural products. USAID intended to catalyze NIS private sector activity by facilitating the involvement of private U.S. agribusinesses. Between January and May 1993, USAID signed cooperative agreements with three agribusiness cooperatives: Citizens Network for Foreign Affairs (CNFA), TVG, and Agricultural Cooperative Development International. We reviewed USAID’s cooperative agreement with TVG, which had obligated $5.2 million for the region. To achieve the project’s objective, TVG was to facilitate partnerships between American and NIS private agribusiness-related enterprises. However, the agreement did not specify the number of partnerships or the related time frames. TVG established an office in Moscow staffed by one American director and three Russian nationals. This office was supported by several TVG headquarters staff in California. 
The American director did not have an agribusiness background but was responsible for managing the office, identifying potential Russian and American agribusiness partners, reviewing partnership proposals, and submitting the proposals to USAID for final subgrant approval. According to TVG officials, TVG identified potential Russian agribusiness partners through a network of contacts at the Ministry of Agriculture, Association of Individual Farms and Agricultural Cooperatives of Russia, World Bank, European Bank for Reconstruction and Development, Peace Corps, investment funds, and regional and local administrations. To identify American partners, TVG canvassed its members in the United States, advertised in agricultural publications, contacted agribusinesses via telephone, and looked for firms already operating in Russia. Once identified, TVG worked with the American and Russian partners to develop proposals for USAID’s approval. After receiving USAID approval, TVG awarded subgrants to U.S. agribusinesses working in Russia primarily to provide technical assistance and agricultural training to help create efficient food systems. The American agribusiness partner was required to provide at least 2.5 times the level of funding provided by USAID, to ensure its commitment to the partnership and the long-term economic viability and sustainability of the joint venture. The items purchased with the USAID subgrants are referred to as “additionality” components, or those components that might otherwise not be included in the joint venture without USAID funding. Additionality components include additional training and facility expansion. TVG established six partnerships in five NIS countries, with two in Russia. As of March 1995, one additional partnership in Russia was awaiting USAID approval. The first partnership established in Russia was with Petoseed Company, Inc., and is located in Krasnodar. 
Petoseed produces vegetable seeds that will be sold in the NIS and internationally. During the 1994 growing season, Petoseed produced 11,000 pounds of seed in Russia. The second TVG Russian partnership involves CTC Foods Company, which is building a potato processing facility in Pushchino. If finished, the facility will produce dried potato flakes that will be sold primarily to hospitals and schools. Both American agribusiness partners exceeded the required level of partnership funding. Contributions by USAID, U.S. agribusiness partners, and Russian beneficiaries to the TVG partnerships in Russia are shown in table III.1. TVG had difficulty identifying partnerships. TVG staff had difficulty beginning work in Russia because of poor telecommunication and office facilities, the chaotic Russian business environment, the limited number of American firms willing to invest in Russia, limited funding, and a small staff. According to a TVG official, Petoseed and CTC Foods contacted TVG to participate in the project. However, both companies were already working in Russia before USAID established the agribusiness project, and both had located Russian partners on their own. He said they would have invested in Russia without USAID involvement. An official at the Association of Individual Farms and Agricultural Cooperatives of Russia told us that the Association tried to work with TVG to identify Russian partners but received only “empty promises.” Although USAID never specified the number of partnerships that it wanted to establish within a given time frame, it concluded that TVG had not performed adequately. Between May and December 1993, USAID expressed concern about the number of partnership proposals TVG was submitting and the quality of the proposals. A February 1995 USAID review of the agribusiness project stated that TVG required more support from USAID staff and was less committed to the project than CNFA.
TVG closed its office in Moscow in August 1994 and stopped the Russia part of its program after USAID terminated the agribusiness partnerships project there. Nevertheless, USAID’s review noted that the partnerships to which TVG had made subgrants were doing well. However, a TVG official told us in May 1995 that because of financial problems, CTC Foods may not be able to continue its work in Russia. Consequently, the processing facility in Pushchino may never be constructed. According to a USAID official, TVG’s Moldova office now monitors the Russian subgrants. The agribusiness partnerships developed by TVG in Russia have not been operating long enough to adequately judge their impact. However, due to their limited scope, it is unlikely that the partnerships will have a significant effect on reforming Russia’s agricultural sector. USAID/Washington designed the agribusiness partnerships project in 1992, before the USAID/Moscow mission was opened. USAID/Washington and USAID/Moscow split the oversight responsibilities: Washington was primarily responsible for TVG’s compliance with the cooperative agreement and Moscow was responsible for subgrant proposal evaluation. Final grant approval was a joint Moscow/Washington effort. Although USAID/Moscow repeatedly raised concerns about the agribusiness partnerships project’s ability to influence systemic reform, the project proceeded. USAID/Moscow officials called for a review of the project as early as November 1993, and they developed a statement of work for an evaluation team. However, USAID/Washington told USAID/Moscow to “forget the assessment and get on with the job.” Consequently, no assessment was conducted. According to USAID officials, an evaluation is planned for June 1995. USAID officials also said that the agency wanted to implement the project quickly and demonstrate results. TVG’s Moscow director stated she was pressured to submit proposals quickly because USAID was being pressured by Congress.
However, both CNFA and TVG officials complained that USAID’s subgrant approval process was arduous and lengthy. They said it took several months for USAID to accept or reject a proposal and added that USAID/Washington caused most of the delay. USAID/Washington officials said the delays were caused by the time required to research legal issues, conduct environmental audits, and work through the Washington bureaucracy. The cooperative agreement with TVG called for quarterly program performance reports and annual progress reports. An independent accounting firm was to audit TVG’s financial statements. Although USAID officials said that TVG met all of its reporting requirements, our review indicated that TVG had not submitted annual reports. According to USAID officials responsible for the project in Russia, USAID staff visited only half the project sites established by TVG, CNFA, and Agricultural Cooperative Development International between May 1993 and November 1994. USAID was required to annually assess the performance and program direction of the cooperative agreement and contract for an independent external evaluation. As of March 1995, it had done neither. However, CNFA completed an evaluation of the agribusiness partnerships project in August 1994 at USAID/Moscow’s request. It reported that the project had not started agribusiness partnerships quickly, had not made a significant contribution to sectoral reform, and had little to show for the “additionality” purchased with USAID funds. CNFA’s internal evaluation did not address TVG’s performance. USAID completed an internal review in February 1995, but the review did not cover the additionality components. The review stated that it was unrealistic to expect the overall project to have a significant, measurable impact on the food system in the NIS. USAID has discontinued the agribusiness partnerships project in Russia and, as of September 1994, stopped accepting proposals for Russian agribusiness partnerships. 
In addition, USAID has decided not to obligate any additional funds for the project. Agency officials stated that the project itself cannot adequately address the obstacles to reforming the agricultural sector and indicated that other projects, such as the Russian-American Enterprise Fund, were better vehicles for financing joint ventures. The Russian officer resettlement pilot project has been successful in providing the required housing units, although not within the original time frames. The project’s secondary objectives—to provide job skills training for demobilized officers and to help facilitate housing sector reform—were only partially met. By implementing a pilot program, USAID was able to test the viability of a housing construction project and apply lessons learned to the $160-million follow-on project. Planning and Development Collaborative International (PADCO), the contractor tasked to provide construction management services, was successful in part because it (1) had experience in working on housing sector reform in Russia, (2) established a physical presence in Moscow and in the field, (3) obtained at least some buy-in and involvement from the local Russian government, and (4) employed Russian staff to oversee construction activities. The Russian Ministry of Defense has traditionally provided qualifying retired and demobilized military officers with a dwelling unit or plot of land and some job skills training. After the Soviet Union dissolved, between 200,000 and 350,000 officers needed housing; approximately 42,000 were located in the Baltic Republics of Estonia, Latvia, and Lithuania. However, since the dissolution, the Russian government has lacked the housing stock to resettle all the demobilized military officers. Further, Russia’s severe economic problems, housing shortages, and lack of “social guarantees” for these retired officers have delayed troop withdrawals.
President Clinton announced the Russian officer resettlement program at the Vancouver Summit in April 1993. Later, in July 1993, he stated at the G-7 Heads of State meeting in Tokyo that the program should encourage rapid withdrawal of demobilized Russian officers from the Baltic Republics. The Russian Officer Resettlement Initiative is being conducted in two phases—a $6-million pilot and a $160-million follow-on project. The pilot’s primary objective was to construct 450 housing units by July 1994 for the resettlement of demobilized Russian military officers. The follow-on project was to provide up to 5,000 units (2,500 constructed and 2,500 voucher certificates) by November 30, 1996, for officers demobilized in the Baltics or other countries outside Russia. The pilot project’s secondary objectives were to provide job skills training, experiment with new housing technologies, assist private firms in housing development and construction, and expand the scope of housing choices. To implement the pilot project, USAID contracted with PADCO for construction management services. It also awarded fixed-price contracts to five Russian builders and one private voluntary agency to construct housing units in five cities. Finally, it provided a grant to the International Catholic Migration Commission for training. According to project officials, PADCO assisted the project design team that included officials from USAID/Washington and USAID/Moscow. This team visited potential project sites, evaluated projects, and negotiated construction contracts. PADCO was responsible for managing the construction activities and monitoring contractor performance. U.S. officials said the design team created the pilot with only minimal input from the Ministry of Defense or the Ministry of Construction. USAID officials added that the design team conducted its own field assessment to select participating cities and worked almost exclusively with the local authorities in these cities. 
The local authorities were to provide infrastructure services such as heating, water, and road access for the housing units. USAID relied on the Russian Ministry of Defense to select the officers to receive the housing. The initial design for the pilot program did not stipulate where the officers should come from, but as a result of the Tokyo G-7 meetings in July 1993, USAID established criteria that gave priority to demobilized officers living in the Baltics. The criteria also included housing for officers from other areas outside Russia because two cities were reluctant to provide infrastructure for officers exclusively from the Baltics. USAID’s compromise with these cities allowed some demobilized officers from their own jurisdictions to receive housing. In Nizhniy Novgorod, half the officers could come from its jurisdiction, while in Volgograd, 40 percent of the officers could come from its jurisdiction. USAID and PADCO officials said beneficiary composition would also be an issue in the follow-on project. PADCO’s project staff established a long-term presence in Moscow and traveled regularly to the various building sites. It also hired and trained Russian construction specialists to supervise the construction in each city. USAID officials said PADCO’s experience in Russian housing issues helped facilitate this project. PADCO and Russian contractors generally met the program’s primary objective of providing housing units, although not within established time frames. As of July 1994, only 94 (21 percent) of the 452 units were completed, although as of February 15, 1995, the project had provided 422 units (93 percent) through a combination of construction and voucher certificate activities. (See table III.2 and figs. III.1 and III.2.)
Of the 10 project sites in 5 cities, USAID terminated three: one because newly elected local officials refused to meet the previous administration’s commitments to provide infrastructure support to the housing units and two because contractors defaulted on their building commitments. In Novosibirsk, USAID and PADCO officials said federal and oblast’ officials were not involved in the initial agreements. Therefore, they had no authority to require the local administration to abide by the contract, and they would not allocate additional funds for the infrastructure. In Lipetsk, the contractor was a private voluntary agency that subcontracted with a local Russian construction firm to execute the work. When the subcontractor defaulted, the agency was unable to find a replacement to complete the work. At the Nizhniy Novgorod 50-unit project site, project officials said the Russian contractor ran into financial problems and stopped work, claiming that the $8,500 per unit allowed in the contract was not enough to cover costs. Although the city offered several incentives, including a $300,000 letter of credit and land for additional construction, USAID and PADCO officials said the contractor was unwilling to spend his own funds and the contract was terminated by USAID. USAID officials said the contractor at the 128-unit Nizhniy Novgorod site was concerned that $8,500 per unit was not enough to cover the cost of construction. The contractor had completed almost 70 percent of construction when increased construction costs, caused by rapidly rising inflation (9 percent a month) and the devaluation of the ruble, forced him to stop work. According to USAID officials, because the contractor had done a good job, used his own funds from other projects, and was well-connected with city and oblast’ officials, he negotiated an agreement so that the oblast’ and USAID would cover the increased costs of the 128 units.
To ensure the project’s completion, USAID and the oblast’ administration each provided an additional $700,000, thus increasing the per unit price to $19,500. Because contracts were terminated months after they were awarded, USAID developed a method to meet the housing requirements quickly. It awarded a contract to the Urban Institute to implement a voucher certificate program, which allowed officers to purchase existing local housing in a participating area or housing under construction. Because of increased construction costs, the inclusion of land, and infrastructure costs, the vouchers were increased from the $8,500 per unit in the construction program to a maximum of $25,000 per unit. According to USAID officials, using voucher certificates allowed the pilot program to provide housing units much more quickly than through direct construction. As of January 30, 1995, 80 vouchers had been disseminated to the officers, and 76 (95 percent) of them had been used to purchase units, which were turned over to the officers. The International Catholic Migration Commission’s efforts to address one of the project’s secondary goals of job skills training have shown limited results, according to USAID housing officials. As of December 1994, it had arranged training for 46 beneficiaries who attended business courses in Pskov, Novgorod, and Volgograd. The USAID official said construction delays and the subsequent delays in officers relocating to their respective cities affected start-up activities. Further, the official said the Commission did not adequately identify the officers’ training needs and failed to recognize that many of them were not interested in training. Project officials said only minimal progress was made in addressing other secondary goals, such as demonstrating new housing technologies, expanding customer choices, and implementing more stringent quality control standards.
For example, in Tula, contractors constructed 14-, 16-, and 30-unit duplexes, which took as long as or longer to build than traditional high-rise structures. (See fig. III.2.) PADCO field representatives worked with local builders to ensure that quality control measures were introduced and achieved. The officer resettlement pilot project accomplished its objective of providing housing to demobilized officers. The project was not designed to address systemic reform or to be sustainable, and it did not do so. PADCO officials said the attempts to sustain the effects of the project’s secondary objectives were short-lived, although the lessons imparted by PADCO—new housing technologies, housing choices, and quality control measures—may have some positive effect on the building industry and contractors. USAID and Urban Institute officials said the lessons learned from implementing the voucher certificate activity by the banks, realtors, and local governments may be used to facilitate the local governments’ transition to a private housing market. As a result of the pilot project, USAID incorporated the lessons learned as it designed the $160-million follow-on initiative to provide 5,000 units to officers from the Baltics. The primary changes included (1) obtaining total support, involvement, and buy-in from all three levels of Russian government; (2) using the voucher certificate program to expedite the relocation of 2,500 officers; (3) stipulating that a maximum of 10 percent of the demobilized officers could come from local jurisdictions; (4) using a U.S. construction management firm as the prime construction contractor, subcontracting with the individual builders; (5) using only experienced, well-connected Russian builders; (6) selecting partially completed buildings and sites with existing infrastructure; (7) using a traditional Russian housing design; and (8) providing a more realistic per unit cost ($25,000 versus $8,500).
According to USAID officials, these changes are expected to allow the follow-on project to proceed more quickly and efficiently than the pilot. USAID/Moscow had management responsibility for the project and generally did a good job of managing, monitoring, and overseeing it and coordinating with USAID/Washington. PADCO officials said USAID/Moscow actively assisted the contractors in reaching acceptable compromises with government officials and contractors. The USAID/Moscow project team reviewed project status reports, visited project sites, and held regular meetings with contractors. Finally, USAID terminated work when problems could not be overcome. The following are GAO’s comments on USAID’s letter dated June 1, 1995. 1. We have incorporated these comments into the report where appropriate. 2. Although we noted project shortcomings, we also recognized the contribution Deloitte & Touche made toward the privatization process and considered the project a success. Moreover, we recognized USAID’s positive contribution to the overall privatization effort. 3. We conducted a detailed review of the performance of Tri Valley Growers, one of the three contractors responsible for implementing the agribusiness partnerships project, to determine whether this expenditure of funds had any sustainable impact. We concluded that it did not. Although we did not draw any conclusions about the agribusiness partnerships project as a whole, our analysis casts doubt on whether the project can have a systemic impact if the individual partnerships are not having an impact. (See comments 29 and 30 for additional discussion.) 4. It is too early to know whether, as USAID predicts, ongoing activities in the energy sector will result in significant sector reform. Many of these projects are just starting and must overcome many obstacles.
For example, in our September 1994 report on nuclear safety, we reported that there are no guarantees that the international assistance effort will result in safer reactors or expedite the closure of the riskiest reactors. In fact, in the absence of a commitment to close down the reactors, the assistance may encourage their continued operation. We noted that donor countries face formidable challenges in promoting the closure of the Soviet-designed reactors because the countries operating them depend on nuclear power to meet their needs for domestic energy and export income. 5. We agree that the new evaluation system is promising in that it should provide an improved basis to evaluate USAID’s programs in the NIS. However, since the first report is not due until November 1995, it is too early to know whether the system will fulfill its promise. The value of the system will depend on the indicators selected, the reliability of the data, and the subjective judgments of USAID officials preparing the reports. For the system to have credibility, USAID will have to be able to identify shortcomings as well as successes. 6. We have modified the report to reflect this information. 7. We were able to reconcile obligations and expenditures in the USAID financial report with other USAID documentation. Accordingly, we have deleted the examples from the report. 8. Our draft report included information on the work of the Consuls General. We have modified our report to update the information on increased site visits. 9. Although market forces played a role in the limited use of some of the centers, the lack of local support as well as other factors also caused the low activity levels at some centers. More importantly, it is questionable whether USAID should spend funds on activities without a market unless it has a strategy to create demand for the product it is financing. 10.
We visited only one site (in Siberia during February) because of the limited amount of time we had in country. Vorkuta can be reached by plane in the winter. USAID can visit the sites at other times during the year. We believe that three site visits—two occurred after we began our audit—in 31 months is inadequate for monitoring purposes. Day-to-day contacts with PIER staff are important; however, they do not substitute for site monitoring or provide USAID with an objective basis for evaluating the project’s success. 11. The accomplishments noted in our report are those that have had the greatest impact. PIER did not provide us with any statistics that indicated increased mine safety or productivity. Moreover, a September 1994 study produced by the U.S. Department of the Interior indicates that mine safety in Russia is actually getting worse. The beneficiaries we spoke with indicated that they were implementing new mining methods introduced by PIER; however, they did not mention any measurable increases in productivity. In addition, although productivity and efficiency are important, overall production for the coal sector is still too high. Finally, PIER’s Moscow director stated that this project has had the greatest impact in the areas of restructuring, private sector involvement, and social safety net development. 12. Our report does not state that an interim evaluation is imminent and may lead to activities being redirected. Our report states that “USAID management admitted that no annual assessments or midterm evaluations were conducted,” even when required by the cooperative agreement. 13. USAID’s new procedures did not affect the program during the time frames we reviewed. Also, the work plan example should be taken in the context that several iterations of the plan have been submitted and revised since November 1994. 14. Our draft report did not recommend that more authority be delegated. 15.
Our report was modified to show that the Ministry of Environmental Protection and Natural Resources was involved in the initial selection of project activities. However, the Ministry did not participate in designing the projects as USAID suggests. USAID acknowledges the almost immediate shift of its relationship from the central to the local government once the projects were selected. We remain concerned over the lack of federal involvement, especially regarding the provision of resources and the limited potential for replicability. We believe that without outside funding or support from the federal level, sustainability and replication will be difficult. 16. As indicated in the report, we found that as of February 1995, the Far East project had contributed little to systemic reform and is unsustainable without outside funding. The report discusses the project’s attempt to address systemic reform through efforts to maintain and restock the forestry base. 17. As indicated in the report, the deliverables identified in the delivery orders generally cited reports and studies as the results. We are unable to verify USAID’s statements regarding the project’s results. 18. Although we recognize that this project was only one of many in the energy sector, we found that the project is unlikely to contribute to systemic reform because of its design and the lack of monitoring and follow-up by USAID. 19. USAID suggested that we select this project in part because USAID represented it as a success, based on an independent evaluation. We visited only Yekaterinburg for two primary reasons: USAID suggested that it was a good site to visit and it was one of only two cities where equipment was installed and extensive energy audits and training were provided. However, as we reported, the equipment had not been installed. USAID’s assertion that our conclusion is based almost entirely on the site visit is wrong. 
Our conclusion is also based on discussions with representatives from RCG/Hagler Bailly, Joseph Technology, Honeywell, and USAID officials responsible for the project and our review of numerous documents on the entire project. 20. We did not make statements or draw conclusions about other projects in the program or about the overall program. We noted that there was no indication that the project we reviewed had contributed to systemic reform. The energy efficiency audits and demonstration sites can only have an impact on systemic reform if USAID ensures that (1) equipment is installed; (2) equipment and training are used; (3) the recommendations in the energy audits are implemented; (4) the results of the project are monitored, recorded, and publicized; (5) appropriate personnel have access to the demonstration sites; and (6) problems such as lack of equipment certification are corrected. At the time we conducted our fieldwork, USAID was not ensuring any of these elements because the project did not include any mechanisms for long-term monitoring or replicating the project. USAID, in its comments, acknowledged that the project alone is unlikely to have an impact on systemic reform. 21. We agree that the dollar value of the uninstalled equipment constitutes a relatively small percentage of all the equipment purchased. The issue is that USAID was unaware of the problems in Yekaterinburg, had not monitored whether any cost savings had been achieved, and did not know whether any systemic improvements had resulted from the equipment, energy audits, and training provided. 22. The consultant’s evaluation indicated that all of the equipment was installed in April 1993. The documents we reviewed indicate that the equipment was provided in April and May 1993.
If the equipment did not arrive until June 1993 as USAID suggests, USAID should have known the evaluation had problems when it indicated that all the equipment was installed 1 or 2 months before the equipment ever arrived. 23. We contacted individuals who were identified as participants by USAID/Moscow. 24. Our conclusion that the training was irrelevant was not based on our discussions with the participants of the PIET training courses. It was based on statements by various USAID officials, including the USAID/Moscow Mission Director and the Chief of USAID/Washington’s Europe and the Newly Independent States/Health and Population Office. For example, USAID officials told us that a 2-week training course without follow-on activities could not be expected to result in any systemic reform. In addition, as we also noted in the report, because USAID had not conducted any long-term monitoring of the participants, it had no evidence that any of the participants instituted systemic changes based on the training. The opinions and views of the participants we interviewed were used to provide insight as to why no systemic changes had occurred. 25. Contrary to USAID’s comments, we referred to the course evaluations in our draft report. We stated that “. . . most participants were satisfied with the course and believed it was applicable to their work conditions.” However, our assessment was not concerned with whether the participants were satisfied with the course but with whether the goals of the PIET contract and the Freedom Support Act were fulfilled. 26. We question USAID’s assertion about the actual positive impact of the training and follow-on assistance. First, at the time of our review, follow-on assistance was not planned to begin for another 6 months and no assessment had been completed to confirm or deny USAID’s assertion. 
Second, despite repeated USAID statements that the Siberian participants were involved in follow-on projects, USAID project officials did not know how many were actually involved. When the contractor compiled this data for us, the USAID project official in charge of the program was surprised that 75 percent of the participants in Siberia were not involved with the planned follow-on activities. Finally, USAID health officials we spoke with were unanimous in their assessment that the follow-on project’s progress to date has been a disappointment. 27. Contrary to USAID’s assertion, we met with the Russian Privatization Center (RPC) official responsible for the project. This official questioned the competence of some of the consultants. We also believe that the characterization of this project as a qualified success is an overstatement. Project task orders were never modified, thus the focus of the project remained to increase the availability of commercial real estate. However, after project completion, large amounts of commercial real estate continued to be leased in the selected cities under conditions that encouraged inefficient use, and the municipalities failed to maximize revenues. At our most recent meeting, on June 2, Jay Kalotra presented a preliminary draft of the wrap-up memo for the project. At that time, I reminded Jay several times that deviations from the Perm’ model would have to be rigorously defended to both USAID and the senior management of the RPC. Jay’s response was that IBTCI deviated from the Perm’ model in large part because the model was ill-suited to the chosen roll-out cities. To a considerable extent, this may be true. However, this obviously does not exonerate IBTCI, since their task was to find cities where a pure roll-out could be performed. [underscoring supplied] In its most recent memos, IBTCI suggests that they told us at the outset they would adopt a broader approach than BCG took in Perm’. 
While it is true that IBTCI states some very ambitious goals, it is disingenuous to suggest that we agreed to replace the Perm’ model with something else. . . . . . . However, as noted above, even three weeks ago IBTCI was maintaining the pretense of Perm’-style results. And it was only when we actually visited the project sites that we could see the extent to which deviations had occurred. It is not realistic to expect the agribusiness partnerships program to have a significant, measurable impact on overall food systems in the NIS. The limited number of partnerships being supported suggests such national level impacts are unlikely during the life of the activity, if ever. USAID/Moscow staff stated that the agribusiness partnerships project could not by itself influence systemic reform. 31. Although TVG has established three partnerships in Russia, only one (Petoseed) is functioning. According to TVG staff, CTC Foods has run into financial problems; consequently, its potato processing facility may never be constructed. Finally, the third partnership (Big Sky Foods Trading, Inc.) has only recently been approved, and it is too early to determine whether this project will be successful. 32. USAID staff in Moscow and Washington characterized the project as discontinued because no more partnerships can be introduced and no more funds will be obligated. 33. USAID/Moscow staff said TVG had not performed adequately and had not identified appropriate partnerships. The documents we reviewed also indicated that TVG was not performing well. 34. Our report includes examples of the causes for delays in the approval process, including the need to deal with legal issues. 35. We modified the report to reflect this foreign policy goal. However, the primary objective for the pilot project announced in April 1993 did not focus on relocating officers from the Baltics. The announcement made in July 1993 focused the program on the removal of officers from the Baltics. 36. 
USAID is incorrect in stating that the oblast’ was involved in signing the memorandum for Novosibirsk. It was only signed by USAID and the municipality of Novosibirsk. Major contributors to this report: Louis H. Zanardi, Eugene D. Beye, Edward J. George, Jr., Peter J. Bylsma, David M. Bruno, and Jodi McDade Prosser. Pursuant to a congressional request, GAO reviewed the Agency for International Development's (AID) assistance projects in Russia, focusing on whether: (1) individual AID projects met their objectives and contributed to systemic reforms; (2) the projects had common characteristics that contributed to their successful or unsuccessful outcomes; and (3) AID adequately managed the Russian projects. GAO found that: (1) some of the projects reviewed fully met or exceeded their objectives, while other projects met few or none of their objectives; (2) three AID projects contributed to fundamental structural changes in Russia because they had sustainability built into their design and they focused on national or regional issues; (3) the successful projects had broad and strong support from all levels of the Russian government, U.S.
contractors with long-term physical presence in Russia, a broad scope to maximize benefits, and specific sustainability objectives, and complemented or supported Russian initiatives; (4) Russian officials' commitment to reform in certain sectors was critical to project success; (5) the unsuccessful projects were poorly designed and implemented and often had little or no impact on problems; (6) AID made certain exceptions to its normal procedures and processes in its desire to respond quickly to assist Russia; and (7) AID failed to adequately manage some projects because of problems in delegating management and monitoring responsibility to the Moscow AID office, inadequate staff, and inadequate management information systems.
The Highway Revenue Act of 1956 established the Highway Trust Fund as an accounting mechanism to help finance federal highway programs. In 1983, the Highway Trust Fund was divided into two accounts: a Highway Account and a Mass Transit Account. Receipts to the Highway Account are used to fund highway programs, through which billions of dollars are distributed to the states annually for the construction and repair of highways and related activities. Treasury uses a revenue allocation and reporting process to distribute highway user taxes to the Highway Trust Fund. Financing for the Highway Trust Fund is derived from a variety of federal highway user taxes, including excise taxes on motor fuels (gasoline, gasohol, diesel, and special fuels) and tires, sales of new trucks and trailers, and the use of heavy vehicles. As table 1 shows, the excise tax rates and distribution of the tax revenues vary. The different tax rates reflect federal policy decisions. For example, in the 1970s and 1980s, the federal government adopted numerous policies to encourage the use of alternatives to imported fossil fuels and help support farm incomes. Among these policies were tax incentives that targeted the use of alcohol fuels derived from biomass materials, such as ethanol. Ethanol-blended fuels (gasohol) are partially exempt from the standard excise tax on gasoline (18.4 cents per gallon). The proportion of ethanol contained in each gallon of fuel determines the size of the partial exemption. The most common ethanol blend contains 90 percent gasoline and 10 percent ethanol and is currently taxed at 13.1 cents per gallon—an exemption of 5.3 cents. The federal government also uses the distribution of excise tax receipts to different accounts to achieve policy goals. For example, a small part of the excise tax on most motor fuels is distributed to the Leaking Underground Storage Tank Trust Fund to clean up contamination caused by underground storage tanks.
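The partial exemption described above is simple per-gallon arithmetic, sketched below using only the figures cited in the text (the 18.4-cent standard rate and the 5.3-cent exemption for the common 10-percent ethanol blend). The million-gallon volume is a hypothetical illustration, not a figure from this report, and blends with other ethanol proportions carried different exemption amounts that are not assumed here.

```python
# Effective excise tax on gasohol, using the per-gallon rates cited in the text.
GASOLINE_TAX_CENTS = 18.4      # standard gasoline excise tax, cents per gallon
GASOHOL_EXEMPTION_CENTS = 5.3  # partial exemption for the 10-percent ethanol blend

effective_rate = GASOLINE_TAX_CENTS - GASOHOL_EXEMPTION_CENTS
print(f"10% ethanol blend taxed at {effective_rate:.1f} cents per gallon")

# Excise revenue forgone on a hypothetical volume of gasohol sales:
gallons = 1_000_000
forgone_dollars = gallons * GASOHOL_EXEMPTION_CENTS / 100  # cents -> dollars
print(f"Exemption on {gallons:,} gallons: ${forgone_dollars:,.0f}")
```

The same subtraction reproduces the 13.1-cent rate quoted in the text for the 90/10 blend.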
Additionally, 2.5 cents of the tax received on each gallon of gasohol is transferred to the General Fund, rather than the Highway Trust Fund, for deficit reduction purposes. TEA-21 continued the use of the Highway Trust Fund as the mechanism for accounting for federal highway user taxes. TEA-21 also established guaranteed spending levels for certain highway and transit programs. Prior to TEA-21, these programs competed for budgetary resources through the annual appropriations process with other domestic discretionary programs. New budget categories were established for highway and transit spending, effectively establishing a budgetary “firewall” between those programs and other domestic discretionary spending programs. Of the $217.9 billion authorized for surface transportation programs over the 6-year life of TEA-21, about $198 billion is protected by the budgetary firewall—about $162 billion for highway programs and $36 billion for transit programs. Under TEA-21, the amount of highway program funds distributed to the states is tied to the amount of actual tax receipts credited to the Highway Account of the Highway Trust Fund. TEA-21 guaranteed specific levels of funding for highway programs from fiscal year 1999 through fiscal year 2003, on the basis of projected receipts of the Highway Account. TEA-21 also provided that beginning in fiscal year 2000, this guaranteed funding level for each fiscal year would be adjusted upward or downward through the RABA calculation as the levels of Highway Account receipts increased or decreased. To determine the RABA adjustment, the Office of Management and Budget and the Office of the Secretary in the Department of Transportation rely on information on Highway Account receipts and revised Highway Account projections supplied by Treasury. 
Specifically, the Bureau of Public Debt provides the actual Highway Account receipts for the prior fiscal year, and the Office of Tax Analysis (OTA) provides a projection of Highway Account receipts for the next fiscal year. On the basis of the information we reviewed, the fiscal year 2003 RABA calculation—a negative $4.369 billion—appears reasonable. The RABA adjustment for fiscal year 2003 was calculated by (1) comparing the actual Highway Account receipts for fiscal year 2001 to the projections of receipts for fiscal year 2001 included in TEA-21, as adjusted by the RABA calculation made for that year (the look back portion of the calculation), and (2) comparing projections of Highway Account receipts for fiscal year 2003 with the projection of these receipts contained in TEA-21 (the look ahead portion of the calculation). The sum of these differences is the RABA adjustment. Table 2 shows the RABA calculations for fiscal years 2000 through 2003. As shown, the RABA adjustments for fiscal year 2000 through fiscal year 2002 were positive—increasing highway funding levels by a total of over $9 billion. However, the RABA adjustment for fiscal year 2003 is negative $4.369 billion. Eighty percent of the fiscal year 2003 RABA adjustment is attributable to the look back portion of the calculation. The actual fiscal year 2001 Highway Account receipts were about $1.6 billion lower than projections in TEA-21. According to Treasury, actual fiscal year 2001 receipts were lower than expected due to the slowdown in the economy, which especially affected heavy truck sales, and increased gasohol use. We reviewed the amounts distributed to the Highway Trust Fund for the first 9 months of fiscal year 2001, and concluded that these amounts were reasonable and adequately supported on the basis of available information. With respect to the look ahead portion of the calculation, we reviewed Treasury’s process for projecting Highway Account revenues. 
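The two-part RABA formula just described reduces to simple arithmetic. In the sketch below, the dollar figures are hypothetical placeholders (in billions), not the actual fiscal year 2003 inputs, and the sign conventions are simplified.

```python
# Minimal sketch of the RABA adjustment described above. All dollar figures
# in the usage line are hypothetical placeholders (in $ billions), not the
# actual FY 2003 inputs.

def raba_adjustment(actual_prior_receipts, tea21_prior_projection,
                    prior_raba_correction, current_projection,
                    tea21_current_projection):
    # Look back: actual receipts vs. TEA-21's projection as adjusted by
    # the RABA calculation already made for that year.
    look_back = actual_prior_receipts - (tea21_prior_projection +
                                         prior_raba_correction)
    # Look ahead: the latest projection vs. TEA-21's original projection.
    look_ahead = current_projection - tea21_current_projection
    return look_back + look_ahead

# Receipts that came in below projections on both ends of the calculation
# produce a negative adjustment:
print(raba_adjustment(26.0, 27.6, 1.9, 29.1, 30.0))
```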
Although we did not independently evaluate the methodology and the economic models Treasury used to develop its revenue projections, our review of a qualitative description of the process, key inputs, and changes to the models gave us no reason to question the resulting projections. The Secretary of the Treasury transfers applicable excise tax receipts, including receipts from gasoline and other highway taxes, from the General Fund to the excise tax related trust funds, including the Highway Trust Fund, on a monthly basis. These transfers are based on estimates because actual data on which to base the allocations are not available when the deposits are initially made. OTA prepares these estimates on the basis of historical IRS certification data and actual excise tax revenue collections. Subsequently, IRS certifies the actual excise tax revenue collections that should have been distributed to the trust funds on the basis of tax returns and payment data. Using the IRS certifications, Treasury makes quarterly adjustments to the initial trust fund distributions. For example, in March 2001, Treasury made an adjustment to decrease the fiscal year 2001 excise tax revenue distributions to the Highway Trust Fund to correct for actual collections in the fourth quarter of fiscal year 2000. The certified fourth quarter receipts were $1.2 billion less than the amount initially distributed on the basis of OTA’s estimates for that quarter. According to an OTA official, OTA had calculated the original estimated transfer amounts for the quarter using an economic model that assumed a higher rate of economic growth through calendar year 2000 than was actually the case. As a result, the downward adjustment was made, reducing the fiscal year 2001 distributions to the Highway Trust Fund by $1.2 billion, which contributed to the fiscal year 2003 negative RABA adjustment. Our past reports have identified errors and problems with Treasury’s excise tax allocation process. 
However, Treasury has made and continues to make improvements to this process. On February 11, 2002, we issued a report on the results of procedures we performed related to the distributions of excise tax revenue to the Highway Trust Fund in fiscal year 2001. On the basis of this work, we believe the amounts distributed to the Highway Trust Fund for the first 9 months of fiscal year 2001, which were subject to IRS’ quarterly excise tax certification process and which were adjusted on the basis of this process, were reasonable and were adequately supported according to available information. Additionally, we believe the March 2001 adjustment made by Treasury to reduce fiscal year 2001 excise tax distributions to the Highway Trust Fund by $1.2 billion was reasonable and adequately supported. IRS expects to deliver the results of its certifications for distributions of excise tax revenue collected during the period July 1, 2001, through September 30, 2001 to Treasury’s Financial Management Service by March 20, 2002. Consequently, the distributions of fourth quarter fiscal year 2001 excise tax revenue were based solely on estimates prepared by OTA. We did not draw any conclusions about the reasonableness of the distributions made to the Highway Trust Fund for the fourth quarter of fiscal year 2001. One component of the look back portion of the RABA calculation is the comparison of actual fiscal year 2001 Highway Account receipts with projections of those receipts in TEA-21. The actual receipts were about $1.6 billion lower than the amounts contained in TEA-21. According to Treasury, the lower than expected highway excise tax receipts in fiscal year 2001 were due to several factors. Most importantly, the weakened economy contributed to a decline in highway excise taxes paid. All but one of the Highway Trust Fund receipt sources were lower in fiscal year 2001 than 2000. 
For example, tax revenue from the retail tax on trucks dropped 55 percent from fiscal year 2000 to fiscal year 2001. It is important to note that the tax is applied to the sale of new trucks only. As the economy weakened, large numbers of used trucks were placed on the market, which depressed prices and sales in the new heavy truck market. In addition to the economic downturn, the rise in the use of gasohol contributed to decreased Highway Account receipts. The amount of gasohol receipts allocated to the Highway Account rose by 17.5 percent between fiscal years 2000 and 2001, which Treasury believes is evidence of an ongoing substitution of gasohol fuels for gasoline. Because gasohol is taxed at a lower rate than gasoline and a portion of the tax on gasohol is transferred to the General Fund, increases in gasohol use and corresponding reductions in gasoline use decrease Highway Account revenues. While not the main factor, the look ahead portion of the RABA calculation also contributed to the overall negative RABA adjustment. As discussed earlier, the look ahead is the difference between TEA-21’s projections for the next fiscal year and current projections from the president’s budget, which are prepared by Treasury. Based on the general qualitative description Treasury provided us about its methodology and economic models used to develop Highway Trust Fund revenue projections, we have no reason to question the projections for fiscal year 2003. Treasury generally performs two forecasting exercises each year, including one for the president’s budget. Treasury uses seven econometric models to forecast each highway excise tax revenue source, such as the tax on gasoline. These models seek to approximate the relationship between historical tax liability and current macroeconomic variables, such as the gross domestic product. 
This estimated relationship is the baseline, and Treasury uses it to project future excise tax liability, given current law and the administration’s economic assumptions. After calculating future tax liability, Treasury forecasters convert the tax liability forecast to a tax receipts forecast using information on deposit rules, payment patterns, and actual collections. The administration’s economic assumptions drive the projections made with each model. According to Treasury, receipts forecasting is a policy exercise conducted for the president to show the state of the Highway Trust Fund if the administration’s economic assumptions were to come to fruition. Consequently, Treasury’s forecasts incorporate economic assumptions formulated for the budget by the “Troika,” which consists of the Council of Economic Advisors, the Office of Management and Budget, and Treasury. Because the goal is to provide a forecast consistent with these economic assumptions, the models use these assumptions directly as explanatory variables, or link other explanatory variables to the assumptions provided. Several of the administration’s economic assumptions are publicly available, such as the gross domestic product and consumer price index. However, most Troika assumptions are not publicly available. Other variables specific to the Highway Trust Fund are included in the economic models. Treasury generally obtains this information from other federal agencies. For example, Treasury incorporates USDA’s forecast of ethanol use in its gasohol model. However, according to Treasury, the forecasters must ensure that the addition of these other variables does not create inconsistencies between the projections and the administration’s assumptions. It should also be noted that Treasury does not try to predict future regulatory or legislative changes at the federal or state levels that could affect Highway Trust Fund revenue but bases its projections on current law. 
Any legislative or regulatory changes that affect Highway Trust Fund revenue will affect the accuracy of the forecasts. Treasury continuously updates its models to incorporate legislative, economic, and other relevant changes—which are then reflected in the next forecasting exercise. According to Treasury officials, Treasury’s modeling framework for projecting highway excise tax receipts has not changed in recent years. Treasury’s framework consists of a series of econometric models that approximate the relationship between historical tax liability and current macroeconomic variables, which are then used to project future tax liability given current law and certain economic assumptions. Although the overall framework has remained consistent, Treasury officials noted that the specific economic models used to project receipts are continuously evolving to reflect current circumstances. For example, the models are constantly updated to incorporate the most current information on tax collections and reported tax liabilities, as well as enacted legislation. In addition to these routine changes, the models have occasionally undergone other modifications. Treasury identified 15 major changes to the models since 1998. These changes ranged from moving the highway-type tire tax from an annual model to a quarterly model to revising the ethanol forecast in the gasohol model to reflect the phasing out of methyl tertiary-butyl ether (MTBE) in certain states. According to Treasury, the identified changes were designed to improve the models’ forecasting ability. Although Treasury does not use an independent reviewer to validate the models, Treasury officials noted several ways they validate them. First, the Director of Treasury’s Office of Tax Analysis reviews the results of the model for accuracy and soundness at least twice a year. Second, Treasury officials compare the projected receipts with actual receipts to assess the validity of the models. 
In comparing the projected and actual receipts, Treasury forecasters try to determine the cause of any substantial differences and make changes to the model, as appropriate. Third, trust fund agencies, such as FHWA, receive the forecasts semiannually and may offer comments to Treasury on the projections. In order to help determine the reasonableness of Treasury’s projection, we compared it with CBO’s forecasts. This comparison does not raise any questions about the reasonableness of Treasury’s projections. For example, despite different methodologies and assumptions, Treasury and CBO projections of Highway Account receipts for the budget window are very similar. (See fig. 1.) Both agencies forecast steady growth in receipts from fiscal years 2002 through 2012. For example, both Treasury and CBO project the average annual growth of highway-related excise taxes will be about 3 percent. In January 2002, the administration announced that the fiscal year 2003 RABA adjustment would be a negative $4.965 billion. The administration subsequently announced that an error had been made in calculating the RABA adjustment and that the correct amount was a negative $4.369 billion—a $600 million difference. The error, which was made in Treasury’s allocation of projected highway tax revenues to various accounts rather than in its economic models, affected the look ahead part of the fiscal year 2003 RABA calculation. Specifically, it occurred in Treasury’s allocation of projected revenues from gasohol sales to the General Fund, the Leaking Underground Storage Tank Trust Fund, and the Highway and Transit Accounts within the Highway Trust Fund. In short, the error resulted in the incorrect distribution of projected gasohol receipts among the funds. Because gasohol has six different blends—all with different tax rates and distributions—the gasohol allocations are complicated and require many “links” among several spreadsheets. 
With respect to gasohol, the Highway Account receipts are calculated after allocations for the other accounts—the Mass Transit Account, the Leaking Underground Storage Tank Trust Fund, and the General Fund—have been calculated. This is because the Highway Account is a “catch-all” for taxes not already attributed to other accounts. A misalignment occurred between the different spreadsheets used to distribute gasohol tax revenues to the different accounts, which caused too much of the gasohol revenues to be transferred to the General Fund. Therefore, the error incorrectly lowered projected Highway Account revenue beginning with fiscal year 2002. According to a Treasury official, a number of factors contributed to the error, including tightened time constraints during this budget cycle for Treasury forecasters to calculate and review their projections for the fiscal year 2003 budget. Each forecaster is responsible for reviewing his/her own calculations. In hindsight, however, this official said that the internal quality checks his office made were insufficient, especially on the gasohol calculations, which are very complex. He noted that Treasury plans to take several steps to avoid such an error in the future, including requiring another Treasury forecaster to spot check the projections. The use of gasohol instead of gasoline affects the amount of Highway Account revenue for two reasons. First, gasohol is partially exempt from the standard gasoline excise tax. Second, 2.5 cents of the tax received on each gallon of gasohol sold is transferred to the General Fund. (See fig. 2.) Based on our ongoing work, our preliminary estimates show that the partial tax exemption resulted in $3.86 billion in revenue forgone by the Highway Account during fiscal years 1998 through 2001. We also estimate that the General Fund transfer caused a reduction of $2.15 billion in Highway Account revenue during the same period. 
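The "catch-all" allocation described above can be illustrated in miniature. The 13.1-cent total for the 10-percent blend and the 2.5-cent General Fund transfer appear in the text; the Mass Transit and Leaking Underground Storage Tank (LUST) shares below are placeholders, not the statutory rates.

```python
# Miniature version of the "catch-all" allocation described above: the
# Highway Account receives whatever remains of the per-gallon tax after the
# other accounts' shares are taken out. The 13.1-cent total and 2.5-cent
# General Fund transfer are from the text; the transit and LUST shares here
# are placeholders, not the statutory rates.

def highway_account_cents(total, transit, lust, general_fund):
    return round(total - transit - lust - general_fund, 2)

share = highway_account_cents(total=13.1, transit=2.86, lust=0.1,
                              general_fund=2.5)
print(share)

# Because the Highway Account is the residual, a spreadsheet misalignment
# that overstates another account's share (as happened with the General
# Fund) understates the Highway Account by exactly the same amount.
```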
Treasury projects that gasohol use will continue to rise steadily through fiscal year 2012. According to Treasury, such an increase will occur at the expense of gasoline as some states ban the use of MTBE as an oxygenate additive. Using Treasury’s highway excise tax revenue projections, we estimate that the partial tax exemption will lower Highway Account revenue by a total of $13.72 billion from fiscal years 2002 through 2012. (See fig. 3.) We also estimate that the Highway Account will not receive $2.36 billion due to the General Fund transfer from fiscal years 2002 through 2005, when the transfer ends. In addition, if the amount of the transfer is not dedicated to the Highway Account following fiscal year 2005, we project that the Highway Account will forgo $4.56 billion from fiscal years 2006 through 2012. State or federal legislation or regulations that result in gasohol use above what is currently projected, such as a nationwide ban on MTBE, would increase the negative impact on the Highway Account absent other changes. According to USDA and ethanol industry officials, the partial tax exemption for gasohol is intended to create a demand for ethanol that will raise the price of ethanol at least to the point where producers can cover costs. These officials stated that if the partial tax exemption on ethanol was removed, the price of ethanol would no longer be competitive with gasoline and the demand would disappear. In this case, ethanol fuel production would, for the most part, not continue. Furthermore, ethanol industry officials we talked to warned that because a substantial amount of the corn grown in the United States is used for ethanol, the collapse of the ethanol industry would affect the corn and agriculture markets, which could in turn affect the federal government’s agricultural support payments. 
As the Congress considers the reauthorization of surface transportation programs, there are several ways it could restructure the RABA adjustment to reduce fluctuations in highway funding. Furthermore, industry officials have identified a number of possible ways to increase Highway Trust Fund revenues. Ultimately, the Congress and the administration must weigh the advantages and disadvantages of changing the RABA adjustment and/or Highway Trust Fund revenue streams. The discussion that follows is not intended to show support for any possible alternatives but instead to describe some of the ways highway funding could be increased. The RABA formula as defined by TEA-21 contains look back and look ahead components that tend to accentuate the impact of any shifts in Highway Account receipts. For example, the recent downturn in the economy is reflected in several elements of the fiscal year 2003 RABA calculation. First, the actual receipts for fiscal year 2001 were lower than expected. Second, the downturn caused a need to correct for optimistic projections of fiscal year 2001 receipts made in December 1999. Third, the fiscal year 2003 projections are lower than those contained in TEA-21 because the updated projections reflect the current economic conditions. There are several changes that could be made to reduce the potential for dramatic swings in funding for highway programs but maintain a tie to actual receipts credited to the Highway Account. For example, changes to the RABA adjustment that could smooth out the impact of significant funding changes would include (1) eliminating the look ahead part of the RABA calculation, (2) averaging the look back part of the calculation over 2 years, and (3) distributing the RABA adjustments over 2 years. In figure 4, we show the actual RABA adjustments under the current structure and the adjustments that would have been made using these three options from fiscal years 2000 through 2003. 
As shown, the three options appear to produce less dramatic shifts in funding than the current RABA mechanism over the past four years. However, we did not analyze how these options would perform against different trust fund scenarios or economic cycles in the future. Industry groups have proposed various ways to increase Highway Trust Fund revenue, such as crediting the Highway Trust Fund for the interest earned on its balances, increasing the use of tolls, and/or establishing an indexing system to help ensure that gas tax revenues are linked to inflation. Although each of these actions would increase Highway Trust Fund revenues, we have not evaluated their fiscal or public policy implications. Another way to enhance Highway Trust Fund revenues would be to increase highway excise taxes. Although no tax increase is attractive, there are some equity arguments that support an increase in certain highway user taxes. For example, for some time FHWA has reported that heavy trucks (trucks weighing over 55,000 pounds) cause a disproportionate amount of damage to the nation’s highways and have not paid a corresponding share for the cost of the pavement damage they cause. Currently, heavy vehicles are taxed at the rate of $100 per year plus $22 for every 1,000 pounds (or fraction thereof) they weigh over 55,000 pounds. However, the tax is capped at $550. In 2000, we reported that the Joint Committee on Taxation estimated that raising the ceiling on this fee to $1,900 could generate about $100 million per year. Mr. Chairman, this concludes my prepared remarks. I would be pleased to answer any questions you or other members of the Subcommittee may have. For questions regarding this testimony please contact JayEtta Z. Hecker on (202) 512-2834 or at [email protected]. Individuals making key contributions to this testimony included Nikki Clowers, Helen Desaulniers, Mehrzad Nadji, Stephen Rossman, Ron Stouffer, and James Wozny. 
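The heavy vehicle use tax schedule described in the testimony ($100 per year plus $22 for every 1,000 pounds, or fraction thereof, over 55,000 pounds, capped at $550) can be sketched as follows. Treating vehicles at or below the 55,000-pound threshold as owing nothing is an assumption; the testimony does not address them.

```python
import math

# Sketch of the heavy vehicle use tax described in the testimony: $100 per
# year plus $22 for every 1,000 pounds (or fraction thereof) over 55,000,
# capped at $550. Treating vehicles at or below 55,000 pounds as owing
# nothing is an assumption not stated in the testimony.

def heavy_vehicle_tax(weight_lbs: int, cap: int = 550) -> int:
    if weight_lbs <= 55_000:
        return 0
    excess_thousands = math.ceil((weight_lbs - 55_000) / 1000)
    return min(100 + 22 * excess_thousands, cap)

print(heavy_vehicle_tax(60_000))             # → 210
print(heavy_vehicle_tax(80_000))             # → 550, the cap
# Raising the ceiling (e.g., to the $1,900 discussed above) only changes
# the second argument:
print(heavy_vehicle_tax(80_000, cap=1_900))  # → 650
```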
To determine the reasonableness of the Revenue Aligned Budget Authority (RABA) calculation, we relied in part on previous work done by GAO under an agreement with the Department of Transportation’s Inspector General, which resulted in a February 2002 report: Applying Agreed-Upon Procedures: Highway Trust Fund Excise Taxes (GAO-02-379R). Under that agreement we (1) performed detailed tests of transactions that represent the underlying basis of the amounts distributed to the Highway Trust Fund, (2) reviewed the Internal Revenue Service’s quarterly certifications of these amounts, and (3) reviewed the Office of Tax Analysis’ process for estimating amounts distributed to the Highway Trust Fund in the fourth quarter of fiscal year 2001. We also interviewed knowledgeable Department of Treasury, Office of Management and Budget, and Department of Transportation officials who provided documentation and described the processes used to develop the calculation. We obtained from the Treasury’s Office of Tax Analysis (OTA) a general description of its economic models, including key inputs and changes made to the models since 1998, which are used to estimate future Highway Trust Fund revenues. Additionally, we reviewed related OTA internal analyses and reports. However, we did not evaluate or certify Treasury’s economic models that forecast future Highway Trust Fund revenues. We met with Congressional Budget Office (CBO) officials who described their process for projecting Highway Trust Fund revenues. CBO officials also provided their Highway Trust Fund revenue forecast, which we compared to Treasury’s projections. To determine how the $600 million error in the initial RABA adjustment was made, we interviewed Treasury and DOT officials. We also reviewed Treasury’s workpapers to determine the source and cause of the error.

The Highway Trust Fund "guarantees" specific annual funding levels for most highway programs on the basis of projected receipts to the fund. 
It also makes annual adjustments to these funding levels on the basis of actual receipts and revised projections of trust fund revenue. These adjustments are called the Revenue Aligned Budget Authority (RABA). GAO concludes that the fiscal year 2003 RABA calculation appears reasonable. Although the RABA adjustment is clearly severe, it reflects the many ways in which an economic downturn affects the calculation. In late January 2002, the administration announced that the fiscal year 2003 RABA adjustment would be a negative $4.965 billion. Within a few days of the announcement, the administration reported that an error had been made and the correct amount was a negative $4.369 billion--a $600 million difference. Treasury is taking steps to improve its internal controls in order to prevent this type of error from recurring. The use of ethanol-blended fuel instead of gasoline reduces Highway Trust Fund revenue because it is partially exempt from the standard excise tax on gasoline and 2.5 cents of the tax received on each gallon of gasohol sold is transferred to the General Fund. Gasohol use is projected to rise and the impact of these tax provisions will grow as well. The RABA adjustment could be changed in several ways to help reduce fluctuations in highway funding. However, Congress and the administration must weigh the advantages and disadvantages of these and other ways to stabilize highway funding and increase Highway Trust Fund revenues.
CMS, a component of the Department of Health and Human Services (HHS), administers the Medicaid program. Medicaid is the third largest social program in the federal budget and is one of the largest components of state budgets. Although it is one federal program, Medicaid consists of 56 distinct state-level programs–one for each state, territory, Puerto Rico, and the District of Columbia. Each of the states has a designated Medicaid agency that administers the Medicaid program. The federal government matches state Medicaid spending for medical assistance according to a formula based on each state’s per capita income. The federal share can range from 50 to 83 cents of every state dollar spent. In accordance with the Medicaid statute and within broad federal guidelines, each state establishes its own eligibility standards; determines the type, amount, duration, and scope of covered services; sets payment rates; and develops its administrative structure. Each state Medicaid agency is also responsible for establishing and maintaining an adequate internal control structure to ensure that the Medicaid program is managed with integrity and in compliance with applicable law. States are required to describe the nature and scope of their programs in comprehensive written plans submitted to CMS–with federal funding for state Medicaid services contingent on CMS approval of the plans. This approval hinges on whether CMS determines that state Medicaid plans meet all applicable federal laws and regulations. At the federal level, the Center for Medicaid and State Operations (CMSO) within CMS is responsible for approving state Medicaid plans, working with the states on program integrity and other program administration functions, and overseeing state financial management and internal control processes. CMSO shares Medicaid program administration and financial management responsibilities with the 10 CMS regional offices (RO). 
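The per capita income formula mentioned above is not spelled out in the text. The sketch below uses the standard statutory FMAP formula (federal share = 1 minus 0.45 times the square of the ratio of state to national per capita income, bounded between 50 and 83 percent); the income figures are hypothetical.

```python
# Sketch of the federal matching formula mentioned above. The text gives
# only the 50-to-83-cent range; the squared-income-ratio form below is the
# standard statutory FMAP formula, and the income figures are hypothetical.

def fmap(state_pci: float, national_pci: float) -> float:
    share = 1 - 0.45 * (state_pci / national_pci) ** 2
    return min(max(share, 0.50), 0.83)  # statutory floor and ceiling

print(round(fmap(30_000, 30_000), 2))  # average-income state → 0.55
print(round(fmap(20_000, 30_000), 2))  # lower-income state → 0.8
print(round(fmap(45_000, 30_000), 2))  # high-income state hits the floor → 0.5
```

The squared ratio makes the match fall off quickly as a state's income rises above the national average, while the floor guarantees that even the highest-income states receive at least 50 cents per state dollar.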
The Division of Financial Management (DFM), within CMSO’s Finance, Systems and Quality Group, has primary responsibility for Medicaid financial management. DFM, in conjunction with the 10 regions, establishes and maintains the internal control structure for Medicaid financial management and state oversight. As is the case for all major federal agency programs, the internal control structure established by CMS for Medicaid should meet requirements of Office of Management and Budget (OMB) Circular A-123, Management Accountability and Control, and the Standards for Internal Control in the Federal Government. According to Circular A-123, management controls are the organization policies and procedures used to reasonably ensure that programs are protected from waste, fraud, and mismanagement and achieve their intended results. Establishing good management controls requires, according to the circular, that agency managers take systematic and proactive measures to implement appropriate management controls, assess the adequacy of the controls, identify needed improvements, and take corresponding corrective action. The Standards for Internal Control in the Federal Government includes five standards that provide a roadmap for agencies to establish control for all aspects of their operations and a basis against which agencies’ control structures can be evaluated. The standards are defined as follows: Control environment—creating a culture of accountability by establishing a positive and supportive attitude toward improvement and the achievement of established program outcomes. Risk assessment—performing comprehensive reviews and analyses of program operations to determine if risks exist and the nature and extent of the risks identified. Control activities—taking actions to address identified risk areas and help ensure that management’s decisions and plans are carried out and program objectives are met. 
Information and communication—using and sharing relevant, reliable, and timely financial and nonfinancial information in managing operations. Monitoring—tracking improvement initiatives over time, and identifying additional actions needed to further improve program efficiency and effectiveness. The internal control structure and financial oversight process that CMS has designed for Medicaid includes activities for (1) approving and awarding grants to make funds available to the states for the efficient operation of the Medicaid program, (2) overseeing state financial management and internal control processes, (3) ensuring the reasonableness of budgets reported to estimate federal funding requirements, and (4) ensuring the propriety of expenditures reported for federal matching funds. DFM shares these responsibilities with about 76 regional financial analysts and branch chiefs, who report to their respective regional administrators. Figure 1 outlines CMS’s organizational structure related to Medicaid. Regional financial analysts are key to CMS financial management activities, as they are responsible for performing frontline activities to oversee state financial management and internal control processes. Some of the key oversight activities performed by regional analysts are (1) reviewing state quarterly budget estimates and expenditure reports, (2) preparing decision reports that document approvals for federal reimbursement and reimbursement deferral actions, (3) providing technical assistance to states on financial matters, and (4) serving as liaison to the states and audit entities. DFM staff in headquarters rely on regional decision reports to help determine and issue state grant awards. States submit various federal reporting forms that provide regional financial analysts with the budget and expenditure data to execute their financial management and oversight responsibilities. 
Since the State Children's Health Insurance Program (SCHIP) was created through the Balanced Budget Act of 1997 to provide health insurance to children of low-income families who would not qualify for Medicaid, states have been required to submit expenditure and budget data on both Medicaid and SCHIP. The Medicaid and SCHIP forms are submitted quarterly through the Medicaid and SCHIP Budget and Expenditure System (MBES). See table 1 below for a brief description of the contents of the reporting forms. Reviews of the Medicaid and SCHIP expenditure reports (CMS 64 and 21) are the primary oversight control activities performed by regional financial analysts. These reviews are used to determine if Medicaid expenditures are complete, properly supported by the state's accounting records, claimed at appropriate federal matching rates, and allowable in accordance with existing federal laws and regulations. Regional analysts are expected to obtain knowledge about state financial management and internal control processes to aid in assessing the expenditures reported for federal reimbursement. Figure 2 shows an overview of the financial management and oversight process. Oversight of state expenditures and internal controls by CMS regional financial analysts is not the only federal oversight mechanism for ensuring the propriety of Medicaid finances. Medicaid expenditures and requisite internal controls are reviewed annually by auditors under requirements of the Single Audit Act of 1984. The Congress established the Single Audit Act to provide reasonable assurance that federal financial assistance programs are managed in accordance with applicable laws and regulations. The Single Audit Act requires audits of state and local government entities that expend at least $300,000 in federal awards annually. The results of these audits are provided to the state and responsible federal agency.
The federal agency is responsible for following up with the state to ensure that the state takes action to correct the deficiencies identified from the audit. Other entities have responsibilities for routinely reviewing Medicaid finances and Medicaid internal controls. Table 2 explains various oversight activities by entities outside of CMS. Our objectives were to determine if (1) CMS has an adequate oversight process to help ensure the propriety of Medicaid expenditures, (2) CMS adequately evaluates and monitors the results of its oversight process and makes adjustments as warranted, and (3) the current CMS organizational structure for financial management is conducive to effectively directing its oversight process and sustaining future improvements. To evaluate CMS financial oversight, the control activities used to help ensure the propriety of Medicaid expenditures, and CMS's efforts to monitor its financial oversight, we performed work at CMS regional and headquarters offices, surveyed financial management staff, and reviewed CMS manuals and other documentation, as well as audit reports. As agreed with your offices, we visited 5 of the 10 CMS regional offices (Atlanta, Boston, Chicago, New York, and San Francisco) to observe and interview the financial management staff. We selected the five regions based on geographical dispersion across the country and the total amount of Medicaid expenditures processed by each region. The five regions were collectively responsible for overseeing more than half of the total Medicaid expenditures for fiscal year 2000. With regional staff, we discussed recent program changes that significantly increased financial management oversight activities for regional analysts. We questioned staff about the extent to which certain activities, such as focused financial management reviews, were conducted and reviewed any reports and corresponding workpapers that were available.
Key CMS financial managers at headquarters in Baltimore were also interviewed to gain a comprehensive understanding of overall financial management objectives for the Medicaid program. We also discussed performance and budget reporting as well as efforts to coordinate with state auditors. We administered a Web-based survey to regional financial management staff to gain a better understanding of the control activities being performed by regional offices. The survey was sent to all regional office branch chiefs and staff classified as financial analysts who are responsible for overseeing state financial management and internal controls for Medicaid. All of the 11 branch chiefs responded and 59 of the 65 analysts responded, for a 92 percent response rate—the 6 analysts who did not respond were from one regional office. The survey obtained information on how oversight for Medicaid financial management is designed and implemented, as well as the frontline staff perspective on effectiveness. Survey respondents answered questions relating to review procedures performed, use of state single audits, follow-up of audit findings, and communications with state auditors and offices of inspectors general. Many of the questions asked the analysts to respond based on their performance of activities for the period from October 1, 1999, through the date of the survey. The practical difficulties in conducting any survey can introduce errors, commonly referred to as nonsampling errors. We included steps in both the data collection and data analysis to minimize such nonsampling errors. Multiple versions of the questionnaire were pretested with regional financial analysts before the final survey was administered. Respondents entered their answers directly into the database via the Internet survey, data checks were performed, and a second independent analyst reviewed the computer analyses.
We obtained and reviewed CMS documents and manuals that described current financial oversight activities and performance reporting previously used to monitor oversight. We reviewed audit reports that included findings related to Medicaid financial management, including the CMS/HCFA financial reports for fiscal years 1998 through 2000 and Single Audit Act reports for fiscal years 1999 and 2000. To help judge the adequacy of CMS's Medicaid financial management oversight process, we evaluated CMS oversight against the Comptroller General's Standards for Internal Control in the Federal Government. We also consulted with state auditors during our regional site visits to obtain an understanding of their oversight activities for the Medicaid program, including the level of audit coverage given to Medicaid financial operations and the control techniques used. To determine whether CMS's organizational structure for financial management is conducive to effectively directing its oversight process and sustaining future improvements, we interviewed the director and deputy director of the CMSO Finance, Systems and Quality Group as well as managers within the Division of Financial Management. We also conducted interviews with managers at the five regional offices. In addition, we compared information that we gathered about the current organizational structure, regional and central office communications, and improvement initiatives with the standards for the control environment and information and communication components of internal control as described in the Standards for Internal Control in the Federal Government. We performed our fieldwork from October 2000 through September 2001, at the CMS central office in Baltimore, Md., and the five regional offices mentioned above. We focused on the internal control processes in place during fiscal years 2000 and 2001. All work was performed in accordance with generally accepted government auditing standards.
We requested written comments on a draft of this report from the administrator of CMS. These comments are reprinted in appendix I. We also received supplementary oral comments from the Director of the CMS Division of Audit Liaison. Although CMS is responsible for ensuring the propriety of over $100 billion expended annually by the federal government for Medicaid, its financial oversight process did not incorporate key standards for internal control necessary to reduce the risk of inappropriate expenditures. The Comptroller General's Standards for Internal Control in the Federal Government requires that agency managers perform risk assessments and then take actions to mitigate identified risks that could impede achievement of agency objectives. However, until recently, the oversight process that CMS used for Medicaid expenditures did not include assessments that identified the areas of greatest risk of improper payments. Therefore, CMS did not have the requisite assurance that its control activities were focused on areas of greatest risk. In addition, the controls that were in place were not effectively implemented. As a result, CMS was not deploying its limited oversight resources efficiently and effectively to detect improper expenditures. CMS managers recognized the deficiencies of its oversight and began efforts in April 2001 to develop a risk-based approach and revise control activities. However, these efforts did not specifically consider information on state financial oversight and program integrity activities, such as pre- and postpayment detection methods, payment accuracy studies, and initiatives to prevent fraud and abuse, nor did they consider advanced control techniques for detecting improper Medicaid payments. Federal internal control standards require managers to perform risk assessments to identify areas at greatest risk of fraud, waste, abuse, and mismanagement.
The standards require that once risks are identified, they should be analyzed for their possible effect by estimating their significance and assessing the likelihood of losses due to the risks identified. Despite repeated auditor recommendations, CMS had not developed and implemented a systematic risk assessment method in its oversight process to help ensure that states expend federal funds in accordance with laws and to identify amounts inappropriately claimed for federal reimbursement. In April 2001, CMS took action to develop a risk assessment; however, this analysis has not yet been used to deploy resources to areas of greatest risk and requires several improvements to enhance its usefulness in the oversight process. Since 1998, financial auditors responsible for the annual financial statement audit of Medicaid expenditures have recommended that CMS implement a risk-based approach for overseeing state internal control processes and reviewing expenditures. In performing audits of CMS’s financial statements for fiscal years 1998, 1999, and 2000, auditors have noted that CMS failed to institute an oversight process that effectively reduced the risk that inappropriate expenditures could be claimed and paid. In addition, the auditors identified internal control weaknesses that increased the risk of improper payments. These weaknesses included (1) a significant reduction in the level of detailed analysis performed by regional financial analysts in reviewing state Medicaid expenses, (2) minimal review of state Medicaid financial information systems, and (3) lack of a methodology for estimating the range of Medicaid improper payments on a national level. CMS Medicaid officials attributed most of the weaknesses identified by the auditors to reductions in staff resources and the multiple oversight activities that its staff is responsible for carrying out. 
According to Medicaid financial managers, changes in the Medicaid program since fiscal year 1998, specifically the addition of SCHIP, created additional oversight responsibilities for CMS financial management staff. Particularly, financial analysts are required to handle more state inquiries regarding technical financial issues that must be addressed promptly. At the same time, however, financial analyst resources previously devoted to oversight activities declined. Medicaid financial managers provided us with data to show that from fiscal year 1992 to September 2000, full-time equivalent (FTE) positions for regional financial staffs declined by 32 percent from 95 to approximately 65 FTEs. At the same time, federal Medicaid expenditures increased 74 percent from $69 billion to $120 billion. On average, each of the 64 regional financial analysts is now responsible for reviewing almost $1.9 billion in federal Medicaid expenditures each fiscal year as compared to an average of about $0.7 billion a decade ago. Figure 3 depicts the decrease in financial analysts (i.e., FTEs) and the increase in Medicaid expenditures between the years 1992 and 2000. Until recently, Medicaid financial managers had not taken action to implement a risk-based approach for Medicaid financial oversight. Managers stated that the Medicaid financial oversight process had been based on the presumption that financial analysts adequately applied the inherent knowledge of program risks acquired from years of experience in reviewing state Medicaid expenditures and providing technical assistance to states in operating their Medicaid programs. However, as Medicaid program expenditures have increased, CMS managers acknowledged that they needed to revise their oversight approach. As a result, during our review, CMS began in April 2001 to develop a risk-based approach for determining how best to deploy its resources in reviewing Medicaid expenditures. 
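The workload figures cited above can be checked with simple arithmetic. The short computation below is illustrative only; it reproduces the percentages and per-analyst averages from the FTE and expenditure totals stated in this report.

```python
# Figures cited above: regional financial analyst FTEs and federal
# Medicaid expenditures for fiscal years 1992 and 2000.
ftes_1992, ftes_2000 = 95, 65
spend_1992, spend_2000 = 69e9, 120e9  # dollars

fte_decline = (ftes_1992 - ftes_2000) / ftes_1992      # about 32 percent
spend_growth = (spend_2000 - spend_1992) / spend_1992  # about 74 percent

# Average annual expenditures per analyst, then and now.
per_analyst_then = spend_1992 / ftes_1992  # roughly $0.7 billion
per_analyst_now = spend_2000 / 64          # $1.875 billion, i.e., almost $1.9 billion

print(round(fte_decline * 100), round(spend_growth * 100))  # 32 74
```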
The Medicaid risk assessment effort required each regional office to provide data on the states and territories in its jurisdiction based on regional analyst experience and knowledge. For each type of Medicaid service and administrative expense, the Medicaid risk analysis estimates the likelihood of risk based on the dollar amount expended annually and measures the significance of risk based on factors such as unclear federal payment policy, state payment involving county and local government, and results of federal audits. The risk analysis provides a risk score for each state that is intended to specify the Medicaid service and administrative expense categories that are of greatest risk for improper payments in the state. Medicaid financial managers also tabulated a national risk score for each type of Medicaid service and administrative expense using the state risk scores. However, CMS had not taken steps to use the risk analysis in deploying its regional financial oversight resources. Medicaid financial managers in headquarters and the regional offices plan to develop work plans that will allocate resources based on the risks identified from the analysis. CMS expects to implement these work plans in reviewing states' quarterly expenditure reports for fiscal year 2003. In evaluating the Medicaid risk analysis, we considered strategies that leading organizations used in successfully implementing risk management processes. Two such strategies, which are included in our executive guide, Strategies to Manage Improper Payments, are as follows.

Information developed from risk assessments should help form the foundation or basis upon which management can determine the nature and type of corrective actions needed, and should give management baseline data for measuring progress in reducing payment inaccuracies and other errors.
Management should reassess risks on a recurring basis to evaluate the impact of changing conditions, both external and internal, on program operations.

While the Medicaid risk analysis is a good start, we identified several improvements that should be made to the assessment before it is used in deploying resources. The issues we identified could hinder the quality of baseline information gathered and, accordingly, affect management's ability to thoroughly reassess risks and measure the impact of corrective actions on a recurring basis. First, the analysis does not sufficiently take into account state financial oversight activities in assessing the risks for improper payments in each state. Regional financial analysts were instructed to rate the adequacy of each state Medicaid agency's financial oversight as one of the risk factors in determining the likelihood and significance of risk in each state. The analysts were instructed to consider whether a state regularly reviews claims submitted by local government entities that provide Medicaid services and whether state audits were conducted regularly. However, the analysts were not specifically instructed to consider states' use of (1) prepayment edits and reviews to help prevent improper payments, (2) screening procedures to prevent dishonest providers from entering the Medicaid program, (3) postpayment reviews to detect inappropriate payments after the fact, and (4) payment accuracy studies to measure the extent of improper payments. Several states have implemented cost-effective prevention efforts to protect Medicaid program dollars, such as prepayment computer "edits," manual reviews of claims before payment, and thoroughly checking the credentials of individuals applying to be program providers. Table 3 shows examples of prepayment reviews currently being used by some states.
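Prepayment edits of the kind described above are, at bottom, rules applied to each claim before payment. The sketch below is a minimal illustration of the idea; the record layout, rules, and provider identifiers are invented for this example and are not drawn from any state's actual system.

```python
# Illustrative prepayment edits: screen each claim before payment
# against simple rules. Fields and rules are assumptions for the sketch.

enrolled_providers = {"PRV-100", "PRV-200"}
paid_claims = set()  # keys of claims already approved for payment

def prepayment_edit(claim):
    """Return a list of reasons to deny the claim; empty means payable."""
    reasons = []
    key = (claim["patient"], claim["provider"], claim["service_date"],
           claim["procedure"])
    if claim["provider"] not in enrolled_providers:
        reasons.append("provider not enrolled")
    if key in paid_claims:
        reasons.append("duplicate of a paid claim")
    if claim["billed"] <= 0:
        reasons.append("nonpositive billed amount")
    if not reasons:
        paid_claims.add(key)  # record approved claims for duplicate checks
    return reasons

claims = [
    {"patient": "P1", "provider": "PRV-100", "service_date": "2001-03-01",
     "procedure": "office visit", "billed": 45.0},
    {"patient": "P1", "provider": "PRV-100", "service_date": "2001-03-01",
     "procedure": "office visit", "billed": 45.0},   # resubmission
    {"patient": "P2", "provider": "PRV-999", "service_date": "2001-03-02",
     "procedure": "x-ray", "billed": 80.0},          # unknown provider
]
results = [prepayment_edit(c) for c in claims]
print(results)  # → [[], ['duplicate of a paid claim'], ['provider not enrolled']]
```

In practice such edits run inside a state's claims processing system alongside many more rules (eligibility, procedure-code validity, pricing); the point of the sketch is only that improper claims can be stopped before, rather than recovered after, payment.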
Many states have also developed postpayment detection systems and payment accuracy studies to improve their ability to detect, investigate, and measure potential improper payments. Kentucky and Washington, for example, have hired private contractors to develop or use advanced computer systems to analyze claims payment data that identified several million dollars in overpayments. Table 4 describes these and other state postpayment efforts and related program savings. While regional financial analysts may know about many activities like these from performing their oversight responsibilities, analysts or staff in the Division of Financial Management did not collect and document information on the nature and results of each state’s financial oversight activities. Without such information being documented, CMS did not have a complete picture or profile of the level of risk for improper payments in each state and thus did not have comprehensive information to determine the appropriate level of federal oversight that should be applied. A second deficiency we found in the Medicaid risk analysis is that it did not specifically integrate information about state fraud and abuse prevention efforts in making risk assessments for each state. Regional financial analysts were instructed to report on the level of regional oversight of each state’s Medicaid finances as one of the risk factors in determining the likelihood and significance of risk in each state. Specifically, analysts were instructed to consider the last time the regional office or HHS/OIG conducted a review or audit. However, the analysts were not specifically instructed to consider results from reviews of state efforts to prevent fraud and abuse recently conducted under the CMS Medicaid Alliance for Program Safeguards. 
In 1997, CMS established the Medicaid Alliance for Program Safeguards, staffed with program analysts from the 10 CMS regions and staff within the Policy Coordination and Planning Group of the Center for Medicaid and State Operations. The initiative was started to aid states in their program integrity efforts. Since its inception, a fraud statute Web site has been established and seminars on innovations and obstacles in safeguarding Medicaid have been developed. In fiscal year 2000, regional staff conducted structured site reviews of program safeguards in eight states, and in fiscal year 2001 reviews were conducted in another eight states. CMS plans to perform reviews in additional states until all states are covered. These reviews examined how state Medicaid agencies identify and address potential fraud or abuse, whether state agencies are complying with appropriate laws and regulations—such as how they check to ensure that only qualified providers participate in the program—and potential areas for improvement. CMS could gain valuable information from these reviews to more accurately assess the level of risk for improper payments in these 16 states and the appropriate level of federal oversight required. A third deficiency we found is that the Medicaid risk analysis did not include mechanisms to ensure that such analysis would be conducted continuously in directing financial oversight. Agency managers should have methods in place to revisit risk analysis to determine where risks have decreased and where new risks have emerged, as identified risks are addressed and control activities are changed. As such, risk analysis should be iterative. Medicaid financial managers had not determined how they would continuously revise and update their Medicaid risk analysis. Finally, the Medicaid risk analysis would be strengthened if states were systematically estimating the level of improper payments in their programs.
Identifying the dollar amount of improper payments is a critical step in determining where the greatest problems exist and the most cost-beneficial approach to addressing the problems. CMS management has recognized this and has begun efforts to develop an approach for estimating improper Medicaid payments. In September 2001, nine states responded to a CMS solicitation to participate in pilot studies to develop payment accuracy measurement methodologies. The objective is to assess whether it is feasible to develop a single methodology that could be used by the diverse state Medicaid programs and to explore the feasibility of estimating the range of improper Medicaid payments on a national level. Each of the nine states involved is developing a different measurement methodology. CMS has assigned a senior Medicaid manager with responsibility for directing this effort. According to this manager, CMS has hired a consultant experienced in program integrity reviews to oversee the state pilots. CMS managers expect the states to complete the pilots during fiscal year 2003, after which time the consultant and the Medicaid manager plan to select several of the state methodologies as test cases for fiscal year 2004. It is important that CMS continue to place emphasis on development of these payment accuracy reviews on a state-by-state basis and ultimately on a national level, since this is a key baseline measure for managing improper payments in the Medicaid program. The Comptroller General's Standards for Internal Control in the Federal Government states that managers must establish adequate control activities to address identified risks and ensure that program objectives are met. Internal control activities are the policies, procedures, techniques, and mechanisms that help ensure that management's directives to mitigate risk are carried out. Control activities are an integral part of an organization's efforts to address risks that lead to fraud and error.
For the Medicaid program, both the states and federal government share responsibility for ensuring that adequate control activities are in place. The control activities that CMS had in place to oversee state internal controls and help ensure the propriety of Medicaid expenditures were not effectively implemented. Given the current level of resources and the size and complexity of the program, a different approach is needed that incorporates new oversight techniques and strategies, as well as the results of the risk assessment discussed previously. CMS regional financial analysts are tasked with performing multiple control activities designed to (1) oversee state financial management and internal control processes, (2) help ensure that states expend federal funds in accordance with laws, and (3) identify amounts inappropriately claimed for federal reimbursement. These activities include providing technical assistance to states on a variety of financial issues to help improve state accountability and help prevent payment inaccuracies as well as examining state expenditures to defer improperly supported payments and disallow those payments that do not comply with Medicaid regulations. Analysts also are responsible for following up on and resolving findings from audits related to improper or questionable payments and weaknesses in state internal controls. Table 5 summarizes the control activities that regional analysts are responsible for carrying out. As Medicaid expenditures have grown and resources devoted to Medicaid financial oversight have decreased, regional financial analysts have faced significant challenges in monitoring state internal controls, providing technical assistance, scrutinizing expenditures, and following up on audit findings for all state Medicaid programs. 
In an attempt to address these challenges, in 1994 regional offices began refocusing oversight activities from emphasizing detailed review of Medicaid expenditure data to increasing the level of technical assistance provided to states. However, auditors of CMS financial statements found that, as a result, regional offices were not providing appropriate review and oversight of state Medicaid programs. As mentioned previously, auditors have reported since 1998 that regional offices significantly reduced or inconsistently performed control activities to detect potential errors and irregularities in state expenditures, thus increasing the risk that errors and misappropriation could occur and go undetected. In our review, we found that these weaknesses were still present. In August 2001, we conducted a survey of regional financial analysts to obtain their perspectives on the design and implementation of the Medicaid financial oversight process, covering the period from October 1, 1999, through the date of the survey. In comments to the survey, some regional analysts indicated that they were inundated with responsibility for multiple control activities and unable to perform them effectively. Our survey asked the analysts to rate each of the control activities that they perform in terms of how important they believe the activity is in overseeing state Medicaid programs. The activity rated most important was quarterly expenditure reviews performed on-site at state Medicaid agencies; 89 percent rated the activity as having the "highest" or "high" level of importance—83 percent "highest" and 6 percent "high." However, when asked about the adequacy with which they performed on-site expenditure reviews, almost 36 percent rated the adequacy of their performance "inadequate" or "marginal"—13 percent inadequate and 23 percent marginal.
In discussions with regional financial analysts during our site visits and in comments to our survey, many financial analysts attributed deficiencies in quarterly reviews to inadequate staff resources, the low priority placed on financial management oversight, lack of training, and conflicting priorities. During our site visits we interviewed 11 regional financial analysts responsible for overseeing the five states that accounted for over $70 billion in Medicaid expenditures in fiscal year 2000. We reviewed these analysts’ workpapers related to their review of quarterly expenditure reports submitted for the quarter ended December 31, 2000. Workpapers prepared for three of the states to document their reviews did not contain sufficient evidence that expenditures had been traced to original documents. Instead, the analysts had checked information against summary schedules prepared by the states. Without proper documentation, there is little assurance that these reviews are being adequately performed. Survey respondents also rated activities to (1) defer and disallow Medicaid expenditures and (2) perform in-depth analysis of specific Medicaid costs where problems have been found (i.e., focused financial management reviews) as important in overseeing the propriety of Medicaid expenditures. Some 89 percent of analysts rated deferral and disallowance determinations as having “highest” or “high” level of importance and focused financial management reviews were rated by 77 percent as “highest” or “high.” Data provided by CMS indicate, however, that the amount of Medicaid expenditures disallowed by regional analysts has declined in years after 1996, when oversight emphasis shifted from detailed reviews, and so did the number of focused financial management reviews conducted each year. For example, from 1990 through 1993, analysts disallowed on average $239 million annually in expenditures reported by states for federal reimbursement. 
However, from fiscal years 1997 through 2000, analysts disallowed on average about $43 million annually, which represents an 82 percent decline from previous years. Also, during these periods, Medicaid expenditures went from an average of $58 billion annually to $106 billion annually—an increase of 83 percent. Similarly, focused financial management reviews have declined. Focused financial management reviews generally involve selecting a sample of paid claims for review related to certain types of Medicaid services provided. These reviews have been useful in identifying unallowable costs outside of those detected through the review of quarterly expenditure reports as well as deficiencies in states' financial management policies. According to CMS managers, in fiscal year 1992, analysts performed approximately 90 in-depth reviews of specific Medicaid issues that identified approximately $216 million in unallowable Medicaid costs. In fiscal year 2000, analysts performed only eight focused financial management reviews, but these reviews resulted in almost $45 million in disallowed costs—an average of about $5.6 million per review. As demonstrated, this control activity is effective in detecting unallowable Medicaid costs; however, it must be consistently performed for cost savings to be discovered. According to the director of DFM, the division is taking actions to improve oversight by beginning a comprehensive assessment of CMS's Medicaid oversight activities. The division would like to increase several oversight activities, such as focused financial management reviews, to address the risks identified in CMS's new risk-based approach. However, Medicaid financial managers are concerned that efforts to effectively address identified risks may be hindered without additional oversight resources.
In the interim, CMS plans to use the current oversight process (i.e., quarterly expenditure reviews and technical assistance) for targeting those Medicaid issues that the new risk analysis identifies. In assessing what steps CMS could take to more efficiently and effectively carry out its responsibility on the federal level for helping ensure the propriety of Medicaid finances, we considered strategies that other entities have used in successfully addressing risks that lead to fraud, error, or improper payments. As discussed in our executive guide on strategies to manage improper payments, key strategies include selecting appropriate control activities based on an analysis of the specific risks facing the organization, taking into consideration the nature of the organization and the environment in which it operates; performing a cost-benefit analysis of potential control activities before implementation to ensure that the cost of the activities is not greater than the benefit; and contracting out activities to firms that specialize in specific areas, like neural networking, where in-house expertise is not available. Our executive guide points out that many organizations have implemented control techniques, including data mining, data sharing, and neural networking, to address identified risk areas and help ensure that program objectives are met. These techniques could help CMS better utilize its limited resources in applying effective oversight of Medicaid finances at the federal level. Some state Medicaid agencies have already implemented data mining, data sharing, and neural networking techniques to carry out their responsibilities on the state level for ensuring Medicaid program integrity. State auditors and HHS/OIG staff have also had success using these techniques in overseeing state Medicaid programs. However, resources devoted to protecting Medicaid program integrity and the use of these techniques vary significantly by state.
From a federal standpoint, CMS should take into consideration the control activities performed at the state level in designing its Medicaid financial oversight control activities. CMS should use the results from states that are already using data mining, data sharing, and neural networking techniques in determining the extent and type of control techniques that its regional financial analysts should use in overseeing each state. And, for states where these techniques are not being used, CMS should consider using these tools in its oversight process. As illustrated in the following examples, data mining, data sharing, and neural networking techniques have been shown to achieve significant savings by identifying and detecting improper payments that have been made. Data mining is a technique in which relationships among data are analyzed to discover new patterns, associations, or sequences. The incidence of improper payments among Medicaid claims can, if sufficiently analyzed and related to other Medicaid data, reveal a correlation with a particular health care provider or providers. Using data mining software, the Illinois Department of Public Aid, in partnership with HHS/OIG, identified 232 hospital transfers that may have been miscoded as discharges, creating a potential overpayment of $1.7 million. Data sharing allows entities to compare information from different sources to help ensure that Medicaid expenditures are appropriate. Data sharing is particularly useful in confirming the initial or continuing eligibility of participants and in identifying improper payments that have already been made. We recently reported on a data sharing project called the Public Assistance Reporting Information System interstate match (PARIS) that has identified millions of dollars in cost savings for states. 
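The transfer-miscoding pattern described above can be illustrated in a few lines of Python. This is a minimal sketch of the data mining idea only; the claim fields, the matching rule (a discharge immediately followed by an admission at a different hospital on the same date), and all names are assumptions for demonstration, not the Illinois system's actual design:

```python
from datetime import date
from typing import NamedTuple


class Claim(NamedTuple):
    """Illustrative claim record (fields are assumed, not MSIS layout)."""
    patient_id: str
    hospital: str
    admit: date
    discharge: date
    status: str  # "discharge" or "transfer"


def flag_miscoded_transfers(claims):
    """Flag claims coded as discharges where the same patient was admitted
    to a different hospital on the discharge date -- a pattern consistent
    with a transfer miscoded as a discharge (illustrative rule only)."""
    by_patient = {}
    for c in claims:
        by_patient.setdefault(c.patient_id, []).append(c)
    flagged = []
    for stays in by_patient.values():
        stays.sort(key=lambda c: c.admit)
        for prev, nxt in zip(stays, stays[1:]):
            if (prev.status == "discharge"
                    and nxt.hospital != prev.hospital
                    and nxt.admit == prev.discharge):
                flagged.append(prev)
    return flagged
```

Claims flagged this way would be leads for analyst review, since a same-day readmission can also occur legitimately.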
PARIS helps states share information on public assistance programs, such as Food Stamps, and Medicaid eligibility data to identify individuals who may be receiving benefits in more than one state simultaneously. Using the PARIS data match for the first time in 1997, Maryland identified numerous individuals who no longer lived in the state but on whose behalf the state was continuing to pay a Medicaid managed care organization (MCO) as part of the MCO’s prospective monthly payment. The match identified $7.3 million in savings for the Medicaid program. Neural networking is a technique used to extract and analyze data. A neural network is intended to simulate the way a brain processes information, learns, and remembers. For example, this technique can help identify perpetrators of both known and unknown fraud schemes through the analysis of utilization trends, patterns, and complex interrelationships in the data. In 1997, the Texas legislature mandated the use of neural networks in the Medicaid program. Large volumes of medical claims and patient and provider history data are examined using neural network technology to identify fraudulent patterns. The Texas Medicaid Fraud and Abuse Detection System used neural networking to recover $3.4 million in fiscal year 2000. Based on consultations with state auditors, we noted that some auditors are performing audits that incorporate the advanced oversight techniques described above. New York and Texas are instituting data sharing and matching techniques at the state level to confirm initial eligibility of Medicaid participants and to identify improper payments that have already been made. Texas is using private contractors to design, develop, install, and train staff to use a system intended to integrate detection and investigation capabilities. This system includes a neural network that will allow the state to uncover potentially problematic payment patterns. 
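At its core, a PARIS-style interstate match reduces to finding identifiers that appear in more than one state's enrollment file. The sketch below is a simplified illustration under assumed inputs (a dictionary mapping each state to a set of enrollee identifiers); it is not the actual PARIS design, and the function name is hypothetical:

```python
def interstate_match(enrollment_by_state):
    """Find enrollee identifiers that appear in more than one state's
    Medicaid enrollment file (simplified PARIS-style match)."""
    states_by_enrollee = {}
    for state, enrollees in enrollment_by_state.items():
        for enrollee_id in enrollees:
            states_by_enrollee.setdefault(enrollee_id, []).append(state)
    # Keep only enrollees listed by two or more states.
    return {eid: sorted(states)
            for eid, states in states_by_enrollee.items()
            if len(states) > 1}
```

Matches produced this way are leads for analyst follow-up rather than proof of improper payment, since an enrollee may have moved legitimately during a quarter.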
Similarly, a large portion of the audit work that the HHS/OIG performs to oversee Medicaid expenditures for Massachusetts, Ohio, and Maine is conducted through electronic data matches of Medicaid claims data contained in the Medicaid Statistical Information System (MSIS). MSIS is the primary source of Medicaid program statistical information. As of the date of our report, 47 states were submitting Medicaid data electronically to MSIS. Information that the HHS/OIG finds as a result of electronic data matches is subsequently made available to regions and states for additional detailed work. CMS managers acknowledge that systems like MSIS could provide them with the capabilities to implement more advanced control techniques. While implementing control techniques such as data sharing, data mining, and neural networking may require an up-front investment of resources, use of these techniques has the potential to result in significant savings to the Medicaid program. Having mechanisms in place to monitor the quality of an agency’s performance in carrying out program activities over time is critical to program management. The federal internal control standard for monitoring requires that agency managers implement monitoring activities to continuously assess the effectiveness of control activities put in place to address identified risks. Monitoring activities should include procedures to ensure that findings from all audits are reviewed and promptly resolved. The standards also state that pertinent information should be recorded and communicated to managers and staff promptly, to allow effective monitoring of events and activities as well as to allow prompt reactions. However, CMS had few mechanisms in place to continuously monitor the effectiveness of its control activities in overseeing the Medicaid program and collected limited information on the quality of Medicaid financial oversight performance. 
Specifically, CMS had not established performance standards to measure the effectiveness of its control activities, in particular its expenditure review activity. In addition, the CMS audit resolution process did not ensure that audit findings were resolved promptly and did not collect sufficient information on the status of audit findings. Without effective monitoring, CMS did not have the information needed to help assure the propriety of Medicaid expenditures. DFM financial managers responsible for monitoring the effectiveness of Medicaid internal control processes had established few mechanisms to do so. CMS did not establish performance standards and did not analyze or compare trend information on the results of its control activities, including the amount and type of Medicaid expenditures deferred and disallowed by regional analysts across all 10 regions. Medicaid financial managers told us that, before 1993, CMS collected information to monitor the performance of its oversight process. The performance reporting process required each region to submit quarterly data on the amount of expenditures disallowed; the number of focused financial management reviews conducted, and the related expenditures identified and recovered as a result of the reviews; the amount of inappropriate expenditures averted by providing technical assistance to states before payment; the number of regional financial analysts and related salary costs devoted to financial oversight; and the amount of travel dollars devoted to Medicaid financial oversight. Medicaid financial managers in DFM used this information to prepare national performance reports that calculated a return on investment for each region and a national return on investment. CMS managers said that they discontinued efforts to collect, analyze, and maintain performance data after 1993 because of staff reductions in the regions and headquarters. 
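The return-on-investment figure described above can be expressed simply: dollars of improper spending identified or averted, divided by the salary and travel dollars devoted to oversight. The function below is an illustrative reconstruction; the report does not specify CMS's exact computation, and the name and inputs are assumptions:

```python
def oversight_roi(disallowed, averted, salaries, travel):
    """Dollars of improper Medicaid spending identified or averted per
    oversight dollar spent (illustrative; not CMS's documented formula)."""
    spent = salaries + travel
    if spent == 0:
        raise ValueError("oversight spending must be positive")
    return (disallowed + averted) / spent
```

A ratio like this, computed consistently for each region, is what would allow the national performance comparisons that the pre-1993 reports provided.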
DFM managers currently collect some performance information, but it is not used to evaluate regional performance. For example, staff in DFM collect information on the amount of expenditures deferred and disallowed each quarter by each region. These data are used to adjust total expenditures for financial reporting purposes but not to assess regional oversight activities. DFM also maintains a spreadsheet that includes information on the types of expenditures disallowed. This information is not distributed to regional analysts. In addition, information on the types of expenditures deferred by each regional analyst is not consolidated and disseminated across regions. Regional analysts include the types of expenditures deferred in their own regional decision reports, but do not have the benefit of nationwide information because DFM does not prepare summary reports. Comprehensive information on the type of expenditures deferred and disallowed would help identify the types of Medicaid expenditures for which improper payments commonly occur and measure whether corrective actions or control techniques applied to certain Medicaid expenditures are effective in reducing improper payments. The director of DFM told us that steps would be taken within the next year to begin monitoring the effectiveness of the Medicaid financial oversight process. Medicaid financial managers plan to reinstitute the performance reporting process that was in place prior to 1993. While this is a good step, the previous performance reporting process lacked several elements necessary for effective internal control monitoring. For example, the performance reporting process did not establish agency-specific goals and measures for evaluating regional performance in reducing payment errors and inaccuracies. 
In addition, there were no formal criteria or standard estimation methodologies for regions to use in measuring the amount of unallowable costs that the states avoided because of technical assistance provided before payment. As discussed in our executive guide, Strategies to Manage Improper Payments, establishing such goals and measures is key to tracking the success of improvement initiatives. Standards for Internal Control in the Federal Government requires that agencies’ internal control monitoring activities include policies and procedures to ensure that audit and review findings are promptly resolved. According to the standards, agency managers should implement policies and procedures for reporting findings to the appropriate level of management, evaluating the findings, and ensuring that corrective actions are taken promptly in response to the findings. In our review, we found that the audit resolution and monitoring activities performed by CMS and its regional offices were limited. In addition, we found that audit resolution activities were inconsistently performed across regions. Further, pertinent information was not identified, documented, and distributed among those responsible for audit resolution. These conditions hamper CMS’s ability to resolve audit findings promptly and slow the recovery of millions of dollars in federal funds due from the states. Within CMS, three units share responsibility for audit resolution activities related to the Medicaid program. These are regional administrators and regional financial analysts, the Division of Audit Liaison (DAL), and DFM. 
Regional administrators and regional financial analysts have responsibility to perform the following audit resolution activities required by the HHS Grants Administration Manual: coordinate resolution of findings with the pertinent auditee (i.e., state Medicaid agency or providers); ensure that the related questioned costs due the federal government are recovered within established timeframes; verify that corrective actions have been developed and implemented for audit findings; and prepare quarterly reports documenting the status of audit resolution. DAL is responsible for maintaining a tracking system for each audit report and related findings, monitoring the timeliness and adequacy of audit resolution activities, distributing all audit clearance documents, and preparing monthly reports on the status of audit resolution and collection activities. DFM has one headquarters staff person responsible for coordinating and interacting with DAL and regional analysts to ensure that Medicaid related findings are resolved. An important part of regional analyst audit resolution activities involves following up on state Single Audit Act reports. Under the Single Audit Act, state auditors issue reports that include assessments of the internal controls related to major federal programs, including the Medicaid program, and compliance with laws, regulations, and provisions of contract or grant agreements. These reports generally include findings related to weaknesses identified in the financial management of state Medicaid programs as well as expenditures deemed erroneous or improper (e.g., questioned costs) for which states may owe money back to the federal government. Regional analysts are responsible for resolving audit findings, including determining whether the questioned costs related to audit findings reported by state auditors represent actual costs to be recovered from the state, and ensuring that they are actually recovered. 
In our discussions with regional staff during our review of state single audit findings, analysts admitted that they spend very little time on resolving state audit findings due to competing oversight responsibilities. Audit follow-up is one step of many performed during their quarterly state Medicaid expenditure reviews. As a result, state single audit findings are not always resolved, and related questioned costs are not promptly recovered. For example, we identified questioned costs totaling $24 million that had not been recovered. The audit reports that included the $24 million in questioned costs had been issued for years prior to fiscal year 1999. However, as of September 30, 2001, regional analysts had not completed actions to recover these costs. In addition, we found that, as of September 30, 2001, regional analysts had not determined whether corrective actions had been developed and/or implemented to resolve 85 of a total of 288 Medicaid findings included in state single audit reports for fiscal year 1999. These findings related to problems with state financial reporting, computer systems, and cash management. Lack of timely follow-up on financial management and internal control issues increases the risk that corrective actions have not been taken by the auditee and erroneous or improper payments are continuing to be made. In our review, we also found that the regional financial analysts inconsistently followed procedures for monitoring, tracking, and reporting on the resolution of Single Audit Act and HHS/OIG audit findings. For example, 3 of the 10 regions had not prepared quarterly status reports that are intended to provide information on corrective actions that states have taken to resolve audit findings. Further, pertinent information was not identified, documented, and distributed among those responsible for audit resolution. 
The internal control standard related to information and communication provides that pertinent information be identified, captured, and distributed to the appropriate areas in sufficient detail and at the appropriate time to enable the entity to carry out its duties and responsibilities efficiently and effectively. In our review, we found that the monthly report prepared by DAL that is intended to provide a complete list of all audits with unresolved Medicaid findings did not meet this standard. We analyzed a list provided by the HHS/OIG, which included 23 Medicaid related reports issued by the HHS/OIG and state auditors in fiscal year 2001. We found four reports from the HHS/OIG list that were not included in DAL monthly reports related to the second, third, and fourth quarters of that year. This information is critical and must be distributed to the regions to ensure that they are taking action to resolve all Medicaid related findings. We also found that the regions did not document information critical to tracking unresolved audits in their regional quarterly status reports. The regions reported which audits had been resolved but did not report on audits still under review that had not yet been resolved, making it difficult to track audit status. A sound organizational structure is a key factor that contributes to whether agency management can establish a positive control environment. Standards for Internal Control in the Federal Government provides that managers should ensure that an agency organizational structure is appropriate for the nature of its operations and designed so that authority and internal control responsibility is defined and well understood. 
Although CMS’s 10 regional offices are the federal government’s frontline for overseeing state Medicaid financial operations and expenditures, there are no reporting lines to the headquarters unit responsible for Medicaid financial management and few other mechanisms to ensure performance accountability. This structural relationship has created challenges in (1) establishing and enforcing minimum standards for performing financial oversight activities, (2) routinely evaluating the regional office oversight, and (3) implementing efforts to improve financial oversight. As a result, CMS lacks a consistent approach to monitor and improve performance among the units that share responsibility for financial management and ingrain a sound internal control environment for Medicaid finances throughout CMS. During the time of our review, there were no formal reporting relationships between the regional financial analysts and CMSO’s DFM or any other division or unit within CMSO. Regional offices reported directly to the CMS administrator through their respective regional administrators. This structural relationship does not lend itself to instituting standards for oversight control activities that can be consistently and effectively implemented. To illustrate, the CMS financial management strategy workgroup, headed by the director of DFM, updated guidance for expenditure reviews in September 2000 to provide uniform review procedures and address concerns raised by auditors about the inconsistency in expenditure reviews across regions. While the guide strongly encouraged regional analysts to complete all of its procedures, it did not mandate that analysts do so. Headquarters financial managers do not have direct authority to enforce such a directive and regional managers have discretion in how resources are utilized. Similarly, the guide allowed regional branch managers wide discretion in performing supervisory review of regional analysts’ expenditure review workpapers. 
The guide provides that a supervisor can assure that the analysts’ work measures up to CMS requirements in the review guide by either directly and selectively reviewing the work papers or by obtaining written or verbal assurance from the reviewer that the procedures have been completed. Supervisory reviews are a key internal control activity. By allowing supervisors to satisfy this responsibility merely with verbal assurance, CMS is minimizing the effectiveness of this basic control. During our site visits, we found evidence that supervisory reviews were not conducted. We reviewed regional analysts’ workpapers related to reviews of quarterly expenditure reports for five states submitted for the quarter ended December 31, 2000. These five states represent the largest states within the regions visited. Analysts’ workpapers for three of the five state quarterly expenditure reviews had no evidence of supervisory “sign off” and, when asked if the supervisors had reviewed the workpapers or discussed the results of the review, the analysts said they had not. The CMS organizational structure also hindered efforts to evaluate and monitor regional office performance. Currently, there are few formal requirements for regions to report to headquarters and CMS does not collect, analyze, or evaluate consistent information on the quality of regional financial oversight for Medicaid across the country. As mentioned previously, efforts to monitor performance were discontinued because regional staff resources were not available to collect and submit the data to headquarters managers. Headquarters managers did not have the authority to require regions to collect such data. As a result, Medicaid financial managers in headquarters were not in a position to provide formal feedback to region financial management staff to improve their performance and therefore have not been in a position to assess the effectiveness of Medicaid oversight activities. 
The current organizational structure also poses challenges to implementing corrective actions aimed at addressing oversight weaknesses and improving accountability. Over the past 2 years, headquarters financial managers have taken steps to develop and implement improvements to the financial oversight process. As previously mentioned, Medicaid staff are currently developing risk analysis to identify expenditures of greatest risk, working with states to develop methodologies for estimating Medicaid payment accuracy, developing work plans that guide efforts to allocate financial oversight staff and travel resources based on the risk analysis, and developing performance-reporting mechanisms. Medicaid staff have also recently formed a financial management strategy workgroup of headquarters and regional financial management staff members to review the entire Medicaid financial oversight process and determine the proper structure for an adequate oversight process, updated its expenditure and budget review guides, and gathered information on how regional financial analyst staff time is allocated between oversight responsibilities. Headquarters DFM managers recognize that regional office commitment is critical to successfully implementing and sustaining its improvement initiatives. The current structural relationship could diminish the chances of such success. Headquarters managers expressed concern that despite recent efforts to develop risk analysis and implement work plans that allocate resources based on identified risks, regional managers will still have the authority to decide how oversight resources are used. Given the multiple oversight activities that regional financial analysts are responsible for, headquarters managers have no assurance that review areas included in the work plans will be given priority in each region. Headquarters managers may experience similar difficulties in reestablishing performance reporting. 
According to one senior Medicaid manager, some regions have already petitioned headquarters managers not to use data on the amount of expenditures deferred and disallowed in gauging performance. During our review, we asked regional financial analysts about several recent improvement initiatives to gauge their knowledge of and participation in such initiatives. Several analysts we spoke with during site visits did not think the risk assessment effort was useful because they felt that they were already aware of the risks within the states that they were responsible for and did not need a formal assessment to identify the risks. In addition, some said that they resented the headquarters managers trying to tell them where they needed to focus their efforts. In our survey, we asked regional financial analysts to rate the importance of the risk assessment, staff time allocation effort, and review guide updates to overall financial oversight. Approximately 50 percent of survey respondents thought the initiatives were of marginal or little importance. During pretests of our survey, several analysts said they did not understand the purpose of the initiatives, even though they had provided input. According to the analysts, no one had communicated to them how the information was going to be used. In discussions with headquarters managers, they acknowledged that a written plan or strategy, which describes the initiatives and the responsibility for implementing them, is currently being drafted. Such a plan or strategy could be very useful in soliciting regional analyst support. More important, headquarters managers acknowledged that performance accountability mechanisms for the regions are needed to implement improvements successfully. CMS is currently planning some changes that may improve mechanisms to hold CMS financial managers, including regional managers and administrators, accountable for critical tasks. 
A Restructuring and Management Plan recently developed by the CMS chief operating officer seeks to add specific responsibilities that are tied to specific agency goals into senior managers’ performance agreements. CMS has not determined how Medicaid financial management oversight and the various aspects of oversight responsibilities that can be evaluated will be included in the plan. Inclusion of such information is key to establishing a sound internal control environment for Medicaid finances throughout CMS. While CMS is taking steps to improve its financial oversight of the Medicaid program, the increasing size and complexity of the program, coupled with diminishing oversight resources, require a new approach to address these challenges. Developing baseline information on Medicaid issues at greatest risk for improper payments and measuring improvements in program management against that baseline are key to achieving effective financial oversight. Determining the level of state activities to monitor and control Medicaid finances is also critical to CMS determining the extent and type of control techniques as well as the amount of resources it must apply at the federal level to adequately oversee the program. Establishing clear lines of authority and performance standards for CMS oversight would also provide for a more efficient, effective, and accountable Medicaid program. CMS’s ability to make the kind of changes that are needed will require top-level management commitment, a comprehensive financial oversight strategy that is clearly communicated to all those responsible for program oversight, and clear expectations for implementation of the changes.

Recommendations for Executive Action

To strengthen Medicaid internal controls and the financial oversight process that CMS has in place to ensure the propriety of Medicaid finances, we make the following recommendations to the CMS administrator. 
We recommend that the CMS administrator revise current risk assessment efforts in order to more effectively and efficiently target oversight resources toward areas most vulnerable to improper payments by collecting, summarizing, and incorporating profiles of state financial oversight activities that include information on state prepayment edits, provider screening procedures, postpayment detection efforts, and payment accuracy studies; incorporating information from reviews of state initiatives to prevent Medicaid fraud and abuse; developing and instituting feedback mechanisms to make risk assessment a continuous process and to measure whether risks have changed as a result of corrective actions taken to address them; and completing efforts to develop an approach to payment accuracy reviews at the state and national levels. In addition, we recommend that the CMS administrator restructure oversight control activities by increasing in-depth oversight of areas of higher risk as identified from the risk assessment efforts and applying fewer resources to lower-risk areas; incorporating advanced control techniques, such as data mining, data sharing, and neural networking, where practical to detect potential improper payments; and using comprehensive Medicaid payment data that states must provide in the legislatively mandated national MSIS database. 
We also recommend that the CMS administrator develop mechanisms to routinely monitor, measure, and evaluate the quality and effectiveness of financial oversight, including audit resolution, by collecting, analyzing, and comparing trend information on the results of oversight control activities, particularly deferral and disallowance determinations, focused financial reviews, and technical assistance; using the information collected above to assess the overall quality of financial management oversight; identifying standard reporting formats that can be used consistently across regions for tracking open audit findings and reporting on the status of corrective actions; and revising DAL audit tracking reports to ensure that all audits with Medicaid related findings are identified and promptly reported to the regions for timely resolution. Finally, we recommend that the CMS administrator establish mechanisms to help ensure accountability and clarify authority and internal control responsibility between regional office and headquarters financial managers by including specific Medicaid financial oversight performance standards in senior managers’ performance agreements; and developing a written plan and strategy that clearly defines and communicates the goals of Medicaid financial oversight and responsibilities for implementing and sustaining improvements. CMS provided written comments on a draft of this report (reprinted in app. I), as well as supplementary oral comments. In its written comments, CMS outlined a series of actions it has begun to take to address its Medicaid financial management challenges. In supplementary oral comments, CMS disagreed with our recommendations related to its audit tracking and resolution reports. In outlining actions taken to address Medicaid financial management challenges, CMS stated that its efforts substantially address, within current resource constraints, the four areas of our recommendations. 
CMS improvement efforts include (1) a structured financial workplan process that has been incorporated into its formal Restructuring and Management Plan, (2) actions to strengthen the exchange of information with state oversight agencies, and (3) pilot projects aimed at clarifying authority and internal control responsibility between regional and headquarters managers. As many of these efforts are in the planning or early implementation stages, it is too soon to conclude whether they will effectively address our recommendations and improve Medicaid financial management. Additionally, given CMS concerns about resource constraints, prioritizing the planned actions and developing projected implementation schedules is key to ensuring that progress is made toward improving Medicaid financial management. In oral comments, CMS disagreed with our recommendations for strengthening its audit tracking and resolution functions. Regarding our recommendation to standardize the audit tracking reports among CMS regions, CMS stated that although the current format of audit tracking reports is not consistent across regions, the reports provide agency management with sufficient information to ensure that audit findings are resolved in a timely manner. We disagree. As stated in our report, the current reporting formats did not provide CMS with sufficient information to determine whether action had been taken to recover approximately $24 million in questioned costs identified in audit reports more than 2 years ago. Regarding our recommendation to revise its audit tracking reports, CMS stated that the reports are as complete as they can be given the information that it receives from the HHS/OIG. CMS offered a number of reasons for the lack of complete data. CMS stated that the HHS/OIG does not consistently provide timely copies of Medicaid audit reports or make audit reports available on-line in a timely manner. 
Further, CMS said that the reports do not contain the information it needs to enter the report and related findings into the CMS tracking system properly, such as audit findings categorized by type (i.e., questioned cost or management related). HHS/OIG officials acknowledged that they sometimes fail to send some audit reports that CMS is responsible for tracking and resolving but said that they attempt to provide reports promptly when CMS contacts them. In our view, CMS and the HHS/OIG share responsibility for audit resolution. Accordingly, we continue to believe that CMS needs to be proactive in ensuring its tracking mechanisms promptly identify Medicaid findings for resolution and in following up to ensure that actions are taken to prevent Medicaid financial management weaknesses from continuing. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will send copies to the chairmen and ranking minority members of the Senate Committee on Governmental Affairs and House Committee on Government Reform. We are also sending copies of this report to the secretary of health and human services, administrator of CMS, inspector general of HHS, and other interested parties. Copies will also be made available to those who request them. Please contact me or Kimberly Brooks at (202) 512-9508 if you or your staff have any questions about this report or need additional information. W. Ed Brown, Lisa Crye, Carolyn Frye, Chanetta Reed, Vera Seekins, Taya Tasse, and Cynthia Teddleton made key contributions to this report. Strategies to Manage Improper Payments: Learning From Public and Private Sector Organizations (GAO-02-69G, Oct. 2001). Public Assistance: PARIS Project Can Help States Reduce Improper Benefit Payments (GAO-01-935, Sept. 2001). Internal Control Management and Evaluation Tool (GAO-01-1008G, Aug. 2001). 
Medicaid: State Efforts to Control Improper Payments Vary (GAO-01-662, June 2001). Medicaid in Schools: Improper Payments Demand Improvements in HCFA Oversight (GAO/HEHS/OSI-00-69, Apr. 2000). Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, Nov. 1999). Medicaid Enrollment: Amid Declines, State Efforts to Ensure Coverage After Welfare Reform Vary (GAO/HEHS-99-163, Sept. 1999). The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. 
| The Medicaid program spent more than $200 billion in fiscal year 2000 to meet the health care needs of nearly 34 million poor, elderly, blind, and disabled persons. States are responsible for making proper payments to Medicaid providers, recovering misspent funds, and accurately reporting costs for federal reimbursement. At the federal level, the Centers for Medicare and Medicaid Services (CMS) oversees state financial activities and ensures the propriety of expenditures reported for federal reimbursement. GAO found that weak financial oversight by CMS leaves the program vulnerable to improper payments. The Comptroller General's Standards for Internal Control in the Federal Government requires that agency managers perform risk assessments, take steps to mitigate identified risks, and monitor the effectiveness of those actions. The standards also require that authority and responsibility for internal controls be clearly defined. CMS oversight had weaknesses in each of these areas. As a result, CMS did not know if its control efforts were focused on areas of greatest risk. CMS also was not effectively implementing the controls it had in place. Furthermore, managers had not established performance standards for financial oversight activities, particularly their expenditure review activity. Limited data were collected to assess regional financial analyst performance in overseeing state internal controls and expenditures. In addition, the CMS audit resolution procedures did not collect enough information on the status of audit findings or ensure that audit findings were resolved promptly. CMS' current organizational structure lacks clear lines of authority and responsibility between the regions and headquarters. |
Criminal enterprises generate enormous amounts of cash. To make these proceeds easier to conceal and transport, some criminal enterprises convert them into monetary instruments, such as traveler’s checks, money orders, or cashier’s checks. To combat this practice, Treasury, in implementing the requirements of the Bank Secrecy Act (BSA), requires financial institutions to report and maintain records of certain financial transactions. These reporting and recordkeeping requirements, which vary by the amount of the financial transaction, are intended to (1) assist law enforcement officials in criminal, tax, or regulatory investigations and proceedings and (2) help law enforcement officials identify suspicious and unusual financial transactions. To further assist law enforcement officials in their efforts to combat money laundering, financial institutions are urged by Treasury and federal financial industry regulators to develop an effective know-your-customer program. Know-your-customer programs are designed to encourage employees of financial institutions to become familiar with the banking practices of their customers so that they can recognize transactions that are outside the normal course of a customer’s business practices and report them as suspicious to the appropriate federal oversight agencies. In implementing BSA requirements, Treasury requires financial institutions to file a currency transaction report for each deposit, withdrawal, exchange of currency, or other payment or transfer by, through, or to the financial institution that involves more than $10,000 in currency. This requirement includes cashier’s checks. 
Because concern existed that money launderers were making financial transactions in amounts of $10,000 or less to evade the BSA reporting requirements, Congress in 1988 amended the BSA to require financial institutions to capture, verify, and retain a record of the identity of the purchasers of cashier’s checks and certain other monetary instruments purchased with currency of $3,000 or more. The Secretary of the Treasury also determined that it would be useful to criminal investigators to require banks to retain (1) either the original or a copy of certain checks, including cashier’s checks, exceeding $100 and (2) records prepared or received in the ordinary course of business that would be needed to reconstruct a customer’s deposit account and to trace through the bank’s processing system a check in excess of $100 deposited in an account. Treasury requires that these records be retained for 5 years and be made readily available to the Secretary of the Treasury upon request. In addition, after it had received inquiries from financial institutions about whether suspicious transactions should be reported and what information should be reported, the Department of the Treasury issued Administrative Ruling 88-1 on June 22, 1988. This ruling encouraged but did not require financial institutions to report transactions that might be “...relevant to a possible violation of the BSA or its regulations or indicative of money laundering or tax evasion” to the local office of the Internal Revenue Service’s (IRS) Criminal Investigation Division (CID). Also in 1988, the Comptroller of the Currency’s regulation at 12 C.F.R. section 21.11 and corresponding regulations issued by the other bank regulatory agencies required financial institutions to report suspected money laundering and/or BSA violations and provide a copy of these reports to the local office of IRS’ CID. 
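The tiered dollar thresholds just described lend themselves to a simple decision rule. The sketch below is illustrative only: the function and label names are our own, and it encodes the thresholds as this report summarizes them, with purchaser-identity records applying to the $3,000-to-$10,000 band because larger currency transactions instead trigger a currency transaction report.

```python
def bsa_requirements(currency_amount):
    """Illustrative sketch (names are our own) of the tiered BSA
    recordkeeping and reporting requirements described in this report
    for a cashier's check bought or exchanged with currency."""
    required = []
    if currency_amount > 10_000:
        # More than $10,000 in currency: file a currency transaction report.
        required.append("currency transaction report")
    elif currency_amount >= 3_000:
        # $3,000 up to $10,000: capture, verify, and retain the
        # purchaser's identity.
        required.append("purchaser identity record")
    if currency_amount > 100:
        # Checks exceeding $100: retain the original or a copy, plus the
        # records needed to reconstruct the account and trace the check.
        required.append("copy of check and supporting records")
    return required
```

Under this sketch, a $12,000 currency purchase would call for both a currency transaction report and retention of a copy of the check, while a $5,000 purchase would call for a purchaser identity record instead of the report.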
A 1992 amendment to BSA prohibits financial institutions from notifying persons involved in suspicious transactions that the transaction had been reported to IRS. Table 1 summarizes the current recordkeeping and reporting requirements for cashier’s checks. In 1990, Treasury developed a regulation to implement the 1988 amendment to BSA that required financial institutions to capture, verify, and retain information on the identity of purchasers of cashier’s checks and other monetary instruments. After considering several alternative recordkeeping requirements, including a requirement that information be kept on copies of monetary instruments and be retrievable by copy, Treasury concluded that maintaining a log of the BSA-required information would be the most effective method of keeping the information. Imposing a specific requirement that financial institutions maintain the BSA-required information on copies of monetary instruments was viewed as too burdensome because, according to Treasury officials, it would require financial institutions to sift through thousands of documents located at various branches to comply with a Treasury request for purchaser information. Treasury also took into consideration that financial institutions keep different kinds of records for each type of monetary instrument and decided that a log would make the BSA information more easily accessible by both the financial institutions and the Treasury Department. Treasury’s August 1990 regulation requiring the log did not specify the form in which the log was to be maintained. In addition, the 1990 regulation allowed for but did not require that a separate log be maintained for each type of monetary instrument. Treasury anticipated that it would request copies of logs by date of issuance rather than by customer name, account number, or type of monetary instrument. 
Subsequent to the institution of the log requirement, Treasury found that the BSA information that was being logged on the sale or exchange of cashier’s checks for currency was seldom used by law enforcement officials and federal regulators to initiate or conduct money laundering investigations. Compliance with the log requirement was found to impose an expensive and time-consuming burden on the financial industry. As a result, in October 1994, Treasury rescinded the log requirement. Treasury now permits financial institutions to maintain the required BSA information in any format they choose, as long as the information can be readily retrieved at the request of the Secretary of the Treasury. Federal regulators, financial industry officials and advisory groups, and law enforcement officials with whom we spoke or who had expressed their views in published documents agreed that the BSA recordkeeping requirements that remain after the rescission of the log requirement, together with the renewed emphasis on having financial institutions (1) develop effective know-your-customer programs and (2) report suspicious financial transactions, are sufficient for financial institutions issuing cashier’s checks. In addition, they agreed that imposing additional recordkeeping requirements, such as one that would specifically require financial institutions to retrieve copies of cashier’s checks by customer name or account number, would not add to the effectiveness of the current BSA recordkeeping requirements. Federal regulators, financial industry officials and advisory groups, and law enforcement officials with whom we spoke or who had expressed their views in published documents supported Treasury’s decision to rescind the log requirement for cashier’s checks and other monetary instruments. 
Reasons cited included the time and effort it took to retrieve the required BSA information on specific purchasers, the limited usefulness of the data retrieved, and the expense associated with maintaining the data. In 1993, Treasury formed a money laundering task force to consider ways to reduce the regulatory burden of complying with BSA while enhancing the utility of the information collected. In 1994, the task force concluded that the BSA information that financial institutions were required to maintain in logs had been infrequently requested and used by law enforcement officials. In addition, the task force and representatives of the financial services industry found that compliance with the log requirement imposed an expensive and time-consuming burden on financial institutions when weighed against more immediate leads in the hands of law enforcement officials, such as reports of suspicious transactions that were being sent directly to IRS. Criminal investigators from IRS and the FBI said that, because of other leads and the ease of utilizing information obtained from direct reporting of suspicious criminal activities, including suspicious-transaction reports, the logged BSA data on the sale or exchange of monetary instruments were used infrequently. They said that the logged BSA information was used on a limited basis, primarily to build a stronger case against a suspect or for further investigation or research. Representatives of financial institutions said that they found the log requirement to be costly and burdensome. To avoid the requirement, some financial institutions prohibited the direct sale of monetary instruments for cash to both deposit and nondeposit customers. Under this policy, customers were required to deposit cash into an account from which a financial institution could then issue a withdrawal to pay for the monetary instrument. 
Many bankers had indicated their preference for policies prohibiting the sale of monetary instruments for cash because this lessened the possibility of errors and omissions on the logs and eliminated the additional paperwork created by the log requirement. The American Bankers Association estimated in October 1994 that the elimination of the log requirement could save the financial industry about $1 million a year in compliance costs and ease the administrative burden on financial institutions. Federal regulators, financial industry officials and advisory groups, and law enforcement officials with whom we spoke or who had expressed their views in published documents agreed that increased emphasis is currently being placed on developing effective know-your-customer programs and suspicious-transaction reporting, that banks are required to retain copies of certain monetary instruments, and that financial institutions are required to obtain purchaser identifying information. They further agreed that these requirements are sufficient for assisting law enforcement officials in their efforts to detect and further investigate the use of monetary instruments to launder money. Treasury consulted with a BSA advisory group composed of 30 representatives from the financial services industry, trades, businesses, and federal and state governments. Treasury concurred with the BSA advisory group’s conclusion that financial institutions’ resources could be more effectively used to assist law enforcement officials if more emphasis were placed on (1) developing effective know-your-customer programs and (2) reporting suspicious financial transactions to the appropriate regulatory agencies. The American Bankers Association also agreed with this conclusion. 
Treasury and federal financial regulators have increased their efforts to alert financial institutions to be more aware that the institutions may be misused by criminals who engage in financial transactions to conceal illegal proceeds and avoid federal currency transaction reporting requirements. Financial institutions are being encouraged to become more familiar with the banking practices of their customers—commonly referred to as the know-your-customer program—so that transactions that are outside the norm can be readily identified and reported to appropriate regulatory agencies as suspicious. Treasury expects to issue federal guidelines on developing know-your-customer programs and reporting suspicious transactions in 1995. In the absence of such guidelines, federal bank regulators and financial industry groups have for some time provided guidance to their members either in writing or through seminars that address the importance of know-your-customer programs and suspicious-transaction reporting. These guidelines and seminars provided tips to financial institutions for detecting the use of cashier’s checks and certain other monetary instruments to launder money. Appendix II provides information on guidance provided by the three major bank regulatory agencies and on money laundering seminars held by financial industry groups to inform their members. Law enforcement officials responsible for combating money laundering activities with whom we spoke said that in light of the increased emphasis being placed on the development of know-your-customer programs and the reporting of suspicious transactions, no additional recordkeeping requirements are needed beyond those that are already in place. IRS and FBI criminal investigators said that they support the efforts of federal regulators to encourage financial institutions to place more emphasis on reporting suspicious transactions. 
These law enforcement officials said that current efforts to promote direct reporting of suspicious transactions would be more beneficial to them than searching through logs of information, because direct reporting would provide a more immediate and direct lead to criminal investigators. They also said that the increased emphasis on developing know-your-customer programs and reporting suspicious transactions, together with the ongoing requirement that financial institutions retain information on purchasers of monetary instruments, should improve law enforcement’s ability to detect and further investigate the use of monetary instruments to launder money. Federal regulators, financial industry and advisory groups, and federal law enforcement officials with whom we spoke or who had expressed their views in published documents agreed that current recordkeeping requirements for cashier’s checks—together with the renewed emphasis being placed on the development of effective know-your-customer programs and suspicious-transaction reporting requirements—are sufficient means for assisting law enforcement officials in their efforts to combat the use of cashier’s checks and certain other monetary instruments to launder money. In June 1995, we requested comments on a draft of this report from the Secretary of the Treasury or his designee, the Commissioner of IRS or her designee, and the American Bankers Association. In written responses, the Director of Treasury’s Financial Crimes Enforcement Network, the IRS Assistant Commissioner of Criminal Investigations, and the Senior Federal Counsel on Government Relations and Retail Banking of the American Bankers Association all agreed with the information presented and the conclusion reached. 
We are sending copies of this report to the Secretary of the Treasury, the Director of Treasury’s Financial Crimes Enforcement Network, the Commissioner of Internal Revenue, the IRS Assistant Commissioner of Criminal Investigations, the Attorney General, the Chief of the FBI’s Economic Crimes Unit, and other interested parties. We will also make copies available to others upon request. This report was prepared under the direction and guidance of Chas. Michael Johnson, Evaluator-in-Charge. Please contact me at (202) 512-8777 if you have any questions concerning this report. As agreed with the Committees, we limited the scope of our review to (1) identifying current recordkeeping requirements and (2) determining the views of federal government and financial industry officials on the need for additional recordkeeping requirements for financial institutions issuing cashier’s checks. To familiarize ourselves with how cashier’s checks are issued and to identify recordkeeping and reporting requirements imposed on financial institutions issuing cashier’s checks, we reviewed pertinent provisions of the Bank Secrecy Act, relevant federal rules and regulations, and published material such as financial and legal industry reports on BSA compliance. We also interviewed officials from the Department of the Treasury’s Financial Crimes Enforcement Network and IRS’ Criminal Investigation Division (CID) to obtain their views on the level of compliance with these requirements and the need for additional requirements. We obtained the views of the Senior Federal Counsel on Government Relations and Retail Banking of the American Bankers Association on the cost and impact of current, previous, and proposed recordkeeping and reporting requirements for cashier’s checks. We also discussed the use of these logs by law enforcement officials and obtained the American Bankers Association’s views on whether additional recordkeeping and reporting requirements for cashier’s checks are needed. 
We met with law enforcement officials of IRS’ CID and the FBI’s Economic Crimes Unit to ascertain what problems, if any, they may have with current, previous, and proposed recordkeeping and reporting requirements. We also discussed whether improvements are needed to assist them in their efforts to combat the use of cashier’s checks and other monetary instruments to launder money. We consulted with officials from the Banking and Supervision units of the Federal Reserve Board (FRB), Office of the Comptroller of the Currency (OCC), and Federal Deposit Insurance Corporation (FDIC) in Washington, D.C., to obtain their views on current, previous, and proposed recordkeeping and reporting requirements and to identify efforts undertaken by the banking industry to ensure compliance with BSA and regulatory requirements. We discussed steps taken by these bank regulators to combat the use of cashier’s checks and other monetary instruments to launder money and reviewed relevant agency documents relating to detecting and deterring money laundering. We could not address the extent to which cashier’s checks have been involved in money laundering schemes because no statistical data existed. We did our review in accordance with generally accepted government auditing standards from November 1994 through March 1995 at the Department of the Treasury in Washington, D.C.; at IRS’ CID in Washington, D.C., and Alexandria, VA; at the FBI in Washington, D.C.; and at various financial and regulatory organizations in Washington, D.C. In the absence of standard know-your-customer guidelines from Treasury, federal bank regulators have issued guidance that addresses the importance of developing effective controls to detect and report, among other things, the suspected use of cashier’s checks to launder money. For example, OCC has periodically reissued a pamphlet to national banks entitled Money Laundering: A Banker’s Guide to Avoiding Problems. 
In a June 1993 update of this pamphlet, OCC reemphasized that know-your-customer policies are a bank’s most effective weapon against being used unwittingly to launder money. The OCC pamphlet stated that knowing your customers includes requiring appropriate identification and being alert to unusual or suspicious transactions, including those involving cashier’s checks or other monetary instruments. The OCC pamphlet also highlighted suspicious activities that bank employees should look for and included a discussion of ways bank customers may attempt to avoid BSA reporting requirements. In March 1991, FDIC provided guidance to state nonmember banks on reporting suspicious transactions. The guidance encouraged these banks to be alert to the possibility that they may be misused by persons who are intentionally attempting to evade the BSA reporting requirements or who are engaging in transactions that may involve money laundering. In January 1995, FRB provided guidance to its member banks outlining the importance of know-your-customer programs and the detection and reporting of suspicious transactions. FRB guidance to its members emphasized that it is imperative that financial institutions adopt know-your-customer guidelines or procedures to ensure the immediate detection and identification of suspicious activity at the institution. FRB’s January 1995 guidance noted that an integral part of an effective know-your-customer policy is to have comprehensive knowledge of the transactions carried out by a customer in order to be able to identify transactions that are inconsistent. In addition, informative publications have been issued and various money laundering conferences and seminars have been held to discuss new developments and changes in the oversight of criminal activities to launder money. These efforts have involved federal regulators, law enforcement and financial industry groups, and trade associations. 
For example, the American Bankers Association, in conjunction with the American Bar Association’s Criminal Justice Section, periodically holds Money Laundering Enforcement Seminars to highlight Treasury initiatives in the money laundering area. An October 1994 seminar, sponsored by the American Bankers Association and the American Bar Association, addressed a proposal for mandatory suspicious-transaction reporting and the need for banks to develop know-your-customer programs. The seminar also included a discussion on the use of monetary instruments to launder money. The American Bankers Association estimated that it alone had trained 75,000 to 100,000 bankers in the past 8 years through these seminars. The following are some examples, highlighted in the guidance provided to financial institutions, of activities that might be considered inconsistent with a customer’s normal business activity: an account that shows frequent deposits of large bills for a business that generally does not deal in large amounts of cash; accounts with very large volumes of deposits in cashier’s checks, money orders, and/or wire transfers when the nature of the account holder’s business does not justify such activity; and deposits of numerous checks but rare withdrawals of currency for daily operations. The following are some examples of other customer activities that may trigger suspicious-transaction reports: a reluctance on the part of the customer to produce identification or provide personal background information when opening an account or purchasing monetary instruments above a specified threshold, a customer’s taking back part of the currency to reduce the purchase to below $3,000 after becoming aware of the financial institution’s recordkeeping requirement, and a customer’s coming into the same institution on consecutive or near-consecutive business days to purchase cashier’s checks in amounts of less than $3,000. 
| Pursuant to a legislative requirement and a congressional request, GAO provided information on: (1) the current recordkeeping requirements for cashier's checks; and (2) whether federal government and financial industry officials believe that additional recordkeeping requirements should be imposed on those financial institutions issuing cashier's checks. GAO found that the Bank Secrecy Act (BSA) requires financial institutions issuing or exchanging cashier's checks to: (1) file a currency transaction report for financial transactions over $10,000; (2) capture and retain purchaser information for transactions between $3,000 and $10,000; (3) retain copies of cashier's checks for amounts over $100; and (4) maintain a record of certain check transactions exceeding $100. In addition, GAO found that federal government and financial industry officials believe that the current recordkeeping and reporting requirements are sufficient, and imposing additional requirements would not add to the effectiveness of the current BSA recordkeeping requirements. |
Businesses in the United States (including farmers) are generally structured in one of four forms: sole proprietorship, partnership, S corporation, and corporation. Each form of business has distinctive legal characteristics and different tax consequences. Corporate income is taxed at the entity level, with dividends also included in the owners’ income; sole proprietors are taxed as individuals; and the income earned by S corporations and partnerships is passed through to their owners and taxed at the owners’ rates. The Internal Revenue Code (IRC) distinguishes small businesses from larger businesses in a number of ways, and IRS has used different definitions of small business for different internal operating purposes. As part of its current reorganization effort, IRS has developed an agencywide definition of small business, which we used in our work. Based on that definition, small businesses include (1) all farmers and sole proprietorships and (2) partnerships, S corporations, and corporations that annually reported less than $5 million in assets. A large majority of all businesses are small businesses. To illustrate, for tax year 1995, we used IRS data to identify approximately 23.4 million businesses that filed returns. Of this population, 94 percent of partnerships reported total assets of less than $5 million, and 98 percent of S corporations reported total assets of less than $5 million. About 97 percent of U.S. corporations reported assets of less than $5 million in 1995. Sole proprietorships accounted for approximately 16.3 million of the nearly 23.4 million business filers in 1995. To identify the federal filing, reporting, and deposit requirements that apply to small businesses, we examined information such as IRS forms, publications, manuals, and related IRC provisions. 
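The agencywide small-business definition described above reduces to a simple predicate. The sketch below is ours, not IRS’s; the entity-type strings are hypothetical shorthand for the four business forms.

```python
def is_small_business(entity_type, reported_assets=0):
    """Sketch of the agencywide IRS small-business definition described
    above; the entity-type strings are our own shorthand."""
    if entity_type in ("farmer", "sole proprietorship"):
        # All farmers and sole proprietorships count as small businesses.
        return True
    if entity_type in ("partnership", "S corporation", "corporation"):
        # Other forms qualify only with less than $5 million in
        # annually reported assets.
        return reported_assets < 5_000_000
    raise ValueError(f"unrecognized entity type: {entity_type}")
```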
We also interviewed IRS officials who were cognizant of small business tax issues, including officials in IRS’ Office of Public Liaison and Small Business Affairs, Compliance Research Division, Appeals Division, and the Small Business/Self-Employed/Supplemental Income Team. To help ensure completeness, we had knowledgeable IRS officials review our listing of IRS requirements. To develop a more comprehensive list of federal tax requirements that apply to small businesses, we contacted the Bureau of Alcohol, Tobacco, and Firearms (ATF) in the Department of the Treasury for information on requirements pertaining to the excise taxes that it administers. The ATF requirements apply generally to any business. To determine the actual experience of small businesses in meeting their filing, reporting, and deposit requirements, including their involvement in IRS’ enforcement processes, we (1) interviewed IRS compliance officials specializing in examination, appeals, and collection issues to ascertain how small businesses are affected by the enforcement processes and (2) obtained some computerized information from IRS databases on the filing and enforcement experience of small businesses. This included data from IRS’ Statistics of Income Division, Audit Information Management System (AIMS), and Accounts Receivable databases. Throughout this review, we drew on our previous work at IRS and on tax administration issues in general. Our work did not address IRS activities related to small businesses that had not filed returns. Also, we experienced several limitations during the course of our work because much of the data we sought were not collected in IRS’ information systems and databases or were not sufficiently reliable. To obtain and analyze the available data, we often had to rely upon sampling, matching, and ad hoc techniques. In addition, data were not available for a single year across all variables, so we had to use data from different years as needed. 
We did not verify the reliability of the IRS data we used except for some limited checking. Appendix I describes the data limitations we experienced in more detail. We requested comments on a draft of this report from IRS. Their comments are reprinted in appendix IV. Our work was performed in accordance with generally accepted government auditing standards between September 1998 and May 1999 at IRS headquarters in Washington, D.C. Small businesses, like large businesses, are subject to multiple layers of filing, reporting, and deposit requirements that reflect how the business is organized, whether it has employees, and the nature of its business operations. By our count, there are more than 200 requirements—which we grouped into four layers—that may apply to small businesses as well as larger businesses and other taxpayers. The requirements are designed to implement a variety of tax policies. They provide a way not only to collect taxes from businesses, but also to use businesses to collect taxes owed by third parties (e.g., withholding employees’ personal income tax and Social Security and Medicare (FICA) taxes). It is highly unlikely that any business would need to complete all 200 requirements. This is because the forms, schedules, and other requirements that apply to a particular small business reflect how the business is organized, whether it has employees, and the nature of its business operations. Although a few of the requirements must be submitted more frequently than once a year, the vast majority are submitted annually. Appendix II provides a listing of all the requirements that we identified. They reflect the decisions of Congress and the executive branch in keeping with their policy goals and objectives. The requirements with which a small business must comply depend upon how it is organized—sole proprietorship, partnership, S corporation, and corporation. 
Each business type has its own primary income tax return, some of which include a set of schedules embedded in the form. For example, the primary corporate income tax return, Form 1120, U.S. Corporation Income Tax Return, contains eight embedded schedules. To support their primary income tax return, certain types of businesses and individuals with business income must also attach a mandatory schedule to their return. (See table 1.) For example, sole proprietorships must file Form 1040, U.S. Individual Income Tax Return, and Schedule C, Profit or Loss From Business. As pass-through entities, partnerships and S corporations each have two separate sets of returns–one for the entity and one for its owners. In addition to the primary income tax return filed by the entity, each owner must file a Form 1040 and a Schedule E, Supplemental Income and Loss. A small business' decision to hire employees adds a second layer of tax requirements. We identified 10 different federal employment tax deposit requirements that potentially apply to small businesses. The number of employment tax filings and deposits depends on the number of employees and the resulting employment tax liability owed at a particular time. (See table 2 and table II.2 in app. II.) For each employee, a small business is generally responsible for collecting and remitting several federal taxes with varying frequency stipulations–withholding for employees' personal income tax and the employee's share of FICA, the employer's share of FICA, and federal unemployment tax (FUTA). A small business employer must report quarterly the amount of personal income tax withheld and FICA taxes paid for employees on Form 941, Employer's Quarterly Federal Tax Return. The employer must deposit employee income tax and FICA taxes withheld and the employer's share of FICA taxes by mail or electronically either quarterly, monthly, semiweekly, or the next business day, depending on the employer's tax liability. 
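The deposit-frequency decision described above can be sketched in a few lines. This is a simplified illustration only: the $100,000 next-day and $2,500 de minimis cutoffs below are assumptions added for the sketch, and the actual deposit regulations contain more conditions than this report summarizes.

```python
# Simplified sketch of choosing an employment tax deposit schedule.
# The dollar thresholds are illustrative assumptions, not a complete
# statement of the IRS deposit rules in effect for any given year.

def deposit_schedule(lookback_liability, accumulated_liability):
    """Pick a rough deposit frequency for withheld income tax and FICA.

    lookback_liability    -- employment tax reported during the lookback period
    accumulated_liability -- tax accumulated since the last deposit
    """
    if accumulated_liability >= 100_000:   # assumed one-day rule threshold
        return "next business day"
    if lookback_liability < 2_500:         # assumed de minimis threshold:
        return "quarterly"                 # pay with the quarterly Form 941
    if lookback_liability <= 50_000:       # assumed lookback-period cutoff
        return "monthly"
    return "semiweekly"

print(deposit_schedule(40_000, 5_000))   # monthly
print(deposit_schedule(60_000, 5_000))   # semiweekly
```

The point of the sketch is only that a single employer's schedule is a function of two liability figures, which is why the number of deposits a small business must make varies so widely from one employer to another.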
If total deposits of withheld income and FICA taxes were more than $50,000 in the second year preceding the current year, the employer must make electronic deposits using the Electronic Federal Tax Payment System (EFTPS). In addition, a small business employer must annually report and quarterly deposit FUTA taxes separately from FICA and withheld income tax. Lastly, an employer must send a federal Form W-2, Wage and Tax Statement, to each of its employees and file federal Forms W-3, Transmittal of Wage and Tax Statements, and W-2 with the Social Security Administration. In sum, hiring employees—even just one employee—is a critical decision for small businesses in terms of their tax liability and the complexities of the tax administration processes that they face. The decision to offer employee pension, fringe, and welfare benefit plans adds another layer of requirements for a small business. Some benefit plans may substantially increase the number of filing requirements that small businesses face, while others are simplified and entail few, if any, filing requirements. We counted over 10 filing and reporting requirements pertaining to benefit plans, including requirements like the Form 5500 series and related schedules. Certain pension plans are tailored to small businesses and self-employed individuals, offering them a tax-favored way to save for retirement. Simplified Employee Pensions (SEP), Savings Incentive Match Plans for Employees (SIMPLE), and Keogh plans offer small employers and self-employed individuals a deduction for contributions to the plan and deferral of tax on income of the plan. Generally, SEP and SIMPLE plans are less complex than Keogh plans, and while businesses must maintain records about the plans, they do not have any separate filing or reporting requirements with IRS. 
Keogh plans offer certain benefits not offered by SEP and SIMPLE plans, but they tend to be more complex and entail substantial filing and reporting requirements with IRS using Form 5500 and related schedules. In addition, most fringe and welfare benefit plans entail filing and reporting requirements with IRS using the Form 5500 series. (For a complete list, see table II.3 in app. II.) The remaining tax requirements that potentially apply to small businesses depend upon the nature of the business activities. A few of these requirements are specific to a type of business, but most are generally applicable to all businesses. For example, there are requirements that pertain to the depreciation of assets, the sale of business property, and claims for the credit for increasing research activities. These requirements, of which there are nearly 140, range across income taxes, excise taxes, and information reporting. (For a complete list, see tables II.4, II.5, and II.6 in app. II.) Some of these requirements are used to implement provisions in the Internal Revenue Code that can benefit small (and other) businesses. For example, businesses must complete Form 8861, Welfare-to-Work Credit, to receive a tax credit for hiring long-term family assistance recipients. Also, businesses must complete Form 4562 to claim deductions for depreciation and amortization of business assets or to make the election to immediately expense the cost of certain property. The election to expense property allows the taxpayer to take an immediate deduction instead of using the depreciation schedules to recover a portion of the costs annually over the property's useful life. (The total cost that may be expensed is $18,500 for 1998.) Among excise taxes alone, we identified about 70 requirements that potentially apply to small businesses. (For a complete list, see tables II.5 and II.6 in app. II.) Generally, though, most small businesses are not responsible for filing excise taxes. 
According to IRS, fewer than 800,000 small businesses filed excise tax returns in 1997. IRS and ATF administer many of the federal excise taxes. The excise taxes administered by IRS consist of several broad categories, including environmental taxes, communications taxes, fuel taxes, retail sale of heavy trucks and trailers, luxury taxes on passenger cars, and manufacturers' taxes on a variety of different products. ATF administers excise taxes on the production, sale, or import of guns, tobacco, or alcohol products or the manufacture of equipment for their production. Limitations in IRS' information systems prevented us from fully determining the extent to which small businesses actually filed various required forms and schedules, which businesses made deposits, or the extent of small businesses' involvement in IRS' enforcement processes. We were, however, able to obtain and analyze limited data on small businesses' filing of income tax forms and on some aspects of small businesses' involvement in IRS' enforcement processes. The data limitations currently hinder IRS' ability to effectively manage its activities and serve small businesses and, as IRS has acknowledged, will continue to be a serious impediment until the systems are improved. (For a more detailed discussion of IRS' data limitations, see app. I.) Although we were not able to obtain data on most types of requirements, we were able to obtain information pertaining to small business income tax requirements. Our analysis of 1995 IRS data for approximately 44 forms and 46 related schedules that IRS believes are those most commonly filed showed that small businesses, on average, filed one secondary form in addition to their primary income tax return, with little variation among the different types of business. The most commonly filed secondary income tax form among the 44 was Form 4562, Depreciation and Amortization. 
Approximately 74 percent of farmers, 62 percent of partnerships, 69 percent of S corporations, and 73 percent of corporations filed the depreciation and amortization form in 1995. The rate for sole proprietorships was lower: slightly less than 40 percent filed the depreciation and amortization form in 1995. The number of schedules small businesses submitted varied, depending on the type of business. This includes the mandatory schedules filed with the primary return by certain business types and individuals with business income as well as secondary schedules and other schedules embedded in the primary tax return. On average, sole proprietorships and corporations filed approximately three schedules, while farmers filed slightly less than three. Partnerships and S corporations filed more schedules than other types of businesses. Partnerships filed approximately 11 schedules, and S corporations filed approximately 6 schedules, on average, in 1995. The counts are higher for partnerships and S corporations because of their unique structure as pass-through entities. Partnerships and S corporations must file a Schedule K-1, Partner's or Shareholder's Share of Income, with IRS for each partner or shareholder. As a result, Schedule K-1 filings accounted for a significant proportion of the multiple schedules filed by partnerships and S corporations in that year. IRS did have information on federal employment taxes, but it could not be broken out by small businesses. Further, IRS did not have sufficient, reliable data on the number of small businesses that filed pension forms in 1995. We were able to obtain very limited disaggregated data on the number of pension forms filed in 1995. However, the results could not be projected to the larger population of small businesses. We worked with IRS' Employee Plans/Exempt Organizations Division to obtain a sample of the number of small businesses that filed Form 5500, using an employer identification number match. 
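A match of this kind can be sketched as a set intersection of employer identification numbers (EIN). The EIN values and list sizes below are hypothetical, not taken from the IRS data:

```python
# Hypothetical sketch of an employer identification number (EIN) match:
# find the EINs from a sample of small businesses that also appear
# among Form 5500 filers. All EIN values here are made up.

def match_eins(sample_eins, filer_eins):
    """Return, in sorted order, the sample EINs that also filed Form 5500."""
    return sorted(set(sample_eins) & set(filer_eins))

sample = ["12-3456789", "98-7654321", "11-1111111", "55-5555555"]
filers = ["98-7654321", "11-1111111", "22-2222222"]

print(match_eins(sample, filers))   # ['11-1111111', '98-7654321']
```

In practice a match like this only identifies businesses present in both files; it says nothing about filers missing from either file, which is one reason results from such a sample cannot be projected to the larger population.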
From a sample of 65,701 small business employer identification numbers, we matched 11,585 that filed Form 5500. The data indicated that 1,090 sole proprietorships, 824 partnerships, and 9,671 corporations filed forms from the Form 5500 series in 1995. IRS had limited data on the extent to which small businesses are involved in both examination and collection activities. We obtained limited data on audit rates; duration; recommendations, such as refunds, no change, or changes recommended by IRS examiners; and appeals and petitions. When IRS has indications that a small business may have failed to meet one or more of the aforementioned requirements, the business can become involved in IRS’ enforcement processes. These processes are basically the same for small businesses as for other taxpayers. They involve examining returns for potential errors or compliance problems, notifying taxpayers of suspected discrepancies, settling disputes over additional taxes recommended, and collecting taxes assessed. (App. III provides a simplified picture of IRS’ audit and dispute resolution process.) IRS’ primary technique for assessing compliance with tax laws is to examine the accuracy of the tax reported on filed tax returns. In selecting returns to be audited, IRS attempts to focus on those it believes are most likely to have compliance problems. IRS data showed that about 2.3 percent of the income tax returns filed by small businesses in 1997 were audited, generally through audits conducted by IRS’ district offices. By contrast, IRS audited 1.3 percent of all returns filed in 1997. The audit rate for sole proprietors (individuals filing Schedule C) was 3.2 percent, compared to 1.2 percent for individuals not filing Schedule C. According to IRS officials, the audit rate for small business taxpayers is higher than the overall rate because small businesses tend to have more compliance problems than other taxpayers. 
A common kind of problem that small businesses can face is in the area of employment tax compliance. A small business can fall short of operating capital, and as a consequence, it may divert some or all of its estimated tax deposits or employment tax withholdings to make up the shortfall, hoping to pay IRS at a later date. According to IRS officials, the amount of these unpaid taxes, penalties, and interest can pyramid quickly. The danger is that a business that must rely on these funds for working capital is likely to have other liabilities and delinquencies that reflect financial problems so severe that it cannot recover. Table 3 provides detailed information on the audit rates in 1997 for the four types of small businesses and farmers. IRS has little data on the burden imposed by its audits of small businesses and other taxpayers. IRS’ AIMS database does include information on the length of audits from an administrative perspective—that is, the number of days from when IRS’ auditors first begin working on a case until the case is closed. This information, although not a complete indicator of the burden that IRS audits imposed on small businesses, does provide some insight on the time it takes to go through the audit process. During an audit, a taxpayer is likely to spend some time searching for documentation, responding to IRS’ inquiries, and meeting with IRS examiners. However, over the course of the audit, there are also periods of time when the taxpayer is not actively involved, such as while IRS is evaluating records and researching issues, or when IRS examiners are temporarily reassigned to other cases or activities. Table 4 provides data on the average length of small business IRS audits that were closed in 1995. As shown in the table, on average, the audits lasted less than 1 year, and nearly half closed in less than 6 months. Still, some audits—especially those of partnerships—lasted much longer. 
For example, more than half of the audits of partnerships lasted more than 1 year, as did nearly 30 percent of those for S corporations and corporations. Also, although not shown in the table, 16 percent of partnership audits and from 1 to 3 percent of the audits for other types of small businesses continued beyond 4 years. According to IRS’ Examination Division officials, a variety of factors can lengthen the time it takes to complete a business audit. Examples cited included (1) difficulties in scheduling appointments with taxpayers, (2) the time required for a taxpayer to assemble needed information, and (3) the fact that some audits involve highly complex business tax issues requiring extensive research and investigation. With respect to audits of partnerships in particular, the officials said that the audits tend to take longer because IRS must secure and examine each partner’s return in addition to the partnership return. An audit of a partnership’s return remains open as long as any partner is in disagreement with any audit issue. Also, audits of partnerships often require special procedures and analyses, such as reviewing the linkages between the returns filed by the partnership and each of the partners and checking the application of complex tax laws affecting partnerships in some circumstances. Small business audits often result in recommendations for the assessment of additional taxes and penalties. For the small business audits closed in 1995, 67 percent resulted in a recommended change to the reported tax liability or refundable credits, while about 33 percent resulted in no such changes. Some audits resulting in no change to the reported tax liability did result in changes to other return items deemed significant by IRS examiners. For example, net loss, which can be carried forward and claimed in future years, may have been overstated on the return and adjusted by IRS. 
Table 5 provides more detailed information on IRS examiners' recommendations on audits closed in 1995. In considering the information presented, it is important to note that the audit recommendations do not equate to final audit outcomes. For example, recommendations may be partially or fully overturned in IRS appeals or in court decisions. IRS' 1995 data show that small businesses appealed to IRS or filed court petitions in 8 percent of the audits in which IRS recommended additional tax and penalties. Table 6 provides more detailed information on audits that were appealed and petitioned by each small business category. According to IRS Appeals officials, the lower appeals rates for sole proprietorships and farms may reflect the fact that their returns generally involve less complex tax issues, which leads to fewer potential tax disagreements. Similarly, the officials attribute the much higher appeals rate for partnerships to the complexity of the tax laws affecting partnerships and their returns. IRS' collection process starts at the point IRS identifies a taxpayer as not having paid the amount of tax due as determined by the tax assessment. First, IRS is to send the taxpayer a notice (or series of notices) stating the amount owed. If the amount is not paid, IRS is authorized to employ enforcement powers to collect what is owed. IRS can refer the delinquency to an automated collection system call site, where an employee calls the taxpayer by telephone and asks for payment. The payment arrangements may include installment agreements or an offer-in-compromise from the taxpayer if the full amount owed cannot be paid. Information about large and chronic tax delinquencies can be referred directly to one of IRS' 33 district offices, where IRS revenue officers may contact the taxpayer in person. According to IRS officials, small business audits that involve employment taxes are often referred directly to district offices. 
In addition to liens and levies, IRS collection officials have authority to seize and sell taxpayers’ property, such as cars or real estate. Seizure is generally a last resort to get payment of the amount owed, and the IRS Restructuring and Reform Act now requires a district director’s approval. IRS could not provide information on the number of small businesses undergoing enforced collection actions (i.e., liens, levies, or seizures). However, recent IRS information shows that enforced collections, in general, have declined dramatically since 1997. For example, in fiscal year 1997, IRS made about 10,000 seizures compared to about 2,300 in fiscal year 1998, and fewer than 200 in fiscal year 1999. We requested comments on a draft of this report from the Commissioner of Internal Revenue. On July 23, 1999, we received IRS’ written comments, which discuss IRS’ plans and actions to assist small businesses with their filing and reporting burdens. The comments are reprinted in appendix IV. We also met with IRS officials, including the Deputy National Director of the Public Liaison and Small Business Affairs Office and the Assistant Commissioner for Forms and Submission Processing, to discuss their technical comments, which we incorporated where appropriate. We are sending copies of this report to Senator John Kerry, Ranking Minority Member of your Committee; Senator William V. Roth, Jr., Chairman, and Senator Daniel P. Moynihan, Ranking Minority Member, Senate Committee on Finance, and Representative Bill Archer, Chairman, and Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means. Copies will also be sent to the Honorable Lawrence H. Summers, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; and the Honorable Jacob Lew, Director, Office of Management and Budget. Copies will be made available to others upon request. 
If you or your staff have any questions concerning this report, please contact me or Charlie W. Daniel at (202) 512-9110. Key contributors to this report were Robert Floren and Daniel Lynch. In general, the Internal Revenue Service's (IRS) numerous information systems do not collect or store information by taxpayer groups, such as small businesses. Rather, IRS' current data systems reflect the agency's stovepipe structure and transaction-based business approach. Even if IRS' information systems maintained data by taxpayer groups, obtaining complete account information for a taxpayer would not be easy because IRS' systems are not linked together. Historically, IRS has operated through functions, such as Examination or Collection, and information about taxpayers tended to be developed to serve each function's specific needs and its specific interactions with taxpayers rather than IRS' overall needs or taxpayers' needs. As a result, IRS' various discrete databases provide information pertaining to certain transactions, such as seizures or the filing of income tax returns. The structure of IRS' information systems does not easily allow for a complete assessment of a small business taxpayer's interactions—from filing to postfiling—with IRS. IRS maintains information about taxpayers' filing and compliance histories in masterfile accounts, currently housed in Martinsburg, WV. The majority of the information about taxpayers' filing and compliance histories is stored in two masterfiles—the individual masterfile and the business masterfile. Neither of these files is coded to distinguish small businesses from other taxpayers. To further complicate matters, data on filings and payments by small businesses may be divided between the individual and business masterfile. Data from Schedule C (sole proprietorship), Schedule E (partnership and S corporation shareholder), and Schedule F (farmer) are posted to the individual masterfile. 
Data from Form 1120 for corporations, including S corporations, are posted to the business masterfile. In addition, all employment and excise tax data are posted to the business masterfile. As a result, certain small businesses may have data on both masterfiles. A similar situation exists with postfiling data, such as that pertaining to examination and collection activities. The data are scattered across numerous information systems and are not coded to distinguish small businesses from other taxpayers. The business masterfile accounts are especially complicated, having multiple reporting requirements and being more difficult for IRS to maintain without error or to use to access data. Also, the information on the masterfiles is not complete because other databases may have other related information. For example, income that a bank or other payer has reported to IRS about the business on an information return is not included in the masterfiles. Further, IRS updates the masterfiles weekly, after the transactions have taken place. Most of IRS' compliance systems (e.g., Collection) operate off uploads and downloads of selected taxpayer account information on the masterfiles. These systems are used on-line by IRS employees to assist taxpayers or assess their compliance. But the account information on the systems is limited to the intended purpose, and updates are not reflected until the masterfiles are updated on weekends. The limitations in IRS' information systems prevented us from fully determining the extent to which small businesses actually filed various required forms and schedules and made deposits. The limitations also prevented us from fully determining the extent of small businesses' involvement in IRS' enforcement processes. Many of the IRS databases do not allow for a detailed analysis of the information they contain. We were unable to separate out the four types of small businesses in some of the databases. 
For example, the Form 941 that employers are to use to file their employment taxes does not have information on business assets or gross receipts that would have allowed us to categorize employers by size. Without this information, our alternative was to use information from the business masterfile. Accessing the appropriate tax module in that file might have made it possible to capture information on assets. However, extracting masterfile data is a time- and resource-intensive undertaking that is prone to errors and data reliability problems. It involves requesting IRS' Information Services to provide an extract from various masterfiles, working with the files to validate them, and then merging the data into one file suitable for analysis. IRS receives many internal and external requests for data, and each request must await its turn in the queue. IRS' resources are limited, and the request could have taken many months for the agency to complete. Thus, we decided not to ask IRS to make the extractions for us. Some of the information we sought was not readily available from IRS' compliance databases. For example, IRS' Audit Information Management System (AIMS) database does not identify some small business taxpayers or adequately distinguish small businesses from other businesses. Because AIMS does not include the asset size of partnerships or S corporations, it cannot distinguish between small and other partnerships or S corporations. AIMS does, however, include asset data for sole proprietorships and corporations. IRS' audits do not include all taxpayer contacts that can result in recommended assessments of additional tax. In particular, notices resulting from IRS' information matching and math-error programs are generally not counted as audits, according to IRS officials. In 1997, IRS' information matching program generated about 2.8 million notices, resulting in assessments totaling about $1.5 billion. 
We could not identify any readily available data on the proportion of these assessments directed at small businesses. IRS is taking interim steps to address some of its data problems. IRS expects to develop a way of linking a customer identifier to the case information in 35 of its most important information systems. When this "workaround" is complete, IRS managers and frontline workers should know whether the return or information report data they might be using are for a small sole proprietor, partnership, S corporation, or corporation. However, the interim solution will not provide real-time information about the full range of transactions currently ongoing for a particular taxpayer. Neither will the interim solution link IRS' systems to provide comprehensive information about taxpayers' interactions with the agency. IRS has acknowledged that its systems limitations hinder its ability to effectively manage its activities and serve small businesses and plans to continue making information systems improvements as part of its ongoing modernization and restructuring efforts.

Tax forms and schedules listed in the appendix II tables include:

Form 1040, U.S. Individual Income Tax Return
Form 1065, U.S. Partnership Return of Income
Schedule F, Profit or Loss From Farming
Schedule J, Farm Income Averaging
Form 990C, Farmers' Cooperative Association Income Tax Return
Form 1040-ES, Estimated Tax for Individuals
Form 2210F, Underpayment of Estimated Tax by Farmers and Fishermen
Form 1065, U.S. Partnership Return (information return)
Schedule K-1 (form 1120S), Shareholder's Share of Income, Credits, Deductions
Form 8109, Tax Deposit Coupon, or EFTPS (estimated tax)
Schedule E, Supplemental Income and Loss (part II)
Schedule H (form 1120), Section 280H Limitations for a Personal Service Corporation
Schedule PH (form 1120), U.S. Personal Holding Company Tax
Form 1118, Foreign Tax Credit (form 1120), with Schedule I, Reduction of Oil and Gas Extraction Taxes, and Schedule J, Separate Limitation Loss Allocations, attached
Form 2438, Undistributed Capital Gains (attach to form 1120-RIC or 1120-REIT)

Filing conditions shown in the same tables include:

If a partner in a partnership (even if no income received), to report share of partnership income or loss
If elected to be an S corporation, to report income, gains, losses, etc.
If business has employees (must file on magnetic media if 250 or more Form W-2s)
If business sold or exchanged capital assets
If filing as an individual, estate, or trust and paid certain foreign taxes to a foreign country or U.S. possession
If claiming investment in building rehabilitation, alternative energy, or reforestation
If more than one type of business credit claimed
If required to refigure investment credit (e.g., when investment credit property sold)
If depreciating, amortizing, or expensing certain business property
If deducting losses due to fire, storm, theft, or other casualty
If sold or exchanged business property
If filing as an individual, estate, or trust and claiming deduction for investment interest expense
If claiming the work opportunity credit for wages paid to targeted groups of employees
If incurred loss from specified "at risk" activities (e.g., farming, exploring for oil, others)
If tax on alternative minimum tax income is greater than tax reported on Form 1040
If reporting income from casual sales (other than inventory) where payments are received in a tax year after the year of sale
If claiming the credit for alcohol used as fuel
If claiming the credit for increasing research activities
If claiming gains or losses from (1) section 1256 contracts under the marked-to-market rules (such as regulated futures contracts) or (2) straddles (offsetting positions that decrease the risk of loss)
If disclosing items that are otherwise not adequately disclosed for the purpose of avoiding penalties
If disclosing positions taken on a tax return that are contrary to Treasury regulations
If a shareholder in an Interest Charge Domestic International Sales Corporation and receiving deferred DISC income that increases taxable income
If reporting a net loss from "passive activities" (e.g., most real estate investments)
If an owner of certain low-income housing projects and claiming the credit
If bought or sold a trade or business and goodwill or going-concern value attaches or could attach to assets
If business operates as a broker or barter exchange, to report proceeds from transactions
If made a loan that is a certified indebtedness amount on any mortgage credit certificate
If a casino in the U.S. with annual gross gaming revenues in excess of $1 million, to report currency transactions of $10,000 or more
If a personal holding company
If electing to claim the foreign tax credit (separate forms 1118 must be filed for each of nine limitation categories that apply)
If a Regulated Investment Company or Real Estate Investment Trust and had undistributed capital gains

ATF excise tax forms listed in the appendix II tables include:

Tax Information Authorization
Excise Tax Return
Excise Tax Return–Alcohol and Tobacco (Puerto Rico)
Excise Tax Return–Alcohol and Tobacco (Puerto Rico)
Specific Transportation Bond–Distilled Spirits or Wines Withdrawn for Transportation to Manufacturing Bonded Warehouse–Class Six
Specific Export Bond–Distilled Spirits or Wine
Application and Permit to Ship Puerto Rico Spirits to the United States Without Payment of Tax
Certification of Tax Determination–Wine
Drawback on Wine Exported
Drawback on Beer Exported
Beer for Exportation
Application and Permit to Ship Liquors and Articles of Puerto Rico Manufacture Tax Paid to the United States
Bond, Drawback of Tax on Tobacco Products, Cigarette Papers, or Tubes
Monthly Report–Manufacturer of Tobacco Products
Computation of Tax
Agreement to Pay Tax on Puerto Rican Cigars or Cigarettes
Inventory–Manufacturer of Tobacco Products
Federal Firearms and Ammunition Excise Tax Deposit
Claim for Drawback of Tax on Tobacco Products, Cigarette Papers, or Cigarette Tubes
Claim–Alcohol, Tobacco, and Firearms Taxes
Special Tax Registration and Return (Alcohol and Tobacco)
Application for Tax Paid Transfer and Registration of a Firearm
Application for Tax-Exempt Transfer and Registration of a Firearm
Special Occupational Tax Printing Request
Continuing Export Bond–Distilled Spirits and Wine
Continuing Transportation Bond–Distilled Spirits or Wines Withdrawn for Transportation to Manufacturing Bonded Warehouse–Class Six
Drawback Bond–Distilled Spirits and Wine
Tax Deferral Bond–Beer (Puerto Rico)
Certification of Prepayment of Tax on Puerto Rico Cigars, Cigarettes, Cigarette Papers, or Cigarette Tubes
Report of Multiple Sales or Other Disposition of Pistols and Revolvers
Firearms Transaction Record Part I–Over-the-Counter
Firearms Transaction Record Part I–Low Volume–Over-the-Counter
Firearms Transaction Record Part I–Intra-State Over-the-Counter (English-Spanish)
Firearms Transaction Record Part II–Non-Over-the-Counter
Firearms Transaction Record Part II–Low Volume–Intra-State Non-Over-the-Counter
Floor Stocks Tax Return
1993 Floor Stocks Tax Return (Cigarettes)
Certificate of Taxpaid Alcohol
Drawback on Distilled Spirits Exported
Tax Deferred Bond–Distilled Spirits
Floor Stocks Tax Return–Pipe Tobacco
Federal Firearms and Ammunition Excise Tax Return
Application for Registration for Tax Free Transactions Under 26 USC 4221 (Firearms and Ammunition)
Statement of Adjustment to the Puerto Rico or Virgin Islands Tax Account
Tax Collection Waiver
Certification of Ultimate Vendor for Use in Tax Refund Claim Under Section 6416(b)(2) of the Internal Revenue Code (27 CFR 53.179(b)(iii))
Purchaser's Certificate of Tax-Free Purchase for Use as Supplies for Vessels and Aircraft (27 CFR 53.134(d)(2))
Purchaser's Certificate of Tax-Free Purchase for State or Local Government Use (27 CFR 53.135(c)(1))
Vendor's Certificate of Tax-Free Purchase for Resale for Export (27 CFR 53.133(d)(2))
Vendor's Certificate of Tax-Free Purchase for Resale for Further Manufacture (27 CFR 53.132(c)(2))
Application for Extension of Time for Payment of Excise Tax
Consent to Extend the Time to Assess ATF Excise Tax
Special Tax "Renewal" Registration and Return
Special Tax Location Registration Listing
Special Tax Stamp
Special Tax Registration and Return National Firearms Act (NFA)
Special Occupational Tax Inquiry Letter
IRC Guideline/Worksheet for Late Excise Payment/Deposit or Tax Return
Bond for Spirits or Distilled Spirits or Rum Brought Into the U.S. Free of Tax (used by Virgin Islands)
Bond for Articles Brought Into the U.S. Free of Tax (used by Virgin Islands)

This appendix illustrates a simplified process for auditing tax returns, resolving disputed taxes, and collecting taxes owed. For the small percentage of returns that are audited, most tax issues are resolved during the audit process. However, some audited taxpayers take their disputes over additional taxes to Appeals, and a few seek to resolve their disputes with IRS in the courts.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. 
| Pursuant to a congressional request, GAO provided information on small business tax requirements, focusing on: (1) the federal filing, reporting, and deposit requirements that apply to small businesses; and (2) the actual experience of small businesses in meeting these requirements, including their involvement in the Internal Revenue Service's (IRS) enforcement process. GAO noted that: (1) small businesses, like large businesses, are subject to multiple layers of filing, reporting, and deposit requirements; (2) GAO identified more than 200 different IRS requirements that potentially apply to small businesses; (3) through such requirements, IRS administers a variety of tax policies--notably those associated with income, employment, and excise taxes; (4) in considering the implications of the number of requirements, it is important to recognize that the requirements reflect the many decisions that have been made by Congress and the executive branch to accomplish their policy goals, including those that might benefit small businesses and other taxpayers; (5) it is equally important to recognize that most businesses do not need to comply with all or even most of these requirements; (6) the ones that apply to a particular small business would depend on how the business is organized, whether it has employees, and the nature of its business operations; (7) limitations on IRS' information systems prevented GAO from fully determining the extent to which small businesses filed the various forms and schedules or their involvement in key stages of IRS' enforcement processes; (8) IRS has acknowledged that these limitations hinder its ability to effectively manage small business activities and will continue to be a serious impediment until the systems are improved; (9) GAO
was able to obtain and analyze limited data on small business filings of income tax forms and on some aspects of their involvement in IRS' enforcement processes; (10) GAO's analysis of IRS' 1995 data on the most commonly filed income tax forms and schedules showed that small businesses, on average, filed one secondary form in addition to their primary income tax return, with little variation among types of businesses; and (11) GAO's analysis of small business audits showed that the audit rate for small businesses is higher than the rate for all taxpayers and that about two-thirds of the audits of small businesses result in recommendations for assessment of additional taxes and penalties. |
In 1990, we first designated DOE program and contract management as an area at high risk of fraud, waste, abuse, and mismanagement. In January 2009, to recognize the progress made at DOE's Office of Science, we narrowed the focus of the high-risk designation to two DOE program elements, NNSA and the Office of Environmental Management. In February 2013, our most recent high-risk update, we narrowed this focus to major projects (i.e., projects over $750 million) at NNSA and the Office of Environmental Management. DOE has taken some steps to address our concerns, including developing an order in 2010 (Order 413.3B) that defines DOE's project management principles and process for executing a capital asset construction project, which can include building or demolishing facilities or constructing remediation systems. NNSA is required by DOE to manage the UPF construction project in accordance with this order. The project management process defined in Order 413.3B requires DOE projects to go through five management reviews and approvals, called "critical decisions" (CD), as they move forward from project planning and design to construction to operation. The CDs are as follows:

CD 0: Approve a mission-related need.
CD 1: Approve an approach to meet a mission need and a preliminary cost estimate.
CD 2: Approve the project's cost, schedule, and scope targets.
CD 3: Approve the start of construction.
CD 4: Approve the start of operations.

In August 2007, the Deputy Secretary of Energy originally approved CD 1 for the UPF with a cost range of $1.4 to $3.5 billion. In June 2012, the Deputy Secretary of Energy reaffirmed CD 1 for the UPF with a cost range of $4.2 to $6.5 billion and approved a phased approach to the project, which deferred significant portions of the facility's original scope. According to NNSA documents, this deferral was driven, in part, by the project's multibillion-dollar cost increase and by the desire to accelerate completion of the highest-priority scope.
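The five-gate sequence above lends itself to a simple illustrative model. The following Python sketch is ours, not DOE's: the class and method names are hypothetical, and it captures only the rule that CD gates are reviewed in order (a combined review, such as UPF Phase I's CD 2/3, would simply be two back-to-back approvals).

```python
# Illustrative model of DOE Order 413.3B's critical-decision (CD) gates.
# Class and method names are hypothetical, not taken from DOE guidance.

CRITICAL_DECISIONS = {
    0: "Approve a mission-related need",
    1: "Approve an approach to meet a mission need and a preliminary cost estimate",
    2: "Approve the project's cost, schedule, and scope targets",
    3: "Approve the start of construction",
    4: "Approve the start of operations",
}

class CapitalAssetProject:
    """Tracks which CD gates a project has passed, in order."""

    def __init__(self, name):
        self.name = name
        self.approved = []  # CD numbers approved so far, in sequence

    def approve(self, cd):
        # Gates must be approved in sequence; a combined CD 2/3 review is
        # modeled as two consecutive approvals rather than a skipped gate.
        expected = len(self.approved)
        if cd != expected:
            raise ValueError(f"CD {cd} out of order; next gate is CD {expected}")
        self.approved.append(cd)
        return CRITICAL_DECISIONS[cd]

upf = CapitalAssetProject("UPF Phase I")
upf.approve(0)  # mission need (for the UPF, approved before 2007)
upf.approve(1)  # e.g., CD 1 reaffirmed in June 2012 at $4.2-$6.5 billion
print(upf.approved)  # [0, 1]
```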
In July 2013, NNSA decided to combine CD 2 and CD 3 for the first phase of the UPF. Table 1 shows the UPF's phases, scope of work, cost estimates as of June 2012, and proposed start of operations. However, the future status of the UPF project and the process by which enriched uranium operations at the Y-12 plant will be modernized are unclear. NNSA has recently decided to: (1) delay later UPF phases, (2) assess options other than the UPF for enriched uranium operations at the Y-12 plant, (3) change a key technological requirement for the UPF, and (4) develop a strategy for how NNSA will maintain the Y-12 plant's uranium capabilities into the future. Specifically: NNSA is delaying UPF Phase II and Phase III a minimum of 2 years. In a December 2013 testimony before the Defense Nuclear Facilities Safety Board, the NNSA Acting Administrator said that, due primarily to budget constraints, the agency does not expect to move machining operations (Building 9215) and assembly and dismantlement operations (Building 9204-2E) out of their current facilities until 2038. NNSA previously estimated that these capabilities would be operational in the UPF no later than 2036. NNSA is currently evaluating alternatives to the UPF. In early January 2014, NNSA began to consider options other than the UPF for enriched uranium operations at the Y-12 plant because, according to the UPF Federal Project Director, the project is facing budget constraints, rising costs, and competition from other high-priority projects within NNSA—such as the planned B61 bomb and W78/88 warhead nuclear weapon life extension projects. NNSA has initiated a formal analysis of UPF alternatives—an analytical comparison of the operational effectiveness, costs, and suitability of proposed solutions to address a mission need.
According to the UPF Federal Project Director, the analysis of alternatives, which is scheduled to be completed by April 15, 2014, will include a potential solution for replacing only uranium purification and casting capabilities (Building 9212) by July 2025 at a cost that does not exceed $6.5 billion. According to the NNSA Acting Administrator, NNSA does not plan to continue full operations in Building 9212, which has been operational for over 60 years, past 2025 because the building does not meet modern safety standards, and increasing equipment failure rates present challenges to meeting required production targets. While NNSA is undertaking the analysis of alternatives, the UPF project team is (1) delaying the start of approximately $300 million in site preparation and long lead procurement activities and (2) no longer planning to complete engineering work to have the UPF's design reach the 90 percent complete milestone by August 2014, as previously planned. NNSA is currently evaluating alternatives for a key uranium purification technology originally planned for UPF Phase I, which now may be part of UPF Phase II. In late January 2014, according to the UPF Federal Project Director, NNSA decided to consider switching from its baseline uranium purification technology—which was to be part of the UPF Phase I scope and had been under development since 2005—to a new technology. NNSA believes the new technology will require less space in the UPF and be more efficient to operate. In early February 2014, NNSA directed the UPF contractor to suspend design efforts in two UPF processing areas impacted by the potential technology change. Furthermore, NNSA is now considering installing this new technology as part of the UPF Phase II scope, pending the results of further analysis. NNSA is currently developing a Uranium Infrastructure Strategy for the Y-12 plant.
In early February 2014, the NNSA Deputy Administrator for Defense Programs directed his staff to develop a Uranium Infrastructure Strategy, which establishes the framework of how NNSA will maintain the Y-12 plant's uranium mission capabilities into the future. Key aspects that are to be considered during the strategy's development include, among other things: (1) an evaluation of the uranium purification capabilities and the throughput needed to support requirements for life extension programs and nuclear fuel for the U.S. Navy; (2) an evaluation of the alternatives to the UPF that prioritizes replacement capabilities by risk to nuclear safety, security, and mission continuity; and (3) an identification of existing infrastructure that could serve as a bridging strategy until replacement capability is available in new infrastructure. A draft of the strategy is due to the Deputy Administrator by early April 2014. To assess the maturity of new technologies, DOE and NNSA adopted the use of Technology Readiness Levels (TRL). DOE took this action in response to our March 2007 report (GAO-07-336), which recommended that DOE develop a consistent approach to assessing the extent to which new technologies have been demonstrated to work as intended in a project before starting construction. As shown in table 2, TRLs start with TRL 1, which is the least mature; progress through TRL 4, in which the technology is validated in a laboratory environment; and culminate in TRL 9—the highest maturity level, where the technology as a total system is fully developed, integrated, and functioning successfully in project operations. However, we found that DOE's TRL guidance was not always consistent with best practices followed by other federal agencies, as well as with our prior recommendations. Specifically, DOE's TRL guidance recommended that new technologies reach TRL 6—the level where a prototype is demonstrated in a relevant or simulated environment and partially integrated into the system—at the start of construction (CD 3).
However, best practices followed by other federal agencies and our prior recommendations state that new technologies should reach TRL 7—the level where a prototype is demonstrated in an operational environment, has been integrated with other key supporting subsystems, and is expected to have only minor design changes—at the start of construction. In our November 2010 report, we recommended that the Secretary of Energy evaluate where DOE's guidance for gauging the maturity of new technologies is inconsistent with best practices and, as appropriate, revise the guidance to be consistent with federal agency best practices. In September 2011, DOE issued its revised TRL guidance, but the guidance does not incorporate federal agency best practices and is not fully responsive to our recommendation. Specifically, DOE's revised TRL guidance continues to recommend that new technologies reach TRL 6 at the start of construction, while stating that reaching TRL 7 at the start of construction is a recognized best practice. In November 2010, we reported that NNSA was developing 10 advanced uranium processing and nuclear weapon components production technologies for the UPF. Since that time, NNSA has eliminated 1 technology as the agency removed certain operations from the UPF. In April 2013, NNSA chartered an independent peer review team to examine various aspects of the UPF project, including assessing the current TRLs for new technologies. In an August 2013 report, the independent peer review team found that 6 of the 9 new technologies were not as mature as previously reported in the UPF contractor's May 2013 TRL assessment; the independent peer review report also stated that no fundamental technology showstoppers were identified. In addition, the independent peer review report contained multiple technology development related findings and recommendations, and NNSA and the UPF contractor developed a corrective action plan to address them.
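The disagreement over gating (TRL 6 under DOE guidance versus TRL 7 under other agencies' best practices) reduces to a simple portfolio readiness check at CD 3. A minimal sketch follows; the per-technology TRL values below are placeholders for illustration, not the report's full assessment table:

```python
# Illustrative readiness check against a TRL threshold at the start of
# construction (CD 3). Thresholds reflect the report's discussion: DOE
# guidance recommends TRL 6, while other agencies' best practice is TRL 7.

DOE_GUIDANCE_TRL = 6   # prototype demonstrated in a relevant or simulated environment
BEST_PRACTICE_TRL = 7  # prototype demonstrated in an operational environment

def immature_technologies(assessments, required_trl):
    """Return, sorted, the technologies whose assessed TRL is below the gate."""
    return sorted(name for name, trl in assessments.items() if trl < required_trl)

# Placeholder assessments for illustration (not the report's full table).
assessed = {
    "microwave casting": 5,
    "special casting (entombment)": 3,
    "agile machining": 4,
    "saltless direct oxide reduction": 6,
}

print(immature_technologies(assessed, DOE_GUIDANCE_TRL))
# ['agile machining', 'microwave casting', 'special casting (entombment)']
print(immature_technologies(assessed, BEST_PRACTICE_TRL))  # all four technologies
```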
Table 3 provides a description of the new technologies, the phase in which each technology will be deployed, and the TRL assessments concluded by the UPF contractor in May 2013 and the UPF independent peer review in August 2013. Since our November 2010 report, we identified five additional risks associated with using new technologies in the UPF. Specifically: Integration risks for microwave casting technology. The August 2013 independent peer review team report raised concerns with microwave casting—a process that uses microwave energy to melt and cast uranium metal into various shapes—as it is planned to be integrated into the UPF's casting system. According to a project official, the casting system planned for the UPF will employ uranium processing technology, equipment, and steps that are substantially different than the casting system currently used at the Y-12 plant. For example, microwave casting in the UPF will use glovebox enclosures—a containment system of secured gloves attached to a box that allows workers to process nuclear material inside the box without risk of contamination. The independent review team found that the UPF's planned casting glovebox enclosures (1) are large and somewhat complex, (2) have not had their functionality tested, and (3) have not been demonstrated with microwave casting. As such, the independent review team concluded that microwave casting has not been demonstrated in a relevant environment—a key requirement to reach TRL 6. Technology development risk for special casting technology. In 2012, the UPF project team determined that a different nuclear safety control was needed for special casting—a custom process for casting uranium metal into various shapes—as this process uses more uranium than the regular casting process.
For special casting, the UPF contractor is developing a nuclear safety control called "entombment." The entombment control—which uses multiple parts, such as cylinders and an insulation material—would occupy void volume where molten uranium metal could otherwise collect in the event of an improper casting (i.e., mis-pour). However, a key insulation material planned for use in the entombment control failed an important series of performance tests in fiscal year 2013. According to UPF contractor representatives, this failure—and the resulting need to identify and test an alternative insulation material—is now the project's most significant technology development risk and the primary reason why the special casting technology is currently assessed to be at TRL 3. Transition risks if NNSA switches to a new uranium purification technology. In 2005, NNSA decided to deploy the saltless direct oxide reduction technology—a process that converts uranium dioxide into a usable metal form—into the UPF. The August 2013 UPF independent peer review report concluded that an alternate technology called direct electrolytic reduction and electrolytic refining (DER/ER; currently assessed to be at TRL 3 and 4, respectively) could potentially reduce the UPF's operating costs and produce less radioactive waste compared with the saltless direct oxide reduction technology. In a December 2013 testimony before the Defense Nuclear Facilities Safety Board, the Acting NNSA Administrator stated that early research and development investments in the DER/ER technology are promising and that NNSA is actively seeking to mature and deploy the technology into the UPF to minimize future waste streams. NNSA is now considering installing this new technology as part of the UPF Phase II scope, pending the results of further analysis, and directed the UPF contractor in February 2014 to suspend design efforts in the two UPF processing areas impacted by this potential technology change.
UPF contractor representatives told us that incorporating DER/ER into the UPF would require significant changes to the facility's design. For example, NNSA officials said that changing to DER/ER would require a complete redesign of the processing areas and their equipment, may require adding new support utilities, and would require the UPF to revise its nuclear safety analysis. In addition, the August 2013 UPF independent peer review report found that the UPF project team has not conducted any nuclear criticality studies or developed any nuclear safety controls for the DER/ER technology because the technology was not planned for use in the UPF at the time. Assurance risks that the agile machining technology will work as intended before making key project decisions. As stated earlier, NNSA plans to approve a combined CD 2 (approve cost, schedule, and scope targets) and CD 3 (approve start of construction) milestone for UPF Phase I, which includes the building exterior, all UPF processing areas, and all UPF support systems. In short, UPF Phase I will create key parameters that subsequent UPF phases must work within. Agile machining—a system combining multiple machining operations into a single process that fabricates metal into various shapes—is the key technology planned for UPF Phase II. The August 2013 independent UPF peer review assessed agile machining to be at TRL 4. In December 2013, NNSA decided to no longer fund agile machining technology development efforts because (1) agile machining is not considered a baseline technology for UPF Phase I, as it is part of the deferred scope, and (2) NNSA is considering combining agile machining development efforts with other machining development efforts at the Y-12 plant. According to NNSA officials, the UPF contractor has completed the design for the agile machining prototype, but it will be the responsibility of the UPF Phase II project to mature this technology.
Completing the UPF as planned, however, will require the successful deployment of all new technologies planned for the facility, including those scheduled for UPF Phase II. Risk that the funding mechanism for UPF technology development activities may not be adequate to develop all new technologies. According to NNSA officials and UPF contractor representatives, NNSA has primarily funded UPF technology development activities from the Y-12 plant-directed research and development (PDRD) program, which requires projects from every part of the Y-12 plant to compete for funding. From fiscal year 2005 to fiscal year 2013, the $73 million in UPF technology development costs were funded by non-UPF project sources, such as the PDRD program, instead of from the $1 billion specifically allotted to the UPF project, according to the UPF contractor Project Manager. UPF contractor representatives told us that NNSA made the decision to fund technology development activities from non-UPF project sources because some UPF technologies could be used for other operations at the Y-12 plant or at other nuclear weapon stockpile programs. However, NNSA did not select some UPF technology development projects identified as priorities by the UPF project team for PDRD funding. For example, in fiscal year 2013, 19 projects were considered priorities by the UPF project team, but NNSA did not fund 3 of these projects. For fiscal year 2014, 19 projects were considered priorities by the UPF project team, but NNSA did not fund 7 of these projects. In addition, one of the five UPF technology development risk reduction plans developed by the UPF contractor is to "obtain additional PDRD funding," but the effectiveness of this plan is unclear given that existing UPF technology development projects considered priorities have not received funding. NNSA is currently taking some actions to address three of the five UPF technology risks we have identified.
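The priority-project counts above imply straightforward funding-coverage figures; the sketch below simply restates the report's numbers (19 priority projects each year, with 3 and 7 unfunded, respectively):

```python
# Funding coverage of UPF priority technology development projects under the
# plant-directed research and development (PDRD) program, per the report.

priority_projects = {
    2013: {"priority": 19, "unfunded": 3},
    2014: {"priority": 19, "unfunded": 7},
}

for year, counts in sorted(priority_projects.items()):
    funded = counts["priority"] - counts["unfunded"]
    share = funded / counts["priority"]
    print(f"FY{year}: {funded} of {counts['priority']} priority projects funded ({share:.0%})")
# FY2013: 16 of 19 priority projects funded (84%)
# FY2014: 12 of 19 priority projects funded (63%)
```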
NNSA is currently developing plans and making programmatic decisions about the UPF that could address the other two risks, but it is still too soon to determine if these actions will sufficiently address these two risks. Specifically: Microwave casting technology. To address the risks with integrating microwave casting—a process that uses microwave energy to melt and cast uranium metal into various shapes—into the UPF's casting system, the UPF contractor plans to issue a request for proposal in March 2014 for the development of a prototype microwave casting furnace. UPF contractor representatives said that they have had preliminary discussions with two vendors about building the prototype. According to NNSA officials, the UPF contractor will be required to test the prototype in an integrated UPF configuration that includes glovebox enclosures. We believe that these planned actions, if completed, will help NNSA reduce this risk and identify further mitigation measures that may need to be taken. Special casting technology. For the entombment nuclear criticality safety control planned for special casting—a custom process for casting uranium metal into various shapes—UPF contractor representatives told us that they would like to have a replacement insulation material identified and successfully tested by June 2014. However, NNSA officials said that this June 2014 date is optimistic and may not be met. In addition, UPF contractor representatives said they are currently conducting a formal analysis of alternatives for the entombment control. The UPF contractor expects to finish its analysis by the end of January 2014 and brief the results and potential impacts to senior NNSA management. If NNSA completes these planned actions, we believe that the agency may have taken appropriate actions to address this risk. DER/ER technology.
According to the UPF Federal Project Director, in February 2014, the NNSA Deputy Administrator for Defense Programs directed NNSA's Production Office—the field office responsible for contractor oversight and management of the Y-12 plant—to: (1) create a technology development plan for DER/ER, (2) develop a preliminary cost estimate for DER/ER technology development, (3) identify existing facilities at the Y-12 plant where DER/ER could be deployed, and (4) use DER/ER in actual uranium operations no later than 2021. However, as noted above, the design of the two UPF processing areas impacted by the potential switch to DER/ER has been suspended. According to the UPF Federal Project Director, these areas are considered part of the UPF's deferred scope, which means that (1) the UPF will reserve space for DER/ER capabilities and ensure that all needed support utilities are available in the two processing areas and (2) it will be the responsibility of the UPF Phase II project team to install the DER/ER equipment into the UPF. Given the early stages of NNSA's planning, we believe that it is too soon to determine if these actions will address this technology transition risk. Agile machining technology. The UPF is to modernize and consolidate all enriched uranium operations at the Y-12 plant in three phases. Agile machining—a system that combines multiple machining operations into a single process for fabricating metal into various shapes—is the key technology planned for UPF Phase II. As stated above, NNSA decided in December 2013 to no longer fund agile machining technology development efforts (currently assessed at TRL 4) for multiple reasons. UPF contractor representatives told us that (1) they have completed the design of an agile machining prototype; (2) by the end of 2014, they will outline the actions needed to mature the technology to TRL 6; and (3) the UPF Phase II project team will have the responsibility to mature the technology.
NNSA is currently evaluating alternatives to the UPF, and the outcome of this evaluation may require different agency actions to address this risk. If NNSA decides to continue with a UPF that includes machining operations, the agency will need to take action to address a recommendation from the August 2013 UPF independent peer review team. Specifically, the peer review team recommended that the UPF project fabricate and test an agile machining prototype before starting construction on the UPF. According to the peer review team, implementing this recommendation is a high-priority action and will help ensure the confident integration of the agile machining technology into the UPF at a later date. However, if NNSA decides to construct a facility with only uranium purification and casting capabilities—which do not include machining capabilities—the agency will have to develop alternate plans that detail how machining capabilities will be modernized and how the agile machining technology will be matured. It is too soon to determine if the Uranium Infrastructure Strategy that NNSA is currently developing and scheduled to issue in draft form in early April 2014 will address this issue. Given that NNSA is (1) evaluating potential UPF alternatives, (2) developing its Uranium Infrastructure Strategy, and (3) planning to conduct machining operations in its current facility until at least 2038, we believe it is too early to tell if NNSA is taking appropriate action to address this risk. Technology development funding. The August 2013 UPF independent peer review report recommended that the UPF project fund more technology development activities from project funds instead of PDRD funds. In October 2013, the UPF contractor issued a corrective action plan to address the recommendations from the August 2013 UPF independent peer review.
This corrective action plan: (1) lists planned and ongoing actions that the contractor and NNSA will take to address each recommendation, (2) provides an estimated date by which planned actions are to be completed, and (3) identifies which UPF contractor representative or NNSA official is responsible for completing each planned action. As part of this corrective action plan, the UPF Assistant Project Manager for Technology is responsible for determining which technology development activities should be funded directly with UPF project funds and is to prepare a cost estimate for those activities. The UPF Assistant Project Manager for Technology told us that he expects to complete the planned corrective action by the end of March 2014. Adding required technology development activities to the UPF's cost estimate may increase the project's estimated cost, but it may also improve the accuracy and comprehensiveness of the estimate. If NNSA completes these planned actions, we believe that the agency may have taken appropriate actions to address this risk. Enriched uranium operations at the Y-12 plant play a vital role in the national security of the United States by producing critical components for the nuclear weapons stockpile and by supplying fuel for the U.S. Navy. There is a clear need for NNSA to replace the old, deteriorating, and high-maintenance facilities at the Y-12 plant. NNSA is currently reevaluating the UPF project and may decide to construct a facility that is smaller and contains only select enriched uranium processing capabilities. Whether NNSA continues with the UPF project or chooses to undertake a smaller project, the facility will likely cost billions of dollars, and its ability to meet critical national security needs will depend on the successful development and deployment of new technologies. It is encouraging that NNSA has taken some steps to manage the development of these technologies.
However, as we have detailed in this and other reports, we are concerned that, nearly a decade after it started, the UPF project continues to face key technology-related risks, including the potential transition risks associated with NNSA's recent decision to consider alternatives to a new uranium purification technology. NNSA is in the process of making key programmatic decisions about the UPF and has some ongoing efforts that may address identified risks if fully and successfully implemented. We will continue to monitor NNSA's progress in addressing these risks as part of our UPF critical decision reviews, as directed by the Fiscal Year 2013 National Defense Authorization Act. In addition, NNSA has not taken action to address the two recommendations related to UPF technology development that we made in our November 2010 report: (1) that NNSA ensure that new technologies reach the level of maturity called for by best practices prior to CDs being made on the UPF project and (2) that the agency report to Congress any decisions to approve cost and schedule performance baselines or to begin construction of the UPF without first having ensured that project technologies are sufficiently mature. NNSA generally agreed with those recommendations, and we continue to believe that the best practices followed by other federal agencies for managing technology development, particularly reaching TRL 7 at the start of construction, are important. By not fully incorporating these practices, NNSA may not be able to ensure that the UPF and its other projects can be completed on time and within budget. We are not making any new recommendations in this report. We provided a draft of this report to NNSA for comment.
In written comments (see appendix I), the acting NNSA Administrator stated that NNSA's existing TRL guidance, which encourages the achievement of TRL 7 prior to the start of construction (CD 3), provides adequate protection against the premature commitment of resources while giving projects flexibility to make decisions between seeking TRL 6 and TRL 7. In addition, the acting NNSA Administrator stated that the agency is closely overseeing and managing the UPF's ongoing technology development efforts, using on-site personnel and independent reviews to confirm reported progress while also keeping senior management informed of development efforts. We continue to believe that DOE should fully adhere to best practices in its technology development activities. DOE's guidance recognizes that achieving TRL 7 at the start of construction is a best practice, and DOE encourages—but does not require—projects to achieve this level of readiness. Achieving TRL 7—the level where a prototype is demonstrated in an operational environment, has been integrated with other key supporting subsystems, and is expected to have only minor design changes—does require more time, effort, and money than achieving TRL 6. However, TRL 7 is seen as a best practice because it provides greater assurance that new technologies will work as intended before very significant resource investments are made in construction activities, which for the UPF will total billions of dollars. Notably, DOD follows best practices in this area and recommends that projects reach TRL 7 before production and deployment, the equivalent of beginning construction on a DOE project. Regarding NNSA's oversight of technology development efforts, it is clear that certain NNSA actions, such as the August 2013 UPF independent peer review, helped identify and respond to some technology development issues.
However, since NNSA is currently considering alternatives to UPF and developing the Uranium Infrastructure Strategy for the Y-12 plant, it is too early to determine if the oversight and management actions cited by NNSA will be sufficient to fully address all the risks we identified. NNSA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions on matters discussed in this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. NNSA’s attachment provided technical comments, which we incorporated as appropriate in the final report. David C. Trimble, (202) 512-3841 or [email protected]. In addition to the individual named above, Jonathan Gill, Assistant Director; Patrick Bernard; Antoinette Capaccio; Will Horton; Katrina Pekar-Carpenter; Dr. Timothy Persons; and Ron Schwenn made key contributions to this report.

NNSA conducts enriched uranium activities—including producing components for nuclear warheads and processing nuclear fuel for the U.S. Navy—at the Y-12 National Security Complex in Tennessee. NNSA has identified key shortcomings in the Y-12 plant’s current uranium operations, including rising costs due to the facility’s age. In 2004, NNSA decided to build a more modern facility—the UPF—which will use nine new technologies that may make enriched uranium activities safer and more efficient. In November 2010, GAO reported on the UPF and identified risks associated with the use of new technologies (GAO-11-103).
The Fiscal Year 2013 National Defense Authorization Act mandated that GAO assess the UPF quarterly. This is the third report, and it assesses (1) additional technology risks, if any, since GAO’s November 2010 report and (2) NNSA’s actions to address any risks. GAO reviewed NNSA and contractor documents and interviewed NNSA officials and contractor personnel. GAO is not making any new recommendations. However, NNSA should continue actions to address the two recommendations—which NNSA generally agreed with—in GAO’s November 2010 report related to ensuring that technologies reach optimal levels of maturity prior to critical project decisions. In commenting on a draft of this report, NNSA said its current technology maturation guidance is adequate. GAO has identified five additional risks since its November 2010 report (GAO-11-103) associated with using new technologies in the National Nuclear Security Administration’s (NNSA) Uranium Processing Facility (UPF), which is to be built in three interrelated phases. These risks and the steps that NNSA is taking to address them include the following:

Technology integration risks. An August 2013 UPF independent peer review team concluded that the microwave casting technology—a process that uses microwave energy to melt and form uranium into various shapes—has not been demonstrated in a relevant environment, which is a requirement to reach a key technology maturity milestone. To address this risk, NNSA officials said they plan to accelerate the procurement and environmental testing of a microwave casting prototype.

Technology development risks. A key insulation material planned as a nuclear safety control during uranium casting failed a series of performance tests in fiscal year 2013. According to UPF contractor representatives, this risk is now the project’s most significant technological risk.
To address this risk, these representatives said they are trying to identify a replacement insulation material and exploring the use of a different safety control.

Technology transition risks. NNSA is currently evaluating an alternative technology to the UPF’s baseline uranium purification technology, which has been under development since 2005. The alternative technology may generate less radioactive waste and may be more efficient to operate than the baseline technology. If NNSA switches technologies, NNSA officials said that the UPF contractor (1) will have to redesign the processing area and equipment; (2) may have to add utilities; and (3) will have to revise the UPF’s nuclear safety analysis, creating the potential for further project risks.

Performance assurance risks. NNSA stopped development efforts on a key machining technology, which is part of the UPF’s second phase. As a result, NNSA may not have optimal assurance that the technology will work as intended before starting construction. However, in January 2014, NNSA began (1) reevaluating alternatives to the UPF that may not include machining operations and (2) developing a uranium infrastructure strategy, which is a framework for how NNSA will maintain all uranium capabilities into the future. It is too soon to determine if the draft uranium strategy, scheduled to be issued in April 2014, will outline actions to address this risk.

Funding risk. Instead of using UPF project funds, NNSA has primarily funded UPF technology development activities from a limited research and development program. As a result of budget constraints in this program, for fiscal year 2014, 7 of the 19 technology projects the UPF contractor considered a priority were not funded.
Under a recently developed corrective action plan, the UPF Assistant Project Manager for Technology is responsible for determining which technology development activities should be funded directly with UPF project funds and is to prepare a cost estimate for those activities. This official said he expects to complete these estimates in March 2014.
Mine warfare captured the Navy’s attention during Operation Desert Storm when two Navy warships, the helicopter carrier U.S.S. Tripoli and the guided missile cruiser U.S.S. Princeton, were heavily damaged by Iraqi mines in the Persian Gulf in February 1991. The combined damage to these two ships, which totaled about $21.6 million, was caused by two mines—one estimated to cost $10,000 and the other about $1,500. Naval mines are extremely economical weapons and are readily available on the world’s arms market. The Navy has identified naval mine countermeasures—the ability to detect and disable enemy sea mines—as a critical element for establishing maritime superiority to ensure access to ports, keep sea lanes open, and support amphibious assaults. During the Cold War, the major factor in developing mine countermeasures capabilities was the ability to clear Soviet-laid mines from U.S. harbors to enable U.S. ships to break out of U.S. ports. With the fall of the Soviet Union, however, the threat of enemy mining in U.S. coastal waters has greatly diminished. Changing world conditions have caused U.S. defense planning to shift from a concept of global conventional war to a concept of regional conflicts and crises. The 1992 Navy Mine Warfare Plan detailed four critical mine warfare lessons learned from Operation Desert Storm and the actions taken by the Navy in response to those lessons. The first major lesson was that the Navy lacked a unified command structure. The mine countermeasures commander’s staff consisted of 23 individuals assembled from 21 different commands, resulting in a command staff that was ill-prepared for its task. Fortunately, the 4 months in theater before actual clearance operations provided for adequate command staff and mine countermeasures force training. 
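The cost asymmetry cited above can be checked with a quick back-of-envelope calculation. The following is an illustrative sketch using only the dollar figures quoted in this report; it is not part of the original report:

```python
# Cost asymmetry from Operation Desert Storm, using figures cited in this report:
# ~$21.6 million in combined damage to the U.S.S. Tripoli and U.S.S. Princeton,
# caused by two mines costing roughly $10,000 and $1,500.
damage_dollars = 21_600_000
mine_cost_dollars = 10_000 + 1_500

ratio = damage_dollars / mine_cost_dollars
print(f"Damage inflicted per dollar of mine cost: roughly {ratio:,.0f} to 1")
```

On these figures, the two mines inflicted nearly $1,900 in damage for every dollar they cost, which underscores why the report describes naval mines as extremely economical weapons.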
The Navy has since consolidated operational command of all mine warfare forces in the Commander, Mine Warfare Command, who reports administratively and operationally to the Commander in Chief, U.S. Atlantic Fleet. His responsibilities include ensuring the readiness of the mine warfare assets, enhancing the integrated training of all mine warfare forces, conducting training exercises with other fleet units, and commanding mine warfare forces when deployed to military operations. The Mine Warfare Command is located at the Naval Air Station, Corpus Christi, Texas. Mine warfare ships are homeported nearby at Naval Station, Ingleside, Texas. Plans to move all mine hunting helicopters from Alameda, California, and Norfolk, Virginia, to Corpus Christi have not been finalized. A second lesson learned from Operation Desert Storm was the need to improve the readiness of mine warfare forces. Since that time, the Navy has conducted or participated in about a dozen exercises with U.S. and foreign naval battle groups. Mine warfare training courses have been expanded for both enlisted and officer personnel, and career paths for enlisted minemen have been revised to enhance opportunities for long-term tours of duty in mine warfare. Third, the Navy acknowledged the need to identify and acquire the necessary resources to carry out its mine countermeasures mission. In 1994, the Navy took delivery of the last of 14 mine countermeasures (MCM) ships and acquired the first 2 of 12 planned mine hunter, coastal (MHC) ships. In addition, the Navy is converting a helicopter landing ship to a mine countermeasures command, control, and support (MCS) ship. Last, the Navy recognized that it has very limited systems to counter mines in various water depths. Consequently, the Navy has established several research and development projects to address these limited capabilities. 
Sea mines are explosive devices hidden in the sea that can be detonated either by direct contact or indirectly at a distance by the acoustic, seismic, or magnetic signatures of passing ships. The mines can be floating, moored, bottom-laying, or buried. Sophisticated mines are equipped with electronic sensors designed to ignore certain types of ships and target others or count a specific number of ships before arming and detonating. The various methods for countering mine threats include detection and avoidance, mine hunting, influence minesweeping, and mechanical minesweeping. Mine hunting is the process of detecting, locating, and identifying mines through the use of sonar. Influence minesweeping activates electronic sensors within the mines using towed magnetic or acoustic sweep gear to detonate mines at a safe distance. Mechanical minesweeping involves the physical removal of mines using sweep wire to drag mines or cutting gear to release and float tethered mines for later detonation. The Navy’s primary mine countermeasures forces consist of ships, helicopters, and explosive ordnance disposal units. The Avenger class MCM ship, the larger and more capable of the two classes of mine countermeasures ships, is a 224-foot ocean-going mine warfare ship designed to clear mines in both coastal and offshore areas. (See fig. 1.1.) The hull is constructed of wood and glass-reinforced plastic to maintain a nonmagnetic character, which is essential to mine clearing operations. The MCM is capable of both mine hunting and minesweeping—both mechanical and influence—and is designed for conducting mine countermeasures operations worldwide. Major on-board systems include the mine hunting sonar, unmanned submersible mine neutralization vehicle, precise integrated navigation system, and standard magnetic/acoustic influence minesweeping system. The MCM ships are designed to travel at a speed of 13.5 knots. 
However, depending on the distance, the Navy might use heavy-lift ships to transport MCM ships to a battle site in a timely manner, which would also reduce engine wear and tear en route. The MCM ship program, which is managed by the Mine Warfare Ship Program Office, Naval Sea Systems Command, cost $1.8 billion over a period of 10 years. The first of 14 MCM ships was commissioned in September 1987, and the last was commissioned in November 1994. The MCM ships have a crew of 8 officers and 75 enlisted personnel. The Osprey class MHC ship, the smaller of the two classes of mine countermeasures ships, is 188 feet long and designed specifically to clear harbors and coastal waters. (See fig. 1.2.) The MHC hull is constructed of glass-reinforced plastic to provide the necessary low-magnetic character. The Mine Warfare Ship Program Office also manages the MHC ship program. The role of the MHC has always been more limited than that of the larger MCM. The MHC class of ships was designed primarily to conduct mine hunting and mechanical minesweeping within U.S. harbors and coastal waters. These ships were originally designed to be nondeployable coastal mine hunters that would have a maximum mission capability length of 5 days. However, the MHCs can be deployed and operated for longer periods of time, as long as they are provided with fuel and supplies from a close support ship. In addition, the Navy has made some ship alterations to expand the storage capacity of the MHC. The MHC ship program, which is in the production phase, will cost about $1.5 billion. The first of 12 MHC ships was commissioned in November 1993 and the second in August 1994. The Navy took delivery of the third MHC in April 1995. Construction of the 12th MHC ship began in September 1994, and delivery is scheduled in fiscal year 1999. The MHC ships have a crew of 6 officers and 46 enlisted personnel.
To provide command and control functions, serve as a platform for helicopters, and support supply and logistics operations, the Navy Mine Warfare Command began converting the helicopter landing ship U.S.S. Inchon to an MCS ship in March 1995. When this conversion is completed in about March 1996, at a cost of more than $118 million, the U.S.S. Inchon will be capable of carrying an MCM Group Commander and staff and supporting long-endurance airborne, surface, and underwater MCM operations. (See fig. 1.3.) The U.S.S. Inchon, which is 25 years old, has an expected lifespan of about 10 more years. The Navy has tentative plans to design and build a new MCS ship early in the next century. The Navy’s airborne mine countermeasures assets consist of 24 MH-53E Sea Dragon helicopters and their related sweep gear. (See fig. 1.4.) The Sea Dragon, the largest heavy-lift helicopter in the West, is capable of towing a variety of minesweeping and mine hunting countermeasures gear. The airborne forces enhance surface forces by providing rapid response and deployment capability as well as the ability to sweep wider areas of the sea in a shorter time. These forces are consolidated in Squadron HM-14 based in Norfolk, Virginia, and Squadron HM-15 based in Alameda, California. Each of these squadrons is made up of 12 MH-53E helicopters. The Mine Warfare Command plans to consolidate its airborne mine warfare helicopter squadrons at Naval Air Station, Corpus Christi, Texas. Squadrons report operationally to the Commander, Mine Warfare Command. Fifteen explosive ordnance disposal units of eight personnel (one officer and seven enlisted) each report operationally to the Commander, Mine Warfare Command. These units are made up of underwater divers and demolitions experts who are trained and equipped to locate, identify, explode, disable, recover, and dispose of mines. 
Once mines have been located by surface or airborne forces, the units move in and detonate the mines safely or disable and retrieve them for future study. In addition, these units are capable of supporting surface and airborne mine countermeasures operations. The Navy is pursuing a number of different projects to develop new mine countermeasures capabilities or improve existing capabilities. These programs are largely developed at the Naval Coastal Systems Station in Panama City, Florida, and administered out of the Program Executive Office for Mine Warfare in Arlington, Virginia. At the request of the Chairman, Subcommittee on Military Research and Development, House National Security Committee, we examined the steps the Navy is taking to ensure a viable, effective naval force that will be ready to conduct mine countermeasures in two nearly simultaneous major regional conflicts overseas. Specifically, we evaluated the (1) status of the Navy’s research and development programs, (2) readiness of the Navy’s on-hand mine countermeasures assets, and (3) match between the Navy’s mine countermeasures assets and its mine countermeasures requirements. To determine the status of the Navy’s mine warfare research and development projects, we examined the Navy’s operational requirements documents and met with program managers to gather data on those systems the Navy is developing to meet its requirements. Further, we examined past and projected budget data to identify the funding history of the projects and estimate the delivery dates of the projects to the fleet. To determine the readiness of ships, we reviewed Status of Resources and Training System reports, high-priority requisitions, Mine Readiness Certification Inspections, and other data related to mission capability. We discussed problem parts and unreliable systems with the Mine Warfare Command, the Shore Intermediate Maintenance Activity, and the Chief of Supply, and we identified efforts to resolve these problems. 
We conducted a detailed analysis of the Mine Warfare Commander’s priority lists of problem systems and equipment affecting the MCM and MHC ship classes. To determine whether the Navy has identified the type and quantity of assets needed to carry out its mine countermeasures mission, we discussed the need for mine countermeasures ships and support vessels with the Commander, Mine Warfare Command. We also reviewed and analyzed reports, testimony, and requirements studies published between 1989 and 1995 by the Deputy Chief of Naval Operations, Center for Naval Analyses, Naval Audit Service, and Department of Defense (DOD) Inspector General. We visited three MCM ships, the U.S.S. Defender, the U.S.S. Gladiator, and the U.S.S. Scout, in Ingleside, Texas. We also performed our work at the Shore Intermediate Maintenance Activity, Ingleside, Texas; the Mine Warfare Command, Corpus Christi, Texas; the Office of the Deputy Chief of Naval Operations, the Naval Sea Systems Command, the Naval Air Systems Command, the Program Executive Office for Mine Warfare, the Office of Naval Research, the Bureau of Naval Personnel, and the Office of the Director of Naval Reserves, Washington, D.C.; the Office of the Commander in Chief, Atlantic Fleet Headquarters, Norfolk, Virginia; the Center for Naval Analyses, Alexandria, Virginia; and the Naval Coastal Systems Station, Panama City, Florida. We performed our review between July 1994 and July 1995 in accordance with generally accepted government auditing standards. Critical limitations in the Navy’s ability to conduct mine countermeasures at various water depths that were identified during Operation Desert Storm still exist today, and the Navy is pursuing several projects to address these limitations. 
However, it has not developed a long-range plan that identifies a baseline of its systems’ current capabilities and weaknesses or establishes priorities among its competing projects to sustain the development and procurement of the most needed systems. One of the significant limitations demonstrated during Operation Desert Storm was the Navy’s inability to conduct mine countermeasures in shallow waters. This capability is one of the Navy’s greatest challenges and key priorities. The Navy’s current plans to bring additional systems on line beyond 2001 in support of amphibious assaults are uncertain. The capability to conduct naval mine countermeasures is a critical element in ensuring that the Navy can project military power from the sea onto the world’s beaches in military operations. Operation Desert Storm demonstrated, and subsequent independent studies conducted by the Naval Studies Board of the National Academy of Sciences (1993) and the Johns Hopkins University Applied Physics Laboratory (1994) have documented, that no single system can provide the Navy with the capability to conduct mine countermeasures at all water depths due to the complexity of mine warfare operations and the various mines that the Navy may encounter. Therefore, the Navy must develop a set of complementary systems and tactics to effectively carry out its mine warfare operations. The mine warfare community is currently developing about 18 different projects to enhance its capability to conduct mine countermeasures at all water depths. 
These projects include enhancing the mine countermeasures ships’ and helicopters’ mine hunting sonars to provide greater area coverage and improve their capability to detect and classify enemy mines; upgrading the ships’ and helicopters’ minesweeping systems to provide greater output to destroy mines and improve serviceability; upgrading the ships’ mine neutralization system to give the ships an immediate capability to destroy identified mines; developing a mine neutralization system for the MH-53E helicopters to be used with the airborne mine hunting sonar system; and developing the capability to neutralize mines and obstacles in the surf zone. The Navy’s current approach to developing the mine warfare research and development projects has been inefficient. According to Navy officials, many of the projects have had to compete for limited financial resources, and the Navy has had to make tradeoffs among them. The Navy has started and stopped some projects repeatedly over different fiscal years to respond to changing priorities, and these repeated starts and stops have resulted in schedule delays. For example, officials explained that the airborne mine hunting sonar system (AN/AQS-20) program has experienced starts and stops that have resulted in a delay in the system’s initial operating capability. The Navy began to develop this system in the late 1970s, yet has still not brought this system on line. Officials further explained that the Navy has had to place different management teams on this project over the years and that the program has suffered from the lack of continuity in expertise. Moreover, current procurement plans for this sonar system will only allow the Navy to fund procurement of two to three systems per year, despite the fact that mine countermeasures helicopters deploy in squadrons of four.
According to mine countermeasures officials, the mine warfare community will consequently have to maintain support simultaneously for two different mine hunting systems until all of the helicopters are outfitted with the upgraded sonar. The airborne mine neutralization system program has also experienced starts and stops since the program began in the mid-1970s. This program was dormant during Operation Desert Storm. It was restarted in fiscal year 1992 but canceled in fiscal year 1993. Funds were restored in fiscal year 1996. Sustaining limited financial resources for priority programs will likely become even more challenging in the future. The independent studies conducted after Operation Desert Storm by the Naval Studies Board of the National Academy of Sciences and the Johns Hopkins University Applied Physics Laboratory concluded that the use of modeling and simulations could assist the Navy in identifying its mine countermeasures priorities. A long-range plan addressing the gaps and limitations in the Navy’s mine warfare capabilities, especially its shallow water capabilities, could help the Navy maximize its limited financial resources and ensure sustained funding of its priority systems. After Operation Desert Storm, the Navy determined that its inability to clear mines and other obstacles in shallow waters is one of its greatest challenges. The Navy needs to develop this capability because enemy forces can easily lay mines and obstacles in shallow waters, since this area is closest to their shorelines and because surf action causes many mines to partially or totally bury, making them harder to detect. Without a shallow water mine countermeasures capability, the only alternative for amphibious forces would be to avoid an enemy minefield and make an approach in another area. The risk associated with this maneuver, however, is that enemy forces might intend for U.S. troops to make an amphibious landing right into harm’s way. 
The Navy cannot operate its mine countermeasures ships in very shallow water due to the risk of running aground or damaging their hulls. The Navy would also have difficulty towing its mine sweeping gear because of the likelihood that the gear would snag on the bottom of the ocean. The Navy is currently developing six mine countermeasures systems to clear mines and obstacles in shallow water. Since Operation Desert Storm, however, the Navy has not added any of these systems to its fleet. Moreover, the Navy has not made final decisions about additional systems to conduct mechanical sweeping, hunt for buried mines, or perform reconnaissance of mines in very shallow water. In addition, the Navy is only developing the capability to counter light and medium obstacles and has not decided what it will do to counter heavy obstacles. The mine warfare program is experiencing budget constraints, and the Navy has not fully funded its shallow water mine countermeasures projects, even though it identified this area as a priority. The Navy plans to spend about $317 million between fiscal years 1991 and 2001 in the development of its shallow water projects. However, budget documents, as of February 1995, show that unmet requirements for fiscal years 1997 through 2001 will total about $99.5 million. This figure may be understated because the Navy still has to make final decisions on some projects. Appendix I shows the Navy’s shallow water mine countermeasures projects and the shortfalls associated with each project. In addition to funding shortfalls, some of these projects are experiencing technical and developmental delays. The Navy’s Distributed Explosive Technology (DET) and Shallow Water Assault Breaching System (SABRE) programs are examples of two of these projects. Initially, the Navy planned to destroy enemy mines in the surf zone by deploying these systems from the beach into the water. 
The Navy has since changed its strategy and is now planning to deploy these systems from the water onto the beach off of Landing Craft Air-Cushion vehicles. This change in strategy has resulted in an initial operating capability delay of about 2 years. Due to this decision, the Navy had to redesign the rocket propulsion mechanisms to deliver these systems to the targeted area and conduct additional testing to examine the impact of launching DET and SABRE from an unstable platform. In another example, the Navy does not anticipate making final decisions about its Explosive Neutralization Advanced Technology Demonstration program until fiscal year 1998. This program is intended to enhance the capability of the DET and SABRE programs and increase the safety of Navy personnel either by using an unmanned glider to deploy the systems or enhancing the capability of Landing Craft Air-Cushion vehicles to deploy DET and SABRE from a greater distance. DOD agreed that critical limitations in the Navy’s ability to conduct mine countermeasures that were identified during Operation Desert Storm still exist today. DOD also agreed with our emphasis on the complexity of mine countermeasures and the fact that no one system can handle the mine countermeasures requirement for all types of mines at all water depths. Reliability problems and parts shortages continue to affect the readiness and performance capabilities of the Navy’s MCM ships. The Navy has been working to overcome shortcomings associated with the engines, sonars, generators, winches, and other critical systems and has made progress in resolving some of the more serious problems. However, a number of the ships’ systems and equipment are still not as reliable as predicted, and parts shortages persist. Mine warfare officials indicated that it would be several more years before all the necessary improvements could be made to the MCM ships because of the additional costs to fix the problems and changes in the ships’ schedules. 
The MHC ships, some of which are currently being delivered to the Navy’s fleet, are also experiencing similar reliability and supportability problems. The Mine Warfare Commander is committed to having eight MCM ships capable of deploying immediately to carry out mine countermeasures missions in two major regional conflicts occurring nearly simultaneously. The Navy uses detailed criteria to objectively determine whether each ship is fully capable of performing the wartime mission for which it is designed. As of July 1995, no MCM ship was rated fully capable of performing its mine countermeasures mission. Instead, Navy status reports show that MCM ships generally possess the resources and have accomplished the training necessary to undertake major portions of wartime mine countermeasures missions. The Mine Warfare Commander stated that each MCM ship did not have to be fully capable of performing all missions. He said that commanding officers provide a subjective assessment of their ships’ ability to perform their wartime missions and that the effectiveness rating goal was 80 percent. The Commander further commented that some ships’ mission effectiveness ratings could be increased quickly by cannibalizing missing parts from other ships. He also said that some ships that were missing certain systems or equipment could be used for portions of missions that did not require those systems or equipment that were inoperable. The Commander acknowledged that achieving acceptable mission effectiveness rates for the MCM ships has been difficult because the ships’ systems and equipment have broken down more often than expected and the Navy emphasized production schedules and program costs when building the ships and failed to order sufficient quantities of spare parts to support the ships after they became operational. 
He agreed that the MCM ships have had serious problems and that they were continuing to have problems, but he emphasized that progress was being made and that problems were being fixed. However, reliability problems continue to cause some MCM systems to experience more downtime than the Navy average, result in high-priority requisitions for mission-essential parts, and affect crew training. Several of the systems on the MCM ships have experienced periods of inoperability that exceed the Navy average of 15 days. These reliability shortfalls have affected the ships’ engines, combat systems, and other critical systems and equipment for several years. The foreign-made engine, in particular, has had a history of problems involving the cylinder heads, bearings, crankshafts, and actuators. Whenever the failure of a ship’s system or equipment affects the ship’s primary mission and repair is not possible within 48 hours, a report is prepared and entered into a tracking system; downtime exceeding 30 days is categorized as being unresolved for an excessive period of time. Downtime can affect the Navy’s ability to train its crews and meet readiness goals. Management reports, which track systems and equipment downtime, indicate that downtime for MCM ships continues to be significant. The Navy assigns a high-priority code to a ship’s order for parts to repair mission-essential systems and equipment if the ship cannot perform some or all of its missions while waiting for the replacement parts. About 16 percent of all requisitions by Navy ships are considered high priority. Each of the MCM ships has experienced periods in which it could not perform some or all of its missions while waiting for replacement parts ordered with a high-priority designation. From February 1994 to January 1995, the MCM fleet averaged 392 high-priority requisitions per month, or 28 per month for each ship. In some months, over 600 high-priority requisitions for mission-essential parts were processed. 
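The fleet-wide requisition figures above can be cross-checked with simple arithmetic. The following is an illustrative sketch, assuming the full complement of 14 commissioned MCM ships; it is not part of the original report:

```python
# High-priority requisition rates for the MCM fleet, February 1994 to January 1995,
# using the figures cited in this report.
fleet_monthly_high_priority = 392  # average high-priority requisitions per month, fleet-wide
mcm_ships = 14                     # all 14 MCM ships were commissioned by November 1994

per_ship_per_month = fleet_monthly_high_priority / mcm_ships
print(f"Average high-priority requisitions per ship per month: {per_ship_per_month:.0f}")
```

The result, 28 requisitions per ship per month, is consistent with the per-ship average reported above.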
Table 3.1 shows the number of total and high-priority requisitions processed from February 1994 to January 1995. The Mine Warfare Commander agreed that spare parts shortages, particularly shortages of those high-priority parts that affect mission capability, have been a concern since delivery of the first MCM ship and that the shortages have been made worse because systems and equipment have not been as reliable as predicted. The Navy has been making extraordinary efforts to correct its MCM supply support deficiencies. Over the past year, the overall percentage of high-priority requisitions for MCM ships has been reduced to the same level as the rest of the Navy (16 percent). The Mine Warfare Commander acknowledged that reliability shortfalls and inadequate supply support have had negative effects on crew training. He said, however, that crew rotation schedules were the primary cause of some ships not having fully trained crews and that training was sufficient to meet planned wartime commitments. At times, failures in critical systems and equipment have prevented ships from participating in planned training. For example, in September 1994, we monitored an exercise in the Gulf of Mexico (JTFX-95) from the U.S.S. Defender and the command center at Corpus Christi. We observed that the U.S.S. Dexterous and the U.S.S. Champion had engine problems and were unable to participate in the exercise and that the U.S.S. Warrior could only perform some missions after a lightning strike knocked out its sonar. The U.S.S. Defender was the only ship to participate fully and received a satisfactory evaluation for its performance in locating training mines placed in the Gulf of Mexico. The Mine Warfare Commander said that the performance of MCM ships in a May 1995 training exercise off the coast of Denmark (Blue Harrier 95) indicated significant improvement in the reliability of the ships.
Although the postexercise evaluation was still underway, the Commander said the MCM ships’ reliability and performance were outstanding. The Navy has identified causes of reliability and supportability problems, initiated corrective actions, and resolved some of the problems. Navy officials commented that the MCM ships are operating longer periods of time without mission-degrading failures of the systems and equipment. However, documents show that the Navy is still in the process of identifying and quantifying the corrective actions needed and that technological challenges and funding shortages will make it difficult to address all of the necessary improvements. The Mine Warfare Command has been concerned about the reliability shortfalls of its ships’ engines, sonars, generators, winches, and other critical systems and equipment for several years. In early 1994, the Command established a priority list of key systems and equipment with problems and gave special attention to implementing long-term solutions. The list included 17 problems affecting the entire class of MCM ships. The Command has had some success with its efforts. For example, improved engine governor drives were expected to be installed on all MCM ships during fiscal year 1995, and improved water piping systems will be installed as each ship undergoes periodic maintenance. After delivery of the last MCM in November 1994, the Navy began giving priority attention to the reliability and supportability problems affecting MCM ships by establishing an admirals’ oversight council. The council is giving the highest priority to identifying and executing solutions to reliability shortfalls and ensuring that corrective actions are being identified and coordinated among responsible officials. Mine Warfare Command officials cited engine problems, inoperative combat systems, and inadequate supplies of parts among the key areas that need immediate attention. 
The main propulsion plant on MCM ships, which consists of four turbo-charged, 600-horsepower diesel engines, has been prone to catastrophic failures and poor reliability. The problems were so severe that during 1994 the Navy considered buying replacement engines. However, the Navy determined that this approach was not cost-effective and decided to fix the engine problems. Navy documents indicate that several factors have contributed to the engine problems, including an undersized water jacket cooler that causes the engine to overheat; fuel, oil, and exhaust leaks; and a poorly designed drive train. In addition, Navy officials said the fuel injection pump, thermocouple system, and cylinders were failing at high rates and needed immediate attention. The Navy partially resolved these problems by changing the operating profile of the engines to a cruising speed of 8 knots and replacing engine governor drives with improved drives. As of July 1995, the Navy had redesigned all drive train components and developed improved return lines, gaskets, clamps, and injection pump valves. The Navy plans to install improved versions on all ships by December 1995. The Navy is also developing a larger water jacket cooler. Although no formal replacement schedule has been developed, the Mine Warfare Commander estimates that this problem will be corrected by 1997. These actions, although helpful, have not solved all of the engine’s problems. The Navy is still determining how much funding will be needed to make the required modifications. The Navy will then have to seek this funding through future budget requests. For the long term, the admirals’ oversight council directed the Deputy Program Manager for Mine Warfare Ship Programs to explore the feasibility of purchasing replacement engines when the current engines are beyond economical repair and address the problem of obtaining funding for the replacement engines.
Mine Warfare Command officials identified problems with certain key mine countermeasures combat systems that need priority attention to determine their causes and funding for proposed solutions. Among these problems, the officials noted that the Navy has not allocated funds to upgrade the navigation system on its MCM ships. Precise navigation is critical because the ships must be able to communicate the exact location of any mines found to other ships in the area. The Navy has an upgraded version of its AN/SSN-2(V)4 precise integrated navigation system. According to the Mine Warfare Commander, funding will be made available, and the Navy plans to have the system on all MCM ships by December 1997. Navy officials commented that the admirals’ oversight council was giving priority attention to improving supply support for specific systems and equipment, and Navy documents show that progress is being made. For example, the officials said that parts for the foreign-made engine would soon be bought exclusively from U.S. manufacturers. Nevertheless, parts shortages are expected to persist for some time in part because the ships have multiple configurations of systems and equipment. For example, the AN/SQQ-32 sonar suite has two variants that operate essentially the same way but require very different maintenance and parts support. Navy officials said they were trying to determine if funding could be made available to standardize combat system configurations and address other key problems. A Mine Warfare Command supply officer identified the most troublesome spare parts shortages that were continuing to affect operations. The officer provided a list of 15 out-of-stock parts that were causing operational problems and downtime for the engines, minesweeping gear, air conditioner, sonar system, sewage system, and main control console. Table 3.2 lists these parts.
It is too soon to fully assess the capability rates of the entire class of newer MHC ships because the Navy had received only three MHCs as of May 1995. Nevertheless, in early 1994, the Mine Warfare Command identified five problem areas affecting the entire class of MHC ships. The admirals’ oversight council has also included the MHC in the scope of its work. The MHCs contain many of the same systems found on the MCMs and therefore will require the same corrective action in certain cases. For example, early versions of the MHC will have to be backfitted with improved versions of the variable depth sonar and mine neutralization system. Later versions will have the improved versions installed during production. In other cases, problems may be even more acute on the MHC. For example, Navy documents indicate that communications problems on MHC ships are more serious than those on MCM ships. MHC ships, originally designed to hunt mines off the U.S. coast, are equipped only with high-frequency radios. Since the Navy has decided that MHC ships should now be deployable overseas, satellite communications will be essential. The Navy has funding available in fiscal years 1996 and 1997 to correct the deficiencies with off-the-shelf communications equipment. However, technicians are concerned that the MHC ships may not have room for antennas or additional radio equipment and are exploring the possibility of replacing the radios with small circuit cards to perform this function. DOD agreed with our finding that reliability and supportability problems have affected the mission capability of its mine warfare ships. According to DOD, the Navy has initiated various actions that have significantly improved systems reliability. DOD also commented that the Navy is incorporating improvements into the newer ships as they are built to improve their reliability and supportability and has adopted a revised maintenance philosophy that is enhancing operational availability. 
The Navy is continuing its MHC procurement program at a total cost of about $1.5 billion, even though the original mission of the MHCs has largely diminished with the dissolution of the former Soviet Union. Further, the Navy is continuing this procurement program at the same time that it has other unmet critical needs, including the need to develop its shallow water mine countermeasures programs. As of September 1995, 3 of 12 planned MHC ships had been delivered to the Navy. The remaining nine ships are currently under construction and are expected to be completed by fiscal year 1999. Moreover, the MHC ship, which the Navy is currently planning to operate as a naval reserve asset, has fewer capabilities than the larger MCM ships that already exist in the Navy’s fleet. In addition, the Navy has plans to acquire a new MCS ship early in the next century. In the interim, the Navy is spending more than $118 million to modify an existing amphibious warfare ship to provide mine warfare assets with command, control, and support. The conversion is expected to be completed about March 1996. Although it is essential to provide the necessary command, control, and support during military operations, it is not necessary to have a ship dedicated solely for this effort because other ships or shore-based facilities could provide the function. The Navy’s current estimate to operate and maintain each MHC is $3.6 million per year. Further, Navy officials estimate that it will cost the Navy $4.5 million annually to operate and maintain the MCS ship. The savings that would be achieved by removing some of these ships from the Navy’s inventory could assist the Navy in achieving its other unmet critical mine countermeasures requirements. The MHC ship was initially intended to protect U.S. coastlines from Soviet mines and was not developed with an overseas mission in mind. 
By design, this ship class was not intended to transit across the ocean under its own power or operate on station for long periods of time, thereby reducing its ability to be a viable asset in overseas operations. In addition to its limited capabilities, the Navy is planning to make the MHC ship a reserve asset, which will further limit its role as an overseas asset. The MHC ship, which is smaller and has more limited capabilities than the Navy’s larger MCM ships, was designed to protect U.S. coastlines. The MHC ships were not intended to transit the ocean under their own power and would have to be transported by heavy-lift ships to be used in overseas contingencies. Currently, these ships can only operate at sea for a maximum of 5 days and depend on shore-based facilities for resupply. In addition, the MHC ships are limited in their missions. These ships were originally designed to conduct mine hunting operations only, although the Navy has plans to add a mechanical sweep, which will provide the MHC ships with the capability to physically remove moored mines. Mine countermeasures assets have generally been assigned to the Naval Reserve Force. The Navy plans to continue this practice by placing 11 of the 12 MHC ships in the Naval Reserve Force, which will further limit their role in future overseas operations. Generally, about 15 to 20 percent of the crew, or 8 of 52 personnel assigned to the ship, will be reservists. For the ships to serve as platforms to provide training to reservists, the ships need to be located near the reserve population serving on those ships. Therefore, it would be impractical to position these ships in overseas locations. Mine countermeasures crises during the mid-1980s and early 1990s demonstrated the need to provide mine warfare assets with command, control, and support. 
The Navy’s 1992 and 1994-95 mine warfare plans state that airborne and surface mine countermeasures assets require a dedicated ship for maintenance and logistics support during overseas deployments. The Navy believes that a platform is also necessary for the mine countermeasures group’s commander and staff to enhance communication with the battle group and theater commanders. However, command, control, and support can be provided from other Navy ships or from shore-based locations. Officials at the Mine Warfare Command informed us that the Navy plans to acquire one new MCS ship early in the next century. This plan, however, is tentative because no formal acquisition program is in place and no budget has been submitted for this effort. In addition, the Navy would have to shift the use of assets and rely on shore-based facilities or other naval platforms for command, control, and support during two nearly concurrent major regional conflicts because one MCS ship would not be able to support both simultaneously. The Navy is in the process of modifying the U.S.S. Inchon, an existing amphibious warfare ship, as an interim measure to provide command, control, and support to air and surface mine countermeasures forces. The Navy does not plan to have the U.S.S. Inchon and the new MCS ship in the fleet at the same time. The U.S.S. Inchon, which is already 25 years old, will gain only about 10 years of additional service life once it is converted. The Navy expects that the conversion will be completed about March 1996 at a cost of more than $118 million. As of August 1995, the Navy had already committed $99 million of the conversion dollars. The Navy estimates that operating and maintaining each MHC ship will cost $3.6 million annually. This figure includes the cost for personnel, unit operations, fuel, direct maintenance, and other indirect costs.
The Navy could achieve significant savings by removing some of the ships from its inventory and address its other critical needs by applying these savings to those programs. However, the Navy is not currently exploring other options for the MHC ships. In May 1995, the DOD Inspector General reported that the Navy could deactivate 5 of the 12 planned MHC ships and put to better use $69.2 million that would be required to operate and maintain the ships during fiscal years 1996 through 2001. In addition, the Inspector General identified an additional $11 million, or $2.2 million per ship, that the Navy would unnecessarily spend to upgrade equipment on the five MHC ships between fiscal years 1996 and 2001. These upgrades include improving communications systems and installing reliability improvements on the propulsion systems. The Navy could also declare the ships to be excess capacity and explore the possibility of transferring the excess MHC ships to allied countries through the foreign military sales program. Although we did not assess the world market for mine countermeasures ships, we did note during the course of this evaluation that a number of countries around the world possess mine countermeasures fleets. Navy officials further estimate that it will cost $4.5 million annually to operate and maintain the U.S.S. Inchon. As with the case of the MHC ships, savings could also be achieved if the Navy were to decide to remove this platform from its fleet. However, because the Navy would still have to provide command, control, and support services from other Navy ships or shore locations and incur costs in doing so, it is more difficult to estimate the savings to be achieved. DOD partially agreed with our finding that the MHC’s short on-station time and reserve status would limit its role in overseas locations. DOD responded that a contract modification was in place that would increase the at-sea operational time. 
However, DOD also responded that the bulk of the MHC-class ships will ultimately be assigned to the reserve forces. DOD did not agree with our finding that a dedicated MCS ship is not essential, stating that the Navy has long held the tenet that a ship that provides effective command and control needs to be deployed with the operating forces. We acknowledged in this report that command, control, and support are essential during military operations. However, we also reported that these functions could be provided from other Navy ships or shore-based locations. Therefore, we do not believe the need for an MCS ship is as great as other more pressing needs, such as the need to develop the capability to conduct shallow water mine countermeasures. DOD agreed with our finding that cost savings could be achieved by reducing the inventory of mine warfare ships, but did not agree that reducing the inventory of ships is a viable option. As discussed above, we and others believe that reducing the inventory of ships is a viable option. DOD noted that the actual annual savings associated with not operating additional MHC ships, projected at $3.6 million each, would not be completely realized due to decommissioning and deactivation costs. As previously noted, the DOD Inspector General included deactivation costs in estimated cost savings and projected a 5-year, $69.2 million cost savings after deducting deactivation costs. The experience of Operation Desert Storm revealed significant weaknesses in the Navy’s ability to conduct effective sea mine countermeasures, and the damage sustained by two Navy warships during that operation clearly demonstrated the role that enemy sea mines and obstacles can play in military operations. The Navy has since undertaken a number of projects to improve its mine countermeasures capabilities. However, critical limitations and delays in the delivery of new capabilities remain.
The Navy is pursuing a number of different projects to enhance current capabilities and develop new ones; however, it has not undertaken a total systems approach to identify a baseline of capabilities, develop alternatives, and establish priorities among those alternatives. Many of these projects have historically experienced starts and stops and are continuing to experience delays in delivery. Although the Navy has identified the ability to conduct mine countermeasures in shallow water depths as a key priority, it still has only very limited capabilities in this area. Many of the shallow water mine countermeasures projects are underfunded. The Navy has finished procuring 14 MCM ships. However, the ships are experiencing significant reliability problems and parts shortages, which affect their readiness and performance capabilities. The Navy has been working to overcome these shortcomings and has made progress in resolving some of the more serious problems. However, mine warfare officials have stated that it would be several more years before all the necessary improvements could be made due in part to limited available funding. At the same time, the Navy is continuing to procure 12 MHC ships, despite the fact that the original mission of the MHC has greatly diminished. The Navy estimates that it will cost $3.6 million per year to operate and maintain each of these ships. The Navy is also converting an amphibious ship to serve as an MCS ship. It will cost the Navy approximately $4.5 million per year to operate and maintain this ship. One of the lessons learned from Operation Desert Storm highlighted the importance of providing mine countermeasures assets with the necessary support. However, the functions that this ship will provide could be provided from other ships or on-shore locations. 
The Navy cannot afford to support all of its mine countermeasures projects within its mine warfare budget without continuing to experience future delays in delivering new capabilities. However, opportunities exist to realign the Navy’s mine warfare budget to direct funding toward its most critical needs. If the Navy were to deactivate five MHC ships, the Navy would save about $18 million annually. These savings, if applied to the Navy’s shallow water program, would greatly reduce the $99.5 million in budget shortfalls that the Navy has identified in that program. If the Navy were to deactivate the MCS ship as well, the Navy could achieve additional savings, although these savings are more difficult to estimate. To improve the Navy’s readiness to conduct mine countermeasures, we recommend that the Secretary of the Navy develop a long-range plan to identify the gaps and limitations in the Navy’s mine countermeasures capabilities; establish priorities among the competing projects and programs, including those in research and development; and sustain the development and procurement of the most critical systems. The Secretary of the Navy should direct particular attention to those systems required to improve the Navy’s shallow water mine countermeasures capabilities. We also recommend that the Secretary of the Navy improve the readiness of ocean-going mine countermeasures ships. If the Navy finds that the funds necessary to sustain critical research and development and improve the readiness of ocean-going mine countermeasures ships are not available, the Navy should consider using funds that otherwise would be used to operate and maintain some of the MHC ships. DOD agreed with our recommendations that the Secretary of the Navy develop a long-range plan to sustain the development and procurement of the most critically needed mine warfare systems and improve the readiness of the ocean-going MCM ships. 
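The savings arithmetic above (five deactivated MHC ships at $3.6 million each in annual operating and maintenance costs, weighed against the $99.5 million shallow water shortfall) can be sketched as a quick check. All dollar figures come from the report; only the variable names are illustrative:

```python
# Sketch of the savings arithmetic described above. The $3.6 million
# per-ship O&M cost, the 5 candidate ships, and the $99.5 million
# shallow-water shortfall all come from the report.

O_AND_M_PER_MHC = 3.6e6        # annual operating/maintenance cost per MHC ship
SHIPS_DEACTIVATED = 5
SHALLOW_WATER_SHORTFALL = 99.5e6

annual_savings = O_AND_M_PER_MHC * SHIPS_DEACTIVATED
print(f"Annual savings: ${annual_savings / 1e6:.1f} million")  # $18.0 million

# Years of these savings needed to offset the identified shortfall
years_to_cover = SHALLOW_WATER_SHORTFALL / annual_savings
print(f"Years to offset shortfall: {years_to_cover:.1f}")
```

The first line reproduces the report's figure of about $18 million in annual savings; the second shows that roughly five and a half years of such savings would cover the identified shallow water shortfall.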
However, DOD did not agree that the last five MHC ships should not be operated and added that the possibility of using cost savings from deactivating these ships to support other aspects of the Navy’s mine warfare program is not an option. We question the need to operate additional MHC ships given the funding shortage in the mine warfare budget, which is causing projects addressing unmet mine countermeasures needs to go unfunded. Since critical areas in Navy mine countermeasures capabilities remain unmet, we believe these areas should have higher priority than operating additional MHC ships.

Pursuant to a congressional request, GAO reviewed the Navy’s efforts to improve its ability to conduct effective sea mine countermeasures (MCM) in two simultaneous major regional conflicts, focusing on the: (1) status of the Navy’s research and development projects; (2) readiness of the Navy’s present MCM equipment; and (3) match between the Navy’s planned and on-hand MCM equipment and its MCM requirements.
GAO found that: (1) the Navy must develop different systems to cover deep- and shallow-water mine clearing operations, and its shallow-water MCM capability is limited; (2) the Navy has about 18 different projects to address its MCM weaknesses, but has not set clear priorities among its mine warfare programs; (3) a long-range plan could help the Navy maximize its limited financial resources and ensure ongoing funding of its priority systems; (4) the Navy has experienced delays in new systems’ deployment and has identified shortfalls of at least $99.5 million in its shallow-water projects’ development; (5) the Navy’s 14 oceangoing MCM ships have long-standing equipment reliability problems and parts shortages, which hinder mission performance; (6) the Navy is resolving the ships’ problems, but that will take several more years; (7) the Navy is spending about $1.5 billion for 12 coastal, non-oceangoing mine hunting ships that are no longer needed, and will spend an average of $3.6 million annually to operate and maintain each of them; (8) the Navy plans to acquire a new MCM command, control, and support ship early in the next century and, in the interim, convert an older helicopter carrier at a cost of $118 million, but other existing ships and onshore locations could fulfill mission requirements at a lower cost; and (9) the Navy could save millions of dollars by deactivating some of the coastal ships and the command support ship.
Credit unions are tax-exempt, cooperative financial institutions run by member-elected, primarily volunteer boards. Credit unions do not issue stock to build capital; as not-for-profit entities, they build capital through retained earnings. Their tax-exempt status and cooperative, not-for-profit structure separate credit unions from other depository institutions. Like banks and thrifts, credit unions are either federally or state chartered. Prior to the financial crisis, the credit union system consisted of three tiers, as shown in figure 1. As of December 31, 2007, there were 8,101 credit unions, 27 corporate credit unions, and 1 wholesale corporate credit union—U.S. Central Federal Credit Union (U.S. Central). Credit unions are owned by individual members (natural persons) who make share deposits and are provided with products and services, such as lending, investments, and payment processing. Credit unions are subject to limits on their membership because members must have a “common bond”—for example, working for the same employer or living in the same community. Corporates are owned by and serve credit unions. Corporates provide payment processing services and loans for liquidity purposes and serve as repositories for credit unions’ excess liquidity, among other things. In particular, when loan demand is low or deposits are high, credit unions generally invest excess liquidity in corporates and then withdraw funds when loan demand is high or deposits are low. Corporates meet liquidity needs with member deposits and by borrowing from U.S. Central, capital markets, or the Federal Home Loan Banks. U.S. Central, which was primarily owned by the corporates, functioned as a corporate for the corporates, providing the same depository and other services to corporates that corporates provide to credit unions. U.S. Central was the agent group representative for the Central Liquidity Facility (CLF), which we discuss later in this section. U.S.
Central also acted as an aggregator of corporate credit union funds, which allowed them better access to the markets at better rates. While the corporate system—including both U.S. Central and the corporates—was designed to meet the needs of credit unions, the corporates face competition from other corporates and financial institutions that can provide needed services. For instance, credit unions may also obtain loans and payment processing from Federal Reserve Banks. In addition, credit unions can obtain investment products and services from broker-dealers or investment firms rather than corporates. Credit union service organizations (CUSO) also compete with corporates and offer, among other things, investments and payment processing. As we reported in 2004, corporates seek to provide their members with higher returns on their deposits and lower costs on products and services than can be obtained individually elsewhere. Credit unions and corporates are insured by NCUSIF, which provides primary deposit insurance for 98 percent of the nation’s credit unions and corporates. NCUA administers NCUSIF, collects premiums from credit unions and corporates to fund NCUSIF, and ensures that all credit unions operate in a safe and sound manner. NCUA is required to maintain NCUSIF’s equity ratio at no less than 1.2 percent and no more than 1.5 percent of insured shares. In addition, NCUA provides oversight of the CLF, which lends to credit unions experiencing unusual loss of liquidity. Credit unions can borrow directly from the CLF or indirectly through a corporate, which acts as an agent for its members. U.S. Central was the primary agent for the CLF and was the depository for CLF funds until August 2009, when NCUA changed its investment strategy for the liquidity facility. NCUA supervises and issues regulations on operations and services for federally chartered credit unions and for both state- and federally chartered corporates.
NCUA has supervisory and regulatory authority over both state- and federally chartered corporates because they provide services to federally insured credit unions. In addition, NCUA shares responsibility for overseeing state-chartered credit unions to help ensure they pose no risk to the insurance fund. NCUA classifies corporate supervision into three types (Types I, II, and III) based on asset size, investment authorities, complexity of operations, and influence on the market or credit union system. For example, a corporate with Type III supervision generally has billions of dollars in assets, exercises expanded investment authorities, maintains complex and innovative operations, and has a significant impact in the marketplace and on the credit union system. NCUA assigns a full-time, on-site examiner to corporates with Type III supervision. For state-chartered credit unions, NCUA also reviews state supervisory agency examinations, performs off-site monitoring, and conducts joint examinations of credit unions with state supervisory agencies. As part of its on-site examinations, NCUA assesses a credit union’s exposure to risk and assigns risk-weighted ratings under the CAMEL rating system. The ratings reflect a credit union’s condition in five components: capital adequacy, asset quality, management, earnings, and liquidity. Each component is rated on a scale of 1 to 5, with 1 being the best and 5 the worst. The five component ratings are then used to develop a single composite rating, also ranging from 1 to 5. Credit unions with composite ratings of 1 or 2 are considered to be in satisfactory condition, while credit unions with composite ratings of 3, 4, or 5 exhibit varying levels of safety and soundness problems. A similar rating system, known as the Corporate Risk Information System, is used to assess the corporates. NCUA has the authority to take an enforcement action against credit unions and corporates to correct deficiencies identified during an examination or as a result of off-site monitoring.
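The CAMEL scale described above can be sketched as a small classifier. The five component names, the 1-to-5 scale, and the satisfactory cutoff at a composite of 1 or 2 come from the text; note that in practice the composite is an examiner judgment, not a mechanical function of the components, so everything else here is illustrative:

```python
# Hedged sketch of the CAMEL rating scale described above. The component
# names, the 1-5 scale, and the "satisfactory at composite 1 or 2" cutoff
# come from the report; the function and variable names are illustrative.

COMPONENTS = ("capital adequacy", "asset quality", "management",
              "earnings", "liquidity")

def condition(composite: int) -> str:
    """Map a composite CAMEL rating (1-5) to the report's description."""
    if composite not in range(1, 6):
        raise ValueError("composite rating must be 1-5")
    return "satisfactory" if composite <= 2 else "safety and soundness problems"

print(condition(2))  # satisfactory
print(condition(4))  # safety and soundness problems
```

The cutoff at 2 mirrors the report's statement that composite ratings of 1 or 2 indicate satisfactory condition while 3, 4, or 5 indicate varying levels of safety and soundness problems.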
NCUA can issue letters of understanding and agreement, which are agreements between NCUA and the credit union or corporate on certain steps the institution will take to correct deficiencies. NCUA can also issue preliminary warning letters, which are directives to a credit union or corporate to take certain actions to correct deficiencies. Further, NCUA can issue a cease-and-desist order, which requires a credit union or corporate to take action to correct deficiencies. Although not considered an enforcement action, NCUA examiners also can issue documents of resolution to record NCUA’s direction that a credit union or corporate take certain action to correct a deficiency or issue within a specified period. NCUA also has a number of options for dealing with a credit union or corporate that has severe deficiencies or is insolvent. It can place the institution into conservatorship—that is, NCUA takes over the credit union’s or corporate’s operations. After NCUA assumes control of the institution’s operations, it determines whether the credit union or corporate can continue operating as a viable entity. To resolve a credit union or corporate that is insolvent or no longer viable, NCUA may merge it with or without assistance, conduct a purchase and assumption, or liquidate its assets. In an assisted merger, a stronger credit union or corporate assumes all the assets and liabilities of the failed credit union or corporate with NCUA providing financial incentives or an asset guarantee. In a purchase and assumption, another credit union or corporate purchases specific assets and assumes specific liabilities of the failed corporate or credit union. In liquidation, NCUA sells the assets of a failed credit union or corporate. PCA is a comprehensive framework of mandatory and discretionary supervisory actions for credit unions. PCA is based on five categories and their associated net worth ratios—that is, capital as a percentage of assets (see table 1).
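The net worth ratio classification that PCA rests on can be sketched as a simple classifier. The idea of five categories keyed to capital as a percentage of assets comes from the text (and its table 1); the specific cutoffs used here (7, 6, 4, and 2 percent) are the statutory values under the Federal Credit Union Act and should be read as assumptions rather than figures drawn from this extract:

```python
# Sketch of the PCA net worth categories described above. The category
# scheme (capital as a percent of assets, five tiers) comes from the
# report; the numeric cutoffs are the statutory Federal Credit Union Act
# thresholds, stated here as assumptions.

def pca_category(net_worth: float, assets: float) -> str:
    """Classify a credit union by its net worth ratio (capital / assets)."""
    ratio = 100.0 * net_worth / assets  # net worth as a percent of assets
    if ratio >= 7.0:
        return "well capitalized"
    if ratio >= 6.0:
        return "adequately capitalized"
    if ratio >= 4.0:
        return "undercapitalized"
    if ratio >= 2.0:
        return "significantly undercapitalized"
    return "critically undercapitalized"

print(pca_category(net_worth=7.5, assets=100.0))  # well capitalized
print(pca_category(net_worth=3.0, assets=100.0))  # significantly undercapitalized
```

As the text goes on to note, falling below the well capitalized tier triggers a requirement to increase retained earnings, and the three lowest tiers trigger mandatory supervisory actions.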
If a credit union falls below the well-capitalized level (7 percent net worth), it is required to increase retained earnings. When NCUA determines that a credit union is in the undercapitalized, significantly undercapitalized, or critically undercapitalized category, NCUA is required to take additional mandatory supervisory actions. In addition to these mandatory supervisory actions, NCUA often imposes discretionary supervisory actions. Discretionary supervisory actions apply to credit unions that fall into the undercapitalized category or below and include requiring NCUA approval for acquisitions or new lines of business, restricting dividends paid to members, and dismissing the credit union's board members or senior management. Before 2010, U.S. Central and other corporate credit unions were not subject to PCA but were instead required to maintain total capital at a minimum of 4 percent of their moving daily average net assets. Total capital for U.S. Central and corporate credit unions was calculated using any combination of retained earnings, paid-in capital, or membership capital. If total capital fell below this level, NCUA required U.S. Central or the corporate to submit a capital restoration plan. If the capital restoration plan was inadequate or the corporate failed to complete it, NCUA could issue a capital directive. A capital directive orders the corporate to take a variety of actions, including reducing dividends, ending or limiting lending in certain loan categories, ending or limiting the purchase of investments, and limiting operational expenses, in order to achieve adequate capitalization within a specified time frame. From January 1, 2008, to June 30, 2011, 5 corporates and 85 credit unions failed. The five failed corporates—U.S.
Central, Western Corporate (Wescorp), Members United, Southwest, and Constitution—were some of the largest institutions within the corporate system, although the credit unions that failed were relatively small. Specifically, these five failed corporates accounted for 75 percent of all corporate assets as of December 31, 2007 (see fig. 2). In contrast, the 85 credit unions that eventually failed represented around 1 percent of all credit unions and less than 1 percent of total credit union assets as of December 31, 2007. NCUA's OIG MLRs of the failed corporates and our analysis of historical financial data for the corporate system show that management of both U.S. Central and the failed corporate credit unions made poor investment decisions. Specifically, U.S. Central and the failed corporates overconcentrated their investments in private-label, mortgage-backed securities (MBS), investing substantially more in private-label MBS than corporate credit unions that did not fail (see fig. 3). At the end of 2007, the five failed corporates had invested 31 to 74 percent of their assets in private-label MBS. In particular, Wescorp and U.S. Central had invested 74 percent and 49 percent, respectively, of their portfolios in private-label MBS. In contrast, 10 of the 23 remaining corporates had also invested in private-label MBS but at lower levels, ranging from 1 to 19 percent. These high concentrations of private-label MBS exposed the failed corporates to the highs and lows of the real estate market, and they experienced significant losses when that market declined. Furthermore, corporates had significant deposits in U.S. Central, which led to indirect exposure to its high concentration of private-label MBS and to losses when it failed. For example, in 2007, Members United had invested more than 40 percent of total assets in U.S. Central, and Southwest and Constitution had each invested approximately 30 percent of total assets, according to the MLRs.
In addition to poor investment decisions, the business strategies that U.S. Central and the other four failed corporates pursued contributed to their failure. Specifically, their management implemented business strategies to attract and retain credit union members by offering lower rates on services and higher returns on investments. According to the MLRs, U.S. Central shifted toward an aggressive growth strategy to maintain and increase its market share among corporates. This strategy led its management to increase its holdings of high-yielding investments, including private-label MBS. From 2006 to 2007, U.S. Central's assets grew by 22 percent as members invested their liquid funds in return for competitive rates. The other failed corporates implemented similar business strategies. The financial crisis exposed the problems in the corporates' investment and business strategies, leading to a severe liquidity crisis within the credit union system. Specifically, the downturn severely diminished the value of and market for private-label MBS, and depositors lost confidence in the corporate system because of the institutions' substantial investment in these securities. The decline in value of these investments resulted in corporates borrowing significant amounts of short-term funds from outside the credit union system to meet liquidity needs as credit unions reduced their deposits. However, these options became limited when credit rating agencies and lenders lost confidence in individual corporates and some lines of credit were suspended. For example, from 2007 to 2009, credit rating agencies downgraded U.S. Central's long- and short-term credit ratings, and in 2009, the Federal Reserve Bank of Kansas City downgraded its borrowing ability. Eventually, the deterioration of the underlying credit quality of the private-label MBS led to the corporates' insolvencies.
According to our analysis of NCUA's and its OIG's data, the 85 credit union failures were primarily the result of poor management. Management of failed credit unions exposed their institutions to increased operational, credit, liquidity, and concentration risks, which it then failed to properly monitor or mitigate. The following describes these risks and provides examples of how exposure to them led to the failure of a number of credit unions. Operational risk includes the risk of loss due to inadequate or failed internal controls, due diligence, and oversight. We found that management's failure to control operational risk contributed to 76 of the 85 failures. For example, Norlarco Credit Union's management had weak oversight policies and controls for an out-of-state construction lending program and failed to perform due diligence before entering into a relationship with the third party responsible for managing it. Norlarco's management allowed the third party complete control in making and overseeing all of the credit union's residential construction loans, leading to a decline in borrower credit quality and underreported delinquencies. Potential losses from its residential construction loan program led to Norlarco's insolvency. Management's failure to control operational risk can also create the potential for fraud. We analyzed NCUA's and its OIG's data and found that fraud or alleged fraud contributed to 29 of the 85 credit union failures. According to NCUA, credit unions with inadequate internal controls are susceptible to fraud. In addition, NCUA's internal assessments of fraud showed that its examiners often had cited inactive boards or Supervisory Committees, limited numbers of staff, and poor recordkeeping before fraud was discovered at the failed credit unions.
For example, the OIG reported that Certified Federal Credit Union's internal controls were severely lacking, enabling the chief executive officer to report erroneous financial results to the credit union's board and in quarterly call reports. According to the MLR, before the fraud was identified, the credit union's board was weak and unresponsive to repeated reports of inaccurate accounting records and weak internal controls from NCUA examiners and external auditors. The credit union was involuntarily liquidated in 2010. NCUA OIG officials told us that other indicators of potential fraud include high ratios of investments to assets and a low number of loan delinquencies. Credit risk is the possibility that a borrower will not repay a loan or will default. We found that management's failure to control credit risk contributed to 58 of the 85 credit union failures. For example, Clearstar Financial Credit Union's management originated and funded a significant number of loans that were poorly underwritten—that is, made to borrowers with poor credit histories. Management then compounded these mistakes by extending delinquent loans and following poor collection practices, contributing to the credit union's eventual failure. Moreover, management at some failed credit unions did not consistently monitor the credit risk associated with member business loans (MBL). With some limitations, credit unions can lend to their members for business purposes. However, these loans can be risky for credit unions. For example, NCUA reported in recent congressional testimony that, due to a lack of credit union expertise and challenging macroeconomic conditions, over half of the losses sustained by the NCUSIF during a two-year period in the late 1980s were related to MBLs. Our analysis of NCUA's and its OIG's data indicated that MBLs contributed to 13 of the 85 credit union failures.
According to our analysis of historical financial data, failed credit unions had more MBLs as a percentage of assets than peer credit unions that did not fail or the credit union industry as a whole (see fig. 4). In addition, more than 40 percent of failed credit unions participated in member business lending. By comparison, NCUA testified that only 30 percent of all credit unions participated in member business lending as of March 31, 2011. Liquidity risk is the risk that a credit union may not be able to meet expenses or cover member withdrawals because its assets are illiquid. We found that liquidity risk contributed to 31 of the 85 credit union failures. For example, the management of Ensign Federal Credit Union relied on a $12 million deposit to fund credit union operations. However, when the deposit was withdrawn in 2009, the credit union lacked other funding sources to meet normal member demands and operational expenses, contributing to the credit union's failure. Concentration risk is excessive exposure to certain markets, industries, or groups. While some level of concentration may be unavoidable, it is the responsibility of management to put in place appropriate controls, policies, and systems to monitor the associated risks. We found that concentration risk contributed to 27 of the 85 credit union failures. For example, High Desert Federal Credit Union's management began expanding its real estate construction lending in 2003, and by 2006, its loan portfolio had more than doubled, from $73 million to $154 million. In 2006, construction lending accounted for more than 60 percent of the credit union's loan portfolio. When the housing market collapsed, the credit union's concentration in real estate construction loans led to its insolvency. In addition to the management weaknesses at corporates and credit unions, NCUA's examination and enforcement processes did not result in strong and timely actions to avert the failures of these institutions.
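The concentration measures cited above (MBLs as a percentage of assets, construction loans as a share of the loan portfolio) are simple ratios. A minimal sketch, with a dollar figure invented to be consistent with the High Desert example in the text:

```python
def concentration_pct(holding, base):
    """A holding expressed as a percentage of some base, such as total
    assets or the total loan portfolio."""
    return 100.0 * holding / base

# High Desert's 2006 loan portfolio was $154 million, with construction
# lending above 60 percent of the portfolio; the $93 million holding here
# is an invented figure consistent with that share.
print(round(concentration_pct(93.0, 154.0), 1))  # 60.4
```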
The OIG found that stronger and timelier action on the part of NCUA could have reduced losses from the failures of U.S. Central and the four other failed corporates. NCUA examiners had observed the substantial concentrations of private-label MBS at U.S. Central and three of the four other failed corporates prior to 2008 but did not take timely action to address these concentrations. For example, NCUA examiners observed Wescorp's growing concentration in private-label MBS beginning in 2003, but they did not take action to limit or otherwise address the issue until 2008. Similarly, the OIG's material loss review of Southwest Corporate notes that NCUA's March 2008 examination concluded, "current and allowable MBS exposures are significant given the unprecedented market dislocation… Southwest's exposure is clearly excessive." However, the MLR did not indicate that NCUA issued a document of resolution or enforcement action to address Southwest's high concentration. In the case of Constitution Corporate, the MLR noted that NCUA took enforcement action to address concentration limits prior to failure. Similar to its findings for the corporate failures, the OIG found weaknesses in NCUA's examination and enforcement processes for 10 of the 11 failed credit unions for which it conducted MLRs. In particular, the OIG stated that "if examiners acted more aggressively in their supervision actions, the looming safety and soundness concerns that were present early-on in nearly every failed institution, could have been identified sooner and the eventual losses to the NCUSIF could have been stopped or mitigated." The OIG made a number of recommendations to address the problems that the financial crisis exposed. For example, to better ensure that corporate credit unions set prudent concentration limits, the OIG recommended that NCUA provide corporate credit unions with more definitive guidance on limiting investment portfolio concentrations.
Based on the credit union failures, the OIG recommended that NCUA take steps to strengthen its examination process by, among other things, improving the review of call reports and of third-party relationships, following up on credit union actions taken in response to documents of resolution, and improving the quality control review process for examinations. Appendix I contains more information on the status of NCUA's implementation of the OIG's recommendations. NCUA took actions to stabilize, resolve, and reform the corporate system and to minimize the costs of its intervention. NCUA based these actions on four guiding principles: to avoid any interruption of services provided by corporate credit unions to credit unions; to prevent a run on corporate shares by maintaining confidence in the overall credit union system; to facilitate a corporate resolution process in line with sound public policy that is at the least possible cost to the credit unions over the long term, while avoiding moral hazard; and to reform the credit union system through new corporate rules with a revised corporate and regulatory structure. NCUA established a number of measures to ensure that corporates had access to liquidity. To resolve the failed corporates, NCUA placed five corporates—U.S. Central, Wescorp, Members United, Southwest, and Constitution—into conservatorship and isolated their nonperforming assets. To reform the system, NCUA enacted new rules to address the causes of the failures, assessed credit unions for corporate losses, forecasted the impact of future assessments through scenario tests, and took measures to reduce moral hazard. Through these actions, NCUA attempted to resolve the corporates' losses at the least possible cost. However, we could not verify all of NCUA's estimated losses from the corporate and credit union failures.
To provide liquidity, NCUA used two existing funds—NCUSIF and the CLF—and, based on legislative changes, created a temporary fund—the Temporary Corporate Credit Union Stabilization Fund (Stabilization Fund). NCUA also created four new programs—the Credit Union System Investment Program (CU-SIP), the Credit Union Homeowners' Affordability Relief Program (CU-HARP), the Temporary Corporate Credit Union Liquidity Guarantee Program (Liquidity Guarantee Program), and the Temporary Corporate Credit Union Share Guarantee Program (Share Guarantee Program). See appendix III for more information about these programs. NCUA used NCUSIF to provide liquidity to the corporate system. As stated earlier, U.S. Central had experienced substantial losses, impairing its ability to provide liquidity to the credit union system. In December 2008, NCUA provided an NCUSIF loan to U.S. Central to cover an end-of-year liquidity shortfall. The loan was outstanding for 3 days and then fully repaid. In January 2009, NCUA placed a $1 billion capital note in U.S. Central. NCUSIF subsequently wrote off this note when it determined that credit losses on the private-label MBS held by U.S. Central had impaired the full value of the note. To avoid compromising its borrowing authority with Treasury, NCUA changed the CLF's investment strategy in mid-2009. Specifically, before 2009, the CLF's funds from subscribed capital stock and retained earnings were placed in a deposit account with U.S. Central, the CLF agent. However, given U.S. Central's insolvency, NCUA moved these funds out of U.S. Central and invested them with Treasury in 2009 to avoid an adverse accounting treatment that would have reduced the fund's member equity and ultimately limited its borrowing authority with Treasury. Because the CLF was restricted from lending directly to corporates, NCUA then used funds from NCUSIF to lend $5 billion to U.S. Central and $5 billion to Wescorp. By October 2010, U.S.
Central and Wescorp had repaid their loans to NCUSIF using funds raised primarily from the sale of more than $10 billion in unencumbered marketable securities that sold near their par value in August and September 2010. In addition, NCUA used a temporary fund created by Congress in 2009 to help increase liquidity in the system. In May 2009, Congress passed the Helping Families Save Their Homes Act, which, among other things, created a temporary fund to absorb losses from corporates. Specifically, the act created the Stabilization Fund, which replaced NCUSIF as the primary source to absorb the corporates' losses. The act also amended the Federal Credit Union Act to give NCUA the authority to levy assessments over the life of the Stabilization Fund to repay the corporates' losses instead of repaying them in a lump sum. In addition, it increased NCUA's borrowing authority with Treasury up to $6 billion through a revolving loan fund to be shared between the Stabilization Fund and NCUSIF. CU-SIP and CU-HARP. Under these programs, the CLF lent money to credit unions, which invested the proceeds in participating corporates; the corporates then used the funds to pay down their external debt, freeing up assets that had been posted as collateral against the debt. In exchange for participating in the programs, the corporates were required to pay CLF borrowing costs to credit unions and an additional fee to the credit unions as an incentive for them to participate in the programs. CLF lending to credit unions totaled approximately $8.2 billion under CU-SIP and about $164 million under CU-HARP. All borrowings for both programs were repaid in 2010. Liquidity Guarantee Program and Share Guarantee Program. NCUA created these two temporary guarantee programs in late 2008 and early 2009 to help stabilize confidence and dissuade withdrawals by credit unions, in an attempt to avoid a run on the corporates.
These programs provided temporary guarantees on certain new unsecured debt obligations issued by eligible corporates and on credit union shares held in corporates in excess of $250,000. Initially, NCUA provided the coverage to all the corporates for a limited time but later provided extensions to continue guaranteeing coverage for corporates that did not opt out of the program. Based on NCUA's 2009 financial statements, no guarantee payments were required for either program. However, as of December 19, 2011, the Stabilization Fund's audited financial statements for calendar year 2010 had not been completed and made available. NCUA took a variety of steps to resolve the failed corporates and maintain corporate payment processing services for credit unions. First, in April 2009, NCUA enacted a temporary waiver to allow corporates not meeting their minimum capital requirements to continue to provide services to credit unions. In particular, the waiver allowed corporates to use their capital levels of record on their November 2008 call reports in order to continue providing the necessary core operational services to credit unions. In addition, it granted the Office of Corporate Credit Unions discretionary authority to modify or restrict the use of this capital waiver for certain corporates based on safety and soundness considerations. Without the waiver, corporates that failed to meet the minimum capital requirements would have had to cease or significantly curtail operations, including payment system services and lending and borrowing activities. As a result, the credit union system would have faced substantial interruptions in its daily operations, potentially leading to a loss of confidence in other parts of the financial system. Second, NCUA ultimately placed the five failing corporates into conservatorship.
According to NCUA, it placed the corporates into conservatorships to reduce systemic exposure, exert greater direct control, improve the transparency of financial information, minimize cost, maintain confidence, and continue payment system processing. When placing the five corporates into conservatorship, NCUA replaced the corporates' existing boards, the chief executive officers, and in some cases, the management teams, and took over operations to resolve the corporates in an orderly manner. As part of the conservatorships, NCUA set up bridge institutions for the wholesale corporate—U.S. Central—and three of the other corporates. Through these bridge institutions, NCUA managed the corporates' illiquid assets and maintained payment services to the member credit unions. The member credit unions must provide sufficient capital to acquire the operations of these bridge institutions from NCUA. Third, NCUA established a securitization program to provide long-term funding for the legacy assets formerly held in the securities portfolios of certain corporate credit unions by issuing NCUA-guaranteed notes. NCUA's analysis showed that MBS were trading at market prices considerably below the intrinsic value that would eventually be received by long-term investors. NCUA used a method similar to the "good bank-bad bank" model that the Federal Deposit Insurance Corporation has sometimes adopted with insolvent banks to remove illiquid or "bad" assets from the failed corporates. In particular, NCUA transferred the corporates' assets into Asset Management Estates, also known as liquidation estates. Using these estates, NCUA held and isolated the corporates' illiquid assets (i.e., MBS) from the bridge institutions and issued the NCUA-guaranteed notes. NCUA issued $28 billion (at the point of securitization) in these NCUA-guaranteed notes, while the face value of the original MBS assets was approximately $50 billion.
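NCUA's judgment that the MBS were trading below the intrinsic value a long-term holder would receive rests on discounted-cash-flow reasoning: a security's value to a hold-to-maturity investor is the present value of its expected principal and interest payments. The sketch below shows the standard present-value calculation; the cash flows and discount rate are invented, and this is not NCUA's actual valuation model.

```python
def present_value(cash_flows, annual_rate):
    """Discount a stream of (year, payment) cash flows at a flat annual rate.

    PV = sum over t of payment_t / (1 + r)^t.
    """
    return sum(payment / (1.0 + annual_rate) ** year
               for year, payment in cash_flows)

# Invented note: $10 of interest in years 1 and 2, then $110 of principal
# plus interest in year 3, discounted at 5 percent.
flows = [(1, 10.0), (2, 10.0), (3, 110.0)]
print(round(present_value(flows, 0.05), 2))  # 113.62
```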
NCUA structured each of the guaranteed notes so that its value would approximate the value of the principal and interest cash flows on the underlying legacy assets. NCUA officials said that by structuring the notes in this manner, NCUA minimized its exposure in the event that the underlying cash flow was less than the notes' value. According to NCUA's term sheet, cash flows from the underlying securities will be used to make principal and interest payments to holders of the notes, and NCUA guarantees timely payments. NCUA issued 13 separate notes, with the final sales occurring in June 2011 and the notes maturing between 2017 and 2021. Any necessary guarantee payments are to be made from the Stabilization Fund, which also expires in 2021. Finally, as of November 2011, NCUA had initiated lawsuits against parties it believes are liable for the corporates' MBS-related losses. These lawsuits allege violations of federal and state securities laws and misrepresentations in the sale of hundreds of securities, according to NCUA. NCUA relied on external consultants—in addition to its own analysis—to estimate its losses from the failed corporate credit unions. NCUA issued a new rule for corporates to address the key causes of the failures. Among other things, the rule (1) eliminates the definition and separate treatment of the wholesale corporate, or third tier of the credit union system; (2) prohibits corporates from investing in certain securities and sets sector concentration limits; (3) creates a new system of capital standards and PCA for corporates; and (4) introduces new corporate governance requirements. Some parts of the new rule address the recommendations of NCUA's OIG. NCUA issued the rule on October 20, 2010, and it will be implemented over a number of years. For additional information on the rule, see appendix IV. Essentially eliminate the wholesale corporate or third tier of the credit union system.
The new corporate rule that NCUA issued on October 20, 2010, eliminated both the definition of and the requirements applicable to a wholesale corporate, or third tier of the credit union system. NCUA essentially eliminated the wholesale corporate, in part, to mitigate inefficiency and systemic risk in the credit union system. The failure of U.S. Central, the credit union system's only wholesale corporate, highlights some of these risks. Specifically, its failure contributed to the failure of three corporates, instability in the other corporates, and substantial losses to the Stabilization Fund. Prohibit corporates from certain investments and set sector concentration limits. NCUA amended the corporate rule to prohibit certain investments, such as private-label MBS, and to set certain sector concentration limits. In addition to prohibiting private-label MBS, the rule prohibits corporate investments in collateralized-debt obligations, net interest-margin securities, and subordinated securities. Previously, corporates were allowed to set their own sector concentration limits, which enabled them to continually increase their limits or set excessive limits. The new rule sets maximum sector concentration limits for corporate investments and addresses OIG recommendations that NCUA provide corporates with more definitive guidance on limiting investment portfolio concentrations. Corporates are limited to investing less than 1,000 percent of capital or 50 percent of total assets in specific sectors, including agency MBS, corporate debt obligations, municipal securities, and government-guaranteed student loan asset-backed securities. Furthermore, corporates are restricted from investing more than 500 percent of capital or 25 percent of total assets in other asset-backed security sectors, including auto loans and leases, private-label student loans, credit card loans, or any sector not explicitly noted in the rules.
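The sector caps just described are simple percentage tests. The sketch below checks a holding against both forms of the cap (a percentage of capital and a percentage of total assets); treating the lower of the two dollar amounts as the binding limit is an assumption of this sketch, not a statement of the rule's exact mechanics, and all figures are invented.

```python
def within_sector_cap(holding, capital, total_assets,
                      pct_of_capital, pct_of_assets):
    """Return True if a sector holding is within the cap, taking the lower
    of the two dollar limits as binding (an assumption of this sketch)."""
    cap = min(capital * pct_of_capital / 100.0,
              total_assets * pct_of_assets / 100.0)
    return holding <= cap

# Invented corporate: $50M capital, $1B total assets.
# $400M in agency MBS against the 1,000%-of-capital / 50%-of-assets cap:
print(within_sector_cap(400e6, 50e6, 1e9, 1000, 50))  # True
# $300M in an "other" asset-backed sector against the 500% / 25% cap:
print(within_sector_cap(300e6, 50e6, 1e9, 500, 25))   # False
```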
NCUA has taken additional steps to mitigate the associated risk by limiting the weighted-average life of the portfolio to approximately 2 years. NCUA also tightened the limit on securities purchased from a single obligor from 50 percent of capital to 25 percent. Create a new system of capital standards and PCA for corporates. NCUA's new corporate rule also established a revised set of capital standards and a PCA framework for corporates. The new capital standards replace the previous 4 percent mandatory minimum capital requirement with three minimum capital ratios: two risk-based capital ratios and a leverage ratio (see table 2). The risk-based capital and interim leverage ratios became enforceable on October 20, 2011, and all corporates were required to meet these capital standards. Starting in October 2011, corporates are also subject to PCA if their capital falls below the adequately capitalized level for any of the three capital ratios. As discussed earlier, a corporate becomes subject to more severe supervisory actions and restrictions on its activities if its capital continues to fall. Introduce new corporate governance requirements. NCUA has instituted a new corporate governance rule. To ensure that corporate board members have adequate knowledge and experience to oversee sophisticated corporate investment and operational strategies, they must hold an executive management position, such as chief executive officer, chief financial officer, or chief operating officer, of a credit union. Corporate board members are also prohibited from serving on more than one corporate credit union board. According to NCUA, this restriction will help ensure that board members' loyalty is undivided and that they are not distracted by competing demands from another corporate. Effective October 21, 2013, the majority of a corporate's board members must be representatives of member credit unions.
The purpose of this rule is to ensure that a corporate serves its member credit unions rather than other corporates. In addition, the governance rules require disclosure of executive compensation and prohibit "golden parachutes"—lucrative benefits given to executives who are departing their jobs. NCUA's audited financial statements for NCUSIF reported an allowance for loss of $777.6 million at December 31, 2010. This allowance for loss represents the difference between funds expended to close failed retail credit unions and the amounts NCUA estimates it will recover from the disposition of the failed retail credit unions' assets. Also, these financial statements reported additional estimated losses of about $1.23 billion as of December 31, 2010, associated with troubled credit unions considered likely to fail. With respect to the Stabilization Fund, the 2010 audited financial statements were not yet final as of December 19, 2011. NCUA officials cited ongoing challenges in resolving and valuing failed corporate assets as contributing to the delays in finalizing the Stabilization Fund's financial statements. We requested documentation adequate to support NCUA's estimates of losses from the corporate failures, but NCUA was not able to provide the documentation we required. The NCUA OIG was provided with the same information that we obtained and told us that it was unable to verify NCUA's loss estimates. Absent this documentation, it is not possible to determine the full extent of losses resulting from the corporate credit union failures. Moreover, without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses. Credit unions are responsible for repaying NCUSIF and the Stabilization Fund, and NCUA has begun to assess credit unions for those losses.
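NCUA's analysis of credit unions' ability to absorb these assessments turns on how an assessment moves net worth ratios. The arithmetic can be sketched as follows; the figures are invented, and the assumption that an assessment reduces net worth dollar for dollar while total assets stay unchanged is a simplification, not NCUA's actual model.

```python
def post_assessment_net_worth_ratio(net_worth, assets, assessment):
    """Net worth ratio (in percent) after expensing an assessment against
    net worth. Simplification: total assets are held constant."""
    return 100.0 * (net_worth - assessment) / assets

# Invented credit union: $8M net worth, $100M assets, $0.5M assessment.
# The ratio falls from 8.0 percent to 7.5 percent.
print(post_assessment_net_worth_ratio(8.0, 100.0, 0.5))  # 7.5
```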
NCUA borrowed taxpayer funds from Treasury to fund NCUSIF and the Stabilization Fund to provide liquidity to the corporate system, and it plans to repay the debt to Treasury with interest by 2021. Since 2009, NCUA has assessed credit unions a total of about $5 billion (about $1.7 billion for NCUSIF and $3.3 billion for the Stabilization Fund). NCUA officials told us that they had analyzed the credit unions' ability to repay by determining the impact that varying assessment levels would have on the net worth ratios of both individual credit unions and the credit union system. NCUA considers factors such as the number of credit unions that would fall below 2 percent capital or be subject to PCA's net worth restoration plan requirement. In 2011, NCUA levied a $2 billion assessment for the Stabilization Fund. According to NCUA officials, NCUA determined that the credit union system had enough surplus capital to pay the assessment because of its strong return on assets of 0.86 percent for the first three quarters of the year. NCUA determined that the assessment would result in about 811 credit unions having a negative return on assets. NCUA officials also noted that in a typical year about 10 to 20 percent of credit unions have had a negative return on assets. According to NCUA officials, the primary driver of the $2 billion Stabilization Fund assessment in 2011 was interest and principal on maturing medium-term notes that the corporates had issued and that were to be repaid by the Stabilization Fund. NCUA officials told us that if they had found that the credit unions could not afford the Stabilization Fund assessment, they would have considered other options, such as issuing additional NCUA-guaranteed notes or unsecured debt. Although NCUA officials have stated that the credit union system will bear the ultimate costs of the corporate and credit union failures, risks to taxpayers remain. Moreover, many of the reforms are ongoing, and NCUA continues to resolve the failures of U.S.
Central and Wescorp, as will be discussed. Moreover, the ultimate effectiveness of NCUA’s actions and associated costs remain unknown. As a result, whether the credit union system will be able to bear the full costs of the losses or how quickly NCUA will repay Treasury is unknown. Should the credit union system be unable to repay Treasury through NCUA assessments, taxpayers would have to absorb the losses. Moral hazard occurs when a party insulated from risk may behave differently than it would behave if it were fully exposed to the risk. In the context of NCUA’s actions to stabilize the credit union system, moral hazard occurs when market participants expect similar emergency actions in future crises, thereby weakening their incentives to manage risks properly. Furthermore, certain emergency assistance can also create the perception that some institutions are too big to fail. In general, mitigating moral hazard requires taking steps to ensure that any government assistance includes terms that make such assistance an undesirable last resort, except in the direst circumstances, and specifying when the government assistance will end. For example, we previously reported that during the 2007-2009 financial crisis, the federal government attached terms to the financial assistance it provided to financial institutions such as (1) limiting executive compensation, (2) requiring dividends be paid to providers of assistance, and (3) acquiring an ownership interest—all of which were designed to mitigate moral hazard to the extent possible. NCUA designed actions to mitigate moral hazard at various stages of its effort to resolve and reform the corporate credit union system, but the effectiveness of these actions remains to be seen. 
Examples of the actions designed to mitigate moral hazard include terminating the corporates’ management teams and eliminating their boards, issuing letters of understanding and agreement as a condition of entering the Share Guarantee Program, requiring a guarantee fee under the Liquidity Guarantee Program, requiring credit unions to repay the losses to NCUSIF and the Stabilization Fund, filing lawsuits against responsible parties, and requiring credit unions to disclose executive compensation. In addition, NCUA enhanced market discipline by requiring corporates to obtain capital from their member credit unions to remain in operation. That is, member credit unions decided whether to capitalize new corporates. As of October 30, 2011, two of the four bridge corporates—Wescorp Bridge and U.S. Central Bridge—had either not succeeded in obtaining sufficient member capital (Wescorp) or had not attempted to do so because of a lack of anticipated demand (U.S. Central). Both are being wound down by NCUA. Credit unions that triggered PCA had mixed results. Our analysis of credit unions that underwent PCA indicates that corrective measures triggered earlier were generally associated with more favorable outcomes. We observed successful outcomes associated with PCA but also noted inconsistencies in the presence and timeliness of PCA and other enforcement actions. Furthermore, in most cases, other discretionary enforcement actions to address deteriorating conditions either were not taken or were taken only in the final days prior to failure. Other financial indicators could serve to provide an early warning of deteriorating conditions at credit unions. The number of credit unions in PCA significantly increased as the financial crisis unfolded (see fig. 5). From January 1, 2006, through June 30, 2011, 560 credit unions triggered PCA.
Specifically, of the 560 credit unions that entered PCA from January 1, 2006, through June 30, 2011, the vast majority (452) triggered PCA from January 2008 through June 2011. NCUA has taken steps to stabilize, resolve, and reform the corporate system. Many of the reforms are ongoing, and NCUA continues to resolve the failures of U.S. Central and Wescorp. As a result, the ultimate effectiveness of NCUA’s actions and associated costs remain unknown. Moreover, while the 2010 financial statements for NCUSIF are final—and record a loss—the 2010 financial statements for the Stabilization Fund were only recently released at the end of December 2011. Prior to the release of these statements, NCUA had estimated losses for the Stabilization Fund, but NCUA did not provide adequate documentation to allow us to verify the reasonableness and completeness of these estimates. Without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses. Before the recent financial crisis, PCA was largely untested because the financial condition of the credit unions had been generally strong since PCA was enacted. With the failure of the 85 credit unions, the PCA framework showed some weaknesses when addressing deteriorating credit unions. The main weakness of the PCA framework, as currently constructed in statute, stems primarily from tying mandatory corrective actions to only capital-based indicators. As previously reported, capital-based indicators have weaknesses, notably that they can lag behind other indicators of financial distress. Other alternative financial indicators exist or could be developed to help identify early warning signs of distress, which our analysis shows is a key to successful outcomes.
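As a rough illustration of this point, the sketch below contrasts a capital-only PCA classification with a trigger supplemented by an additional financial indicator. The net worth categories follow the statutory thresholds (7, 6, 4, and 2 percent), simplified here; the return-on-assets cutoff is purely hypothetical and is not an NCUA rule.

```python
# Illustrative sketch: a capital-only PCA trigger versus one supplemented with
# an additional early-warning indicator (return on assets). The net worth
# categories below follow the statutory thresholds in simplified form; the
# ROA cutoff is a hypothetical illustration only.

def pca_category(net_worth_ratio):
    """Classify a credit union by its net worth ratio (in percent)."""
    if net_worth_ratio >= 7.0:
        return "well capitalized"
    if net_worth_ratio >= 6.0:
        return "adequately capitalized"
    if net_worth_ratio >= 4.0:
        return "undercapitalized"
    if net_worth_ratio >= 2.0:
        return "significantly undercapitalized"
    return "critically undercapitalized"

def early_warning(net_worth_ratio, return_on_assets, roa_cutoff=-0.5):
    """Flag distress if capital OR a supplemental indicator deteriorates."""
    capital_flag = net_worth_ratio < 6.0          # capital-based trigger
    roa_flag = return_on_assets < roa_cutoff       # hypothetical ROA trigger
    return capital_flag or roa_flag

# A credit union can still look adequately capitalized while earnings collapse:
print(pca_category(6.5))         # "adequately capitalized" -- no PCA trigger yet
print(early_warning(6.5, -1.2))  # True -- ROA signals distress before capital does
```

The design point is simply that a lagging capital measure alone misses the second case, which a supplemental indicator would catch earlier.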
Tying regulatory actions to additional financial indicators could mitigate these weaknesses and increase the consistency with which distressed credit unions would be treated. By considering which additional financial indicators would most reliably serve as an early warning sign of credit union distress—including any potential tradeoffs—and proposing the appropriate changes to Congress, NCUA could take the first steps in improving the effectiveness of PCA. Given that the 2010 financial statements for the Stabilization Fund were not available for our review and NCUA was unable to provide us adequate documentation for its estimates, as well as the identified shortcomings of the current PCA framework, we recommend that NCUA take the following two actions. 1. To better ensure that NCUA determines accurate losses incurred from January 1, 2008, to June 30, 2011, we recommend that the Chairman of NCUA provide its OIG the necessary supporting documentation to enable the OIG to verify the total losses incurred as soon as practicable. 2. To improve the effectiveness of the PCA framework, we recommend that the Chairman of NCUA consider additional triggers that would require early and forceful regulatory actions, including the indicators identified in this report. In considering these actions, the Chairman should make recommendations to Congress on how to modify PCA for credit unions, and if appropriate, for corporates. We provided a draft of this report to NCUA and its OIG for their review and comment. NCUA provided written comments that are reprinted in appendix V and technical comments that we have incorporated as appropriate. In its written comments, NCUA agreed with our two recommendations. Notably, NCUA stated that it had taken action to implement one of the recommendations by providing the OIG with documentation of loss estimates for the Stabilization Fund as of December 31, 2010.
It expects to provide additional documentation of loss estimates as of June 30, 2011, in January 2012. In its letter, NCUA also stated that the December 31, 2010, audited financial statements for the Stabilization Fund would be issued in the near future and described reasons for the delay in finalizing this audit. These reasons included the scope and magnitude of the corporate failures and the actions that NCUA had undertaken to resolve the corporate failures and strengthen its financial reporting systems. While NCUA acknowledged that some of the loss estimates were not finalized at the time of our audit, including the 2010 financial statements, it noted that the results from the valuation experts were complete and available. Our report recognizes the challenges that NCUA has faced in finalizing its financial statements and describes the actions that it has taken to stabilize, resolve, and reform the credit union system. However, as we reported, NCUA was unable to provide us with the documentation that we required to verify the reasonableness and completeness of the loss estimates for the Stabilization Fund. Subsequently, the NCUA 2010 Financial Statement Audit for Temporary Corporate Credit Union Stabilization Fund was released on December 27, 2011. Although NCUA has said that its analysis shows that the credit union system has the capacity to pay for the loss estimates, we continue to believe that without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses. Taking the steps to address our recommendation will help NCUA address these questions. In its written comments, NCUA also described its commitment to continued research and analysis to improve the effectiveness of PCA. 
In particular, NCUA cited its membership on the Federal Financial Institutions Examination Council and the Financial Stability Oversight Council. NCUA also noted that it was following developments related to the federal banking agencies’ consideration of enhancements to PCA triggers, a step that we recommended in our report Banking Regulation: Modified Prompt Corrective Action Framework Would Improve Effectiveness. NCUA agreed with the recommendation to consider other triggers for PCA but noted that some of the potential financial indicators that we identified could have drawbacks. We also acknowledged in the report that multiple indicators of financial health could be used as early warning indicators and that the extent to which the financial indicators we identified could serve as strong early warning indicators might vary. Furthermore, using some of these indicators as early warning signs of distress could present different advantages and disadvantages—all of which would need to be considered. Nevertheless, we continue to believe that considering a range of potential indicators, including those identified in the report, is a necessary and important step in improving the effectiveness of PCA. NCUA’s letter also noted a potential “misconception” in the report and said that it recognized the need for timelier use of formal enforcement action, as evidenced in its response to OIG findings and recommendations. However, NCUA stated that nearly all failed credit unions received an enforceable regulatory action prior to failure, either through PCA or non-PCA authorities. In some cases, the failures occurred so abruptly that NCUA did not have a long lead time to take action. NCUA also stated that it had a strong record of employing PCA actions when credit unions tripped PCA triggers, as PCA actions are often more expedient forms of enforceable regulatory action. As discussed in the report, successful outcomes were associated with PCA in some cases. 
However, we also found inconsistencies in the presence and timeliness of PCA and other enforcement actions. Furthermore, we also found that other discretionary enforcement actions to address deteriorating conditions either were not taken or were taken only in the final days before the failure. Finally, the letter concluded that credit unions performed well during the recent financial crisis and that NCUA had successfully mitigated the failures that did occur. Our report describes the scope and magnitude of failures among corporates and credit unions and also notes that the 85 credit unions represented less than 1 percent of credit union assets as of 2008. We also described actions NCUA had taken to stabilize the credit union system, but we note that NCUA’s examination and enforcement processes did not result in strong and timely actions to avert the failure of these institutions. We are sending copies of this report to NCUA, the Treasury, the Financial Stability Oversight Council, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact A. Nicole Clowers at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. From November 2008 to October 2011, the National Credit Union Administration’s (NCUA) Office of Inspector General (OIG) made 25 recommendations to NCUA to improve both corporate and credit union supervision, operations, and financial reporting. Six of the 25 recommendations were for corporates and 19 were for credit unions.
NCUA has fully implemented 6 of the 25 recommendations relating to improving the corporate structure, corporate governance, examination processes, and call report data, as well as providing guidance on concentration risk. In addition, NCUA has partially implemented another 10 recommendations—2 of these relate to corporate risk management and corporate examiner training. The other 8 partially implemented recommendations are related to improving the credit union examination process and financial monitoring of credit unions in areas such as fast-growing and new business programs, third-party relationships, concentration risk, and ensuring credit unions take appropriate action to respond to documents of resolution (DOR). Finally, NCUA has not yet implemented the remaining 9 recommendations—6 of these recommendations relate to improving examination processes for credit unions with more than $100 million in assets, internal controls, and documenting call report analysis. The remaining 3 unimplemented recommendations relate to improving follow-up procedures for DORs. Furthermore, OIG officials told us that 13 of the 19 partially implemented or unimplemented recommendations will likely be fulfilled with the issuance of the revised National Supervision Policy Manual (NSPM) in 2012. OIG officials have reviewed the draft revised NSPM and determined that it addresses their recommendations. Table 3 provides a summary of these recommendations and their status based on our evaluation of the information that NCUA and its OIG provided. Legislation enacted in January 2011 requires us to examine NCUA’s supervision of the credit union system and the use of PCA.
This report examines (1) what is known about the causes of failures among corporates and credit unions since 2008; (2) the steps that NCUA has taken to resolve these failures and the extent to which its actions were designed to protect taxpayers, avoid moral hazard, and minimize the cost of corporate resolutions; and (3) NCUA’s use of PCA and other enforcement actions. In addition, we reviewed NCUA’s implementation of its OIG recommendations. (See app. I.) To identify the causes of failures among corporates and credit unions, we obtained and analyzed NCUA documents, including Material Loss Reviews (MLR), postmortem reports, Board Action Memorandums (BAM), and other relevant documents. To corroborate this information, we also assessed the asset size and investment concentrations for all failed and nonfailed corporates by conducting analyses of data from SNL Financial—a financial institution database—on corporates’ investment portfolios from January 2003 to September 2010. We obtained and analyzed NCUA data related to conservatorships and resolution actions taken from January 2008 to June 2011 to determine the number and causes of corporate and credit union failures. We further assessed member business loan participation as a percentage of total loans for both failed credit unions and peer credit unions that did not fail from December 2005 to January 2011. To identify credit union failures related to fraud, we reviewed data and analyzed reports and documents prepared by NCUA and its OIG on each of the failed credit unions from January 2008 to June 2011. To determine loss data from the corporate and credit union failures, we reviewed NCUA’s 2008, 2009, and 2010 annual reports; MLRs; BAMs; and NCUA data on losses to the National Credit Union Share Insurance Fund (NCUSIF) and the Temporary Corporate Credit Union Stabilization Fund (Stabilization Fund).
We interviewed NCUA’s OIG, Office of Corporate Credit Unions, Office of Capital Markets, Chief Financial Officer, and Office of Examination and Insurance to obtain their perspectives on the causes of the corporate and credit union failures. We further met with credit union industry associations to obtain their views on NCUA’s efforts to reform the corporate credit union system. We assessed the reliability of the SNL and NCUA data used for this analysis and determined that these data were sufficiently reliable for our purposes. To assess the steps that NCUA has taken to stabilize, resolve, and reform the corporate and credit union system, we reviewed NCUA documents and data, including BAMs; MLRs; NCUA annual reports from 2008, 2009, and 2010; audited financial statements; NCUA’s Corporate Stabilization and Resolution Plan; and NCUA-commissioned reports, in addition to testimonies at relevant congressional hearings and planning documents. To determine actions taken to reform the corporate system, we reviewed NCUA’s proposed and final rules and interviewed NCUA’s General Counsel to discuss the potential impact of these rules and their effective dates. To determine NCUA’s assessments of credit unions and the credit unions’ ability to repay, we reviewed BAMs and NCUA’s scenario analyses for its credit union assessments and loss estimates, and we interviewed NCUA officials. We requested detailed information on NCUA’s loss estimates for NCUSIF and the Stabilization Fund; NCUA provided some information, but it was not sufficient for us to determine the reasonableness and completeness of these estimates. To determine the steps that NCUA took to reduce moral hazard, we compared the actions taken to stabilize, resolve, and reform the credit union system to principles cited in our past work on providing federal financial assistance. To assess the outcomes of PCA, we reviewed the outcomes of credit unions as a whole that were subject to PCA from January 1, 2006, through June 30, 2011.
Additionally, we tracked a group of credit unions that were subject to PCA from January 1, 2008, through June 30, 2009, during the 2007-2009 financial crisis to identify those credit unions that (1) failed, (2) survived and remained in PCA, and (3) survived and exited PCA. To determine the actions that NCUA took to address deteriorating credit unions, we reviewed regulatory information that included CAMEL ratings, enforcement action data, and PCA-related activities over a 2-year period prior to each credit union failure from January 1, 2008, through June 30, 2011. Specifically, we analyzed the instances and dates of CAMEL downgrades, enforcement actions taken, and PCA-related actions to determine whether and when actions were taken. To assess the utility of various financial indicators in detecting credit unions’ distress, we reviewed the OIG’s MLRs, NCUA’s postmortem studies, and our previous work on PCA. We compared failed credit unions with peer credit unions that did not fail to assess their performance on numerous financial indicators, such as return on assets, operating expenses, and liquid assets, as early warnings of financial distress. We also compared the failed credit unions and their peers to credit union industry averages across the same period. In considering other indicators for detecting early distress in credit unions, we reviewed data from regulatory filings from the fourth quarter of 2005 through the first quarter of 2011 for three groups: (1) the 85 credit unions that failed from January 2008 to June 2011; (2) a group of 340 peer credit unions—the four credit unions closest in total assets within the same state as each failed credit union; and (3) all credit unions that reported their financial condition in a regulatory filing for each quarter within the period. To compare the performance of these three groups, we chose a range of indicators from the CAMEL rating that demonstrate asset quality (A), management (M), earnings (E), and liquidity (L).
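The peer-selection method described above can be sketched in a few lines: for each failed credit union, pick the four credit unions in the same state whose total assets are closest to those of the failed institution. The data structure and field names below are illustrative, not drawn from the SNL Financial database.

```python
# Minimal sketch of the peer-selection method: the n credit unions in the same
# state closest in total assets to a given failed credit union. Records are
# illustrative dictionaries, not real regulatory-filing data.

def select_peers(failed, universe, n=4):
    """Return names of the n same-state credit unions closest in total assets."""
    same_state = [cu for cu in universe
                  if cu["state"] == failed["state"] and cu["name"] != failed["name"]]
    # Sort by absolute distance in total assets from the failed institution.
    same_state.sort(key=lambda cu: abs(cu["assets"] - failed["assets"]))
    return [cu["name"] for cu in same_state[:n]]

failed = {"name": "Failed CU", "state": "OH", "assets": 50_000_000}
universe = [
    {"name": "A", "state": "OH", "assets": 48_000_000},
    {"name": "B", "state": "OH", "assets": 55_000_000},
    {"name": "C", "state": "OH", "assets": 120_000_000},
    {"name": "D", "state": "OH", "assets": 51_000_000},
    {"name": "E", "state": "MI", "assets": 50_500_000},  # wrong state, excluded
    {"name": "F", "state": "OH", "assets": 49_500_000},
]
print(select_peers(failed, universe))  # ['F', 'D', 'A', 'B']
```

Applied to each of the 85 failed credit unions, this kind of matching yields the 340-member peer group (85 × 4) described above.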
For assessing asset quality, we also looked at credit unions’ risk exposure and credit performance, using data from SNL Financial (see GAO-11-612). We assessed the reliability of the SNL Financial database and NCUA’s enforcement data used in our analyses and found these data to be sufficiently reliable for our purposes. To determine the status of NCUA’s implementation of OIG recommendations, we reviewed the OIG’s corporate and credit union MLRs and their recommendation tracking documents and interviewed NCUA and NCUA OIG officials. We conducted this performance audit from May 2011 to December 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To help stabilize the credit union system, NCUA created four new programs to provide liquidity to corporates. NCUA initiated two of these new programs, the Credit Union System Investment Program (CU-SIP) and the Credit Union Homeowners’ Affordability Relief Program (CU-HARP), in early 2009. Because of the restriction preventing the Central Liquidity Facility (CLF) from lending directly to the corporate credit unions, NCUA designed both programs, CU-SIP and CU-HARP, so that the CLF would lend to the credit unions, which agreed that they in turn would invest in NCUA-guaranteed notes issued by corporates. Starting in January 2009, corporates were required to use the invested funds to pay down their external secured debt. Money from the corporates’ debt issuances was used to free up collateral and to pay back loans made by the credit unions.
In exchange for participating in the programs, the corporates were required to pay CLF borrowing costs to credit unions and an additional fee to the credit unions as an incentive for them to participate in the programs. CLF lending to credit unions totaled approximately $8.2 billion under CU-SIP and about $164 million under CU-HARP. All borrowings for both programs were repaid in 2010. CU-SIP. Credit unions received a 25-basis-point spread over the cost of borrowing from the CLF for investing in 1-year CU-SIP notes issued by participating corporate credit unions. Lending from the CLF for the CU-SIP started in January 2009 and ended in March 2009, totaling approximately $8.2 billion. All borrowings were repaid by the credit unions to the CLF by the respective months in 2010 (see fig. 12). CU-HARP. This 2-year program was designed to assist struggling homeowners by temporarily facilitating modifications to their monthly mortgage payments. Credit unions invested in CU-HARP notes from participating corporates. These notes had 1-year maturities and the option to extend the date of maturity for an additional year. The extension of the program’s 1-year maturity depended on the credit union’s continued good standing and available CLF funding. The CLF lent approximately $164 million to credit unions under the CU-HARP. All remaining notes under the program matured in December 2010, and the credit unions repaid all borrowings. The corporates paid a bonus to the credit unions, which was tied to a 50 percent reduction in mortgage payments to homeowners. According to NCUA, CU-HARP was not very successful because the program’s design for credit unions to earn the bonus was complex and the time frame in which to apply was limited (see fig. 13).
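The CU-SIP economics described above reduce to simple arithmetic: a credit union borrows from the CLF, invests in a 1-year CU-SIP note, and earns the 25-basis-point spread over its CLF borrowing cost. In the sketch below, only the spread comes from the program terms; the principal and CLF rate are hypothetical.

```python
# Back-of-the-envelope sketch of the CU-SIP incentive: a credit union earns a
# 25-basis-point spread over its CLF borrowing cost on a 1-year note. The
# principal and CLF rate below are hypothetical illustrations.

principal = 10_000_000          # hypothetical CLF advance, in dollars
clf_rate = 0.0050               # hypothetical CLF borrowing cost (0.50%)
spread = 0.0025                 # 25 basis points, per the program terms

note_yield = clf_rate + spread  # rate the corporate pays on the CU-SIP note
net_income = principal * spread # credit union's net earnings over one year

print(f"note yield: {note_yield:.4%}")    # note yield: 0.7500%
print(f"net income: ${net_income:,.0f}")  # net income: $25,000
```

The point of the spread was simply to leave participating credit unions whole on their CLF borrowing cost plus a modest incentive, regardless of the level of the CLF rate itself.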
NCUA created two temporary guarantee programs in late 2008 and early 2009, called the Temporary Corporate Credit Union Liquidity Guarantee Program (Liquidity Guarantee Program) and the Temporary Corporate Credit Union Share Guarantee Program (Share Guarantee Program), to help stabilize confidence and dissuade withdrawals by credit unions, in an attempt to avoid a run on the corporates by member credit unions. These programs provided temporary guarantees on unsecured offerings by corporates and shares of credit unions held by corporates in excess of $250,000. NCUA originally included all corporates under both guarantee programs for a limited time after signing a letter of understanding and agreement limiting activities and compensation. It later extended the programs to corporates that chose not to opt out of the programs. Liquidity Guarantee Program. NCUA guaranteed the timely payment of principal and interest on all corporates’ unsecured debt. The program’s debt issuance deadline was September 2011, with debt maturing no later than June 2017. However, the program was later revised so that any unsecured debt issued after June 2010 would mature no later than September 2012. NCUA stated that this revision was necessary to focus on short-term liquidity needs and bring the program’s deadline in line with its other stabilization efforts (see fig. 14). Share Guarantee Program. This program largely mirrors the Liquidity Guarantee Program. That is, NCUA guaranteed credit union shares in excess of $250,000 through February 2009, with the option of continuing participation in the program through December 2010. NCUA revised the program in May 2009 to extend the program’s deadline to December 2012 and shortened the length of the program’s coverage to shares with maturities of 2 years or less (see fig. 15).
In mid-2009, NCUA transferred obligations from both the Liquidity Guarantee and Share Guarantee programs to the Stabilization Fund to limit NCUSIF’s losses stemming from any future corporate losses. According to NCUA officials, NCUSIF was obligated to provide for any guarantee payments that might arise from either the Liquidity Guarantee Program or the Share Guarantee Program. Based on NCUA’s 2009 financial statements, no guarantee payments were required for either program; however, as of December 19, 2011, audited 2010 financial statements for the Stabilization Fund were not available. On September 24, 2010, the NCUA Board adopted comprehensive new rules to govern corporates. Following the initial publication of the final rule, the corporate rule underwent several technical corrections, and five additions to the rule were published on April 29, 2011. The corporate rule affects several parts of title 12 of the Code of Federal Regulations but is codified primarily in 12 C.F.R. Part 704. This table provides an overview of the corporate rule as initially published in October 2010 and later amended in April 2011. It summarizes the major provisions at a general level and gives references to where more detailed explanations can be found in the preambles of the October 2010 and April 2011 final rulemakings. The preambles describe in considerable detail the rationales for the provisions, section-by-section analyses of each provision, what NCUA initially proposed, the comments it received and its response to them, and how the final provisions differ from those originally proposed. In addition to the contacts named above, Debra R. Johnson, Assistant Director; Emily R. Chalmers; Gary P. Chupka; Nima Patel Edwards; Debra Hoffman; Barry A. Kirby; Colleen A. Moffatt; Timothy C. Mooney; Robert A. Rieke; and Gregory J. Ziombra made significant contributions to this report. Other contributors included Pamela R. Davidson, Michael E. Hoffman, Grant M. Mallie, Jessica M.
Sandler, and Henry Wray.

Corporate credit unions (corporates), financial institutions that provide liquidity and other services to the more than 7,400 federally insured credit unions, experienced billions in financial losses since the financial crisis began in 2007, contributing to failures throughout the credit union system and losses to the National Credit Union Share Insurance Fund (NCUSIF). Since 1998, Congress has required the National Credit Union Administration (NCUA), the federal regulator of the credit union system, to take prompt corrective action (PCA) to identify and address the financial deterioration of federally insured natural person credit unions (credit unions) and minimize potential losses to the NCUSIF. Legislation enacted in 2011 requires GAO to examine NCUA’s supervision of the credit union system and use of PCA. This report examines (1) the failures of corporates and credit unions since 2008, (2) NCUA’s response to the failures, and (3) the effectiveness of NCUA’s use of PCA. To do this work, GAO analyzed agency and industry financial data and material loss reviews, reviewed regulations, and interviewed agency officials and trade organizations. From January 1, 2008, through June 30, 2011, 5 corporates and 85 credit unions failed. As of January 1, 2008, the 5 failed corporates were some of the largest, accounting for 75 percent of all corporate assets, but the 85 failed credit unions were relatively small, accounting for less than 1 percent of total credit union assets. GAO found poor investment and business strategies contributed to the corporate failures. Specifically, the failed corporates overconcentrated their investments in private-label, mortgage-backed securities (MBS) and invested substantially more in private-label MBS than corporates that did not fail. GAO also found that poor management was the primary reason the 85 credit unions failed.
In addition, NCUA’s Office of Inspector General has reported that NCUA’s examination and enforcement processes did not result in strong and timely actions to avert the failure of these institutions. NCUA took multiple actions to stabilize, resolve, and reform the corporate system. NCUA used existing funding sources, such as the NCUSIF, and new funding sources, including the Temporary Corporate Credit Union Stabilization Fund (Stabilization Fund), to stabilize and provide liquidity to the corporates. NCUA placed the failing corporates into conservatorship and liquidated certain poor-performing assets. In order to decrease losses from the corporates’ failures, NCUA established a securitization program to provide long-term funding for assets formerly held in the portfolios of failed corporates by issuing NCUA-guaranteed notes. To address weaknesses highlighted by the crisis, in 2010, NCUA issued regulations to prohibit investment in private-label MBS, established a PCA framework for corporates, and introduced new governance provisions. NCUA considered credit unions’ ability to repay borrowings from Treasury and included measures to reduce moral hazard, minimize the cost of resolving the corporates, and protect taxpayers. While NCUA has estimated the losses to the Stabilization Fund, it could not provide adequate documentation to allow NCUA’s Office of Inspector General or GAO to verify their completeness and reasonableness. Without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses. GAO’s analysis of PCA and other NCUA enforcement actions highlights opportunities for improvement.
For credit unions subject to PCA, GAO found those credit unions that did not fail were more likely subject to earlier PCA action (that is, before their capital levels deteriorated to the significantly or critically undercapitalized levels) than failed credit unions. GAO also found that for many of the failed credit unions, other enforcement actions were initiated either too late or not at all. GAO has previously noted that the effectiveness of PCA for banks is limited because of its reliance on capital, which can lag behind other indicators of financial health. GAO examined other potential financial indicators for credit unions, including measures of asset quality and liquidity, and found a number of indicators that could provide early warning of credit union distress. Incorporating such indicators into the PCA framework could improve its effectiveness. NCUA should (1) provide its Office of Inspector General the necessary documentation to verify loss estimates and (2) consider additional triggers for PCA that would require early and forceful regulatory action and make recommendations to Congress on how to modify PCA, as appropriate. NCUA agreed with both recommendations.
The United States has, for many years, funded various agencies’ educational, visitor, and democracy-assistance programs that promote democratic ideals, including freedom of the press. Although considered a fundamental human right by many, freedom of the press remains unrealized in many parts of the world, particularly in countries governed by repressive regimes. Journalists continue to be censored, tortured, imprisoned, and murdered for publishing articles or broadcasting information about their government. Media assistance emerged as a significant aspect of development work in the 1980s and 1990s, particularly following the end of the Cold War and the dissolution of the former Soviet Union. Media development aid has evolved from relatively modest activities with minor donations of equipment and training tours for journalists to, in some cases, long-term, multifaceted projects with millions of dollars invested over the life of the project. Independent media development efforts are not clearly defined, but are commonly understood to include activities such as training or educating local or indigenous reporters and editors on subjects such as media ethics, professionalism, accountability, investigative journalism, media business management and marketing, strategies for transforming state broadcasters into public service networks, and legal defense or legal regulatory issues; developing media or press centers; developing journalism schools and curriculum; ensuring the financial sustainability and independence of media outlets, through loan programs, advertising development, grants for commodities, and other means; supplying equipment or helping to build infrastructure needed to ensure media independence, including technical capacity; developing professional journalist, publisher, or broadcast associations; developing networks of independent media, such as sharing arrangements, which link production, distribution, and management of material; supporting the 
establishment of legal and regulatory frameworks and advocacy groups that protect freedom of the press; promoting an understanding of professional media practices and the role of free and independent media in society; and engaging diplomatically to advance the development of press freedoms or media-related institutions, laws, and regulatory frameworks. The Department of State and USAID are primarily responsible for funding and overseeing U.S. media development projects and activities. State and USAID do not have separate global or agency-specific independent media development strategies and goals; rather, State and USAID often consider independent media development part of broader agency goals. State’s independent media development efforts are generally used as tools within broader public diplomacy and democracy building efforts. USAID’s independent media development efforts are generally designed to promote the development of civil society and increase citizen access to information. A commonly agreed upon definition of independent media development programs does not exist among State, USAID, and other donors. Rather, a variety of U.S. projects and activities support independent media in various countries overseas through individual contracts, grants, or cooperative agreements with NGO partners, or through other established U.S. programs, such as exchange programs administered by embassy public affairs sections. In addition, donors frequently use different approaches for developing independent media. For example, State offers training opportunities to a select number of individuals in the media sector or offers small grants to organizations for media development. NED provides small, short-term grants to media or advocacy organizations in many countries. 
In contrast, USAID has developed a more comprehensive, multiyear, multiproject approach to developing independent media in many countries that addresses the training and education of journalists, financial sustainability of local organizations, and development of the supporting legal and regulatory frameworks. Five primary U.S. nongovernmental organizations—IREX, Internews, the International Center for Journalists, Eurasia Foundation, and The Asia Foundation—assist U.S. donors by implementing media development projects and offering funding or programmatic activities to local media organizations. In addition, due to political sensitivities in the region, USAID has awarded contracts to private organizations for media development projects in the Middle East. Examples of possible independent media development recipients include media outlets, media organizations, and local nongovernmental organizations; professional associations; journalism schools or universities; and policymakers. In addition, there are several international organizations that support media development. (See app. II). See table 1 for a description of the roles of each bureau or office at State and USAID and select U.S. NGOs in independent media development. Our analysis of available documents revealed that together, State and USAID obligated at least $40 million in fiscal year 2004 to support a number of independent media development efforts. According to State, it obligated approximately $14 million for media development projects for fiscal year 2004. State also transferred more than $700,000 to the BBG for fiscal year 2004 independent media development obligations. USAID was not able to provide global budget obligations figures for its 2004 support of independent media. However, we calculated that USAID obligated at least $25.6 million in fiscal year 2004. 
USAID’s largest independent media contractors—Internews and IREX— received fiscal year 2004 obligations of $14.1 million and $11.3 million, respectively. In addition, the Asia Foundation identified that it received $175,000 in fiscal year 2004 obligations provided by USAID. Although we were not able to confirm these figures, USAID officials told us that they obligated an average of $33 million per year for independent media development efforts since 1991 in amounts ranging from approximately $13 million in fiscal year 1992 to $61 million in fiscal year 1999. We found that the largest portion of the State and USAID fiscal year 2004 obligations for independent media development—about 60 percent of all the agency obligations we could identify—funded efforts in Europe and Eurasia. The Middle East, which has the lowest level of press freedom, according to Freedom House’s 2005 Press Freedom survey, received only about 2 percent of the total fiscal year 2004 obligations we could identify. Agency officials said that the larger funding levels for Europe and Eurasia are attributable to the democracy assistance funding provided through the Freedom Support Act and the Support for East European Democracy Act of 1989 and the high priority given to independent media development projects by the Office of the Coordinator of U.S. Assistance to Europe and Eurasia. According to State officials, independent media development funding levels for the Middle East are expected to increase in the future due to an expansion of efforts through the Middle East Partnership Initiative. In addition, USAID officials said they expect that USAID will provide up to four times the amount of media development funding to individual countries in the Middle East in the near future—with the U.S. Mission in Egypt already in the process of launching a $15 million media project. 
Officials at one mission in Central Europe expressed concern that such a funding shift could be detrimental to the ultimate success of media development efforts in European countries that have fragile and changing media environments. Due to a variety of factors, it is difficult to accurately determine U.S. funding obligations for independent media development efforts. USAID media development funding is difficult to track globally over time because the agency has not implemented consistent agencywide budget codes to document its obligations for cooperative agreements, grants, and contracts for independent media projects and activities. Rather, USAID’s financial systems are designed to collect obligation information at the higher strategic objective level, where, we were told by USAID officials, there are inconsistencies in coding independent media activities because definitions for budget codes and strategic objectives have changed over the years. However, USAID officials told us they are currently in the process of developing systems to better track agencywide obligations data for individual program components under each strategic objective, including for independent media development efforts. State Department funding is also difficult to track because State does not keep systematic records or budget codes of its obligations at the level of independent media development activities and posts consider varying activities to embody independent media development. Finally, complex donor funding arrangements, including in some cases multiple project implementers and subgrantees, can obscure funding relationships and make it difficult to accurately determine the overall level of U.S. financial support, as well as the number of specific efforts provided in individual countries. State and USAID have a variety of independent media development efforts under way. 
State has not widely established specific independent media development performance indicators for the overseas missions we reviewed or for specific media projects or activities sponsored by its embassy public affairs sections. USAID frequently established specific independent media development performance indicators for its missions and for specific independent media development projects we reviewed. Both agencies commonly used the IREX Media Sustainability Index (MSI) and Freedom House's Press Freedom surveys to measure performance, where indicators were established; however, our analysis found these indexes to be of limited utility in measuring the contributions of specific media activities, or the efforts of entire missions toward developing independent media in particular countries, when used alone. State and USAID support a wide range of media projects and activities, from training journalists to supporting media law reform. In the countries we visited—Croatia, Ukraine, and Indonesia—we spoke with several individuals who said that they had benefited from U.S. government media support. For example, we met with members of a consortium of five local NGOs advocating passage of Indonesia's Freedom of Information Act and working with the Parliament to get it placed on the agenda. In Croatia, we visited a U.S.-funded national association of journalists whose mission is to raise the professional standards of its 2,000 members. In Ukraine, we met with individuals of a U.S.-sponsored organization that has provided 220 training programs, in subjects ranging from technical production to media management, to over 2,800 media professionals. We also spoke with a number of journalists in all three countries who had visited television, radio, and newspaper operations throughout the United States as part of embassy exchange programs. See table 2 for a description of current U.S.
independent media development efforts and priorities in countries we selected for in-depth analysis. While State's independent media activities conducted at overseas missions support U.S. objectives in these countries, performance indicators were not widely established for the activities, making it difficult for State to accurately measure and report their value. In four of the nine countries we reviewed, State had developed some media-related performance indicators to measure the overall results of the missions' independent media development efforts. For instance, for Kyrgyzstan, State currently measures the results of the embassy's efforts in developing independent media and improving the availability of political information in several ways, including by surveying whether editors and journalists who receive support become more skilled in reporting and editing political news. However, aside from counting the number of participants, specific performance indicators for individual embassy-sponsored independent media projects or activities were not widely established in the cases we reviewed. For example, embassy officials in Croatia said there were no measurable performance indicators tracked for their journalism exchanges and other media-related public diplomacy efforts. Several State Department officials told us that posts rely heavily on their knowledge of the activities and anecdotal reports of accomplishments to evaluate performance. In some instances, embassy public affairs sections submit reporting cables to State Department bureaus and offices or enter descriptions of media projects or activities and anecdotal information into a database managed by the Bureau of International Information Programs.
State’s Democracy, Human Rights, and Labor (DRL) bureau has, in some cases, used quantifiable indicators, including the number of local radio stations that broadcast sponsored programs or the number of articles written as a result of journalist training seminars, to measure the performance of independent media projects related to democracy assistance, in addition to gathering descriptive or anecdotal information on accomplishments. State officials told us that embassies are more likely to develop independent media-specific performance indicators for evaluating results when independent media is a priority at the post and specific performance goals are set in mission-planning documents. For example, the current mission plan for Kyrgyzstan includes a stated goal of helping to build independent media that reports objectively and freely. Officials also said that posts are not currently required to develop specific indicators for individual public diplomacy projects and activities; however, a requirement for the establishment of such measures is currently being considered. Additionally, officials in State's Middle East Partnership Initiative office told us the office plans to develop measures for the effectiveness of its new media assistance project in the Middle East, but could not provide details because the initiative is still being designed. State officials we spoke with told us it is difficult to develop performance indicators given limited staff and funding, as well as the inherent difficulty of determining when and how results will occur for public diplomacy-related efforts. In the cases we reviewed, USAID performance indicators for independent media efforts were frequently established at the country or USAID mission level and for individual projects. For example, six of the nine USAID missions we reviewed established performance indicators in their current planning documents for their missions' independent media performance objectives.
In addition, all missions we obtained documentation from had established performance indicators for country-specific projects. USAID officials told us that the establishment of specific independent media performance objectives is left to the discretion of the local USAID mission and that some missions with active independent media development projects or activities may not choose to designate media-related performance objectives based on their relative priorities, or they may view media development as a crosscutting issue or as a tool for accomplishing other specific objectives. See table 3 for a list of the objectives and performance indicators for USAID missions in the countries we reviewed. In the cases we reviewed, State and USAID often selected media indexes, such as the Media Sustainability Index (MSI) and Freedom House's Press Freedom survey, to measure the results of their independent media development efforts. The MSI and the Press Freedom survey assess the freedom of media in a country; however, when used alone as performance indicators, media indexes are of limited utility in measuring the contributions of specific activities, or of combined U.S. efforts, toward developing independent media in particular countries. State and USAID commonly use media indexes to measure the performance of independent media efforts. In cases we reviewed where State had specifically defined performance indicators for its independent media development efforts, Freedom House's Press Freedom survey and MSI were frequently used by the mission for measuring results. In the cases we reviewed, all four State missions that designated performance indicators relied on media indexes to measure the performance of their efforts. For example, the U.S. Mission to Bosnia-Herzegovina designated the MSI as its primary performance indicator for its independent media efforts. USAID missions we reviewed also frequently used the MSI and the Press Freedom survey as measures of performance.
Of six USAID missions that established indicators for their performance goals, three used the media indexes as performance indicators. Some missions, including the USAID Missions to Ukraine and Kyrgyzstan, used the MSI along with other measures they had created to measure the accomplishment of performance objectives. However, the USAID Mission to Croatia used the media indexes alone to measure performance objectives related to independent media development. In addition, the only performance indicators established for the USAID media project in Croatia were the four broad MSI components, including “journalists professional standards improved in Croatia” and “multiple news sources provide citizens with reliable and objective news.” USAID officials told us that the MSI index is generally promoted and used as an independent media development performance indicator in Europe and Eurasia and that it is generally used in coordination with more specific indicators of activities to determine program performance. Media indexes used alone are of limited use for determining the performance of U.S. independent media development programs. Commonly used media indexes—such as the Press Freedom Survey and MSI in particular—cannot pinpoint the effects of U.S. government programs, and are general indicators rather than precise measures. These indexes use reasonably consistent methodologies to measure broad concepts such as press freedom and media sustainability. However, because the indexes focus on broad concepts that are affected by a wide variety of social, political, and economic factors, they have limited utility for purposes of identifying the effects of particular U.S. media development programs. The indexes do provide general measures of trends and allow for some cross-country comparisons. However, IREX has only been collecting data on the MSI for 3 years, which makes it impossible to evaluate longer term trends and establish baselines for efforts that began before 2001. 
Another concern is the 1-year time lag between scoring and publication of the data. Freedom House and IREX officials told us that the Press Freedom survey and MSI were not designed to measure the performance of U.S. media development programs. According to a senior Freedom House official, the Press Freedom survey was initially intended to inform debate and discussion about the state of media development in particular countries, and potentially could be used to prod particular countries to liberalize their media. Freedom House's Press Freedom survey has been used to assess the freedom of the media in more than 100 nations since 1981. The Press Freedom survey evaluates countries' legal, political, and economic environments, scoring between 8 and 12 subcategories. According to IREX officials, the MSI was designed, with the support of USAID, to be used for making prioritized decisions on funding. IREX's Media Sustainability Index has assessed the sustainability of independent media in about 20 countries in Europe and Eurasia since 2001. The MSI measures five objectives—free speech, professional journalism, plurality of news sources, business management, and supporting institutions—each of which includes between 7 and 9 subcategories. Freedom House and IREX officials both stated that use of the indexes for anything other than what they were designed for implies an unwarranted precision in their measures. Some State and USAID officials indicated that they do not think media indexes alone are comprehensive indicators for measuring mission or project performance and supported the development of additional measures in some cases. However, they also told us that it is difficult to develop their own independent media development performance indicators for several reasons.
In addition to funding constraints, agencies noted that there are also difficulties separating media efforts from broader goals and determining when and how results will occur for democracy-related or public diplomacy programs. Some USAID officials in the field noted that USAID officials in Washington, D.C., supported using the MSI as a primary performance indicator and some USAID officials noted they viewed using the MSI as a cost-effective means to provide a common indicator to measure and compare the results of efforts in Europe and Eurasia. In all the cases we reviewed, countries faced changing political conditions or deficiencies in the legal, regulatory, or professional environments, which created challenges for planning and implementing independent media development efforts. In some cases, programmatic factors, such as unsustainable local partner organizations or lack of coordination at overseas missions, affected overall U.S. efforts or specific projects or activities in a country. The following media development challenges represent a sample of those frequently mentioned during our review. A country’s political conditions can impact efforts to plan and implement independent media development projects and activities. In January 2004, USAID surveyed its independent media development efforts, as well as those supported by other donors, and determined that different programmatic approaches are required for five different types of political societies, which USAID classified as: (1) closed, (2) semidemocratic/developing, (3) war-torn, (4) postconflict, and (5) transition. For semidemocratic, postconflict, or transitional countries making progress toward democracy or no longer experiencing conflict, USAID has identified a variety of activities to support the development of an independent media. However, in closed or war-torn societies, USAID determined it can do very little because the environments are unsuitable for outside intervention. 
See table 4 for definitions of political societies and further detail on the appropriate programmatic media strategies identified by USAID. We examined independent media development projects in nine different countries—Bosnia-Herzegovina, Croatia, Egypt, Georgia, Haiti, Indonesia, Kyrgyzstan, Mali, and Ukraine—each experiencing differing domestic political conditions that limit the impact of these projects. In some of the cases we reviewed, changes in domestic conditions or the status of political societies occurred following the onset of independent media development activities, creating further challenges in implementing efforts in these countries. For example, in Haiti—a nation experiencing civil conflict—violent demonstrations and protests prior to the departure of the president prevented some USAID-funded media development projects from continuing because staff were physically unable to get to work. Officials told us that several radio stations suffered extensive damage from looters, and community radio stations reported several cases where police, as well as government officials loyal to the president, tried to use their power to silence independent media voices. After the president's departure, all nonessential USAID staff were ordered to evacuate the country, and the media project was on hold for nearly a month. In countries with deficient legal, regulatory, or professional environments, agencies can face challenges in implementing independent media development projects and activities. All nine of the countries we reviewed faced challenges due to deficiencies in at least one of these areas, which impacted efforts to train the media, build the capacity of the media outlets, and improve the freedom of the press within the country.
In particular, these deficiencies have led to such challenges as limited press freedom due to direct government control over the media industry; changing legal and regulatory frameworks; limited training opportunities; and lack of skilled journalists due to widespread problems in professional and educational systems. Agency officials provided examples of how such deficiencies have impacted their programs: Limited press freedom. Prior to the revolution in Kyrgyzstan, the Kyrgyz government maintained a tight hold on broadcast frequencies, prevented new stations from obtaining frequencies, and canceled frequencies of certain independent outlets. Agency officials said that journalists were afraid to broadcast on certain topics for fear of harassment or prosecution. In Georgia, most television stations are owned by oligarchs, many of whom support the new government. According to embassy officials in Tbilisi, working journalists exercise self-censorship for fear that reports critical of the government would be unpopular with their owners. Changing legal and regulatory frameworks. Although Ukraine's new president stated publicly his support for a free mass media, State officials said Ukraine's legal and regulatory environments still need assistance. Though legislation has been enacted to improve freedom of the press and oversight of the media industry, these changes have not been consistently applied by Ukrainian judges and media outlets. Therefore, journalists can still be pressured by government officials and oligarchs to report information in a certain way, and media outlets' legal status and license to operate remain in question. Limited training opportunities. Since 1993, Mali's constitution has made it relatively easy to obtain radio broadcast licenses for FM frequencies. However, officials noted that there are currently no in-country professional training institutions for broadcast media.
As a result, individuals have to go outside of Mali to receive training, or obtain informal training from their peers and colleagues. Lack of skilled journalists. In Croatia, most journalists have little academic or professional training. Agency officials stated that although independent media is evolving, journalists still report biased news and information, do not check their facts or sources, do not follow up or correct their errors, and skew the focus of articles to accomplish personal agendas. According to USAID's January 2004 media assistance study, USAID has funded a range of activities designed to further promote legal and regulatory reforms, though undemocratic structures, politicians, and slow-to-change traditions have made the creation of enabling laws, policies, and practices difficult or impossible in some cases. Assistance projects and training efforts have been designed to mitigate legal, regulatory, and professional deficiencies, though progress of these programs has been slow. Agency officials from missions in several countries we examined provided examples of approaches to addressing unregulated media environments, including the following: Limited press freedom. In order to limit editorial interference by state bodies, USAID's media project in Kyrgyzstan currently supports local efforts to draft a new broadcasting law, which would include stipulations for the transformation of state television and radio to a public broadcasting system. To dilute the editorial influence of oligarchs who own the vast majority of TV stations in Georgia, USAID's implementing partner in Tbilisi introduced a television rating system, which produced verifiable ratings that made the commercial market far more attractive to advertisers. The increased interest of advertisers in the media market has made nonbusiness-based policies more costly for oligarch owners. Changing legal and regulatory frameworks.
USAID’s media development project in Ukraine has established a Media Law Institute that will provide journalists with an outlet for legal defense and consultations when faced with political pressure. The center also plans to train local lawyers and judges on media law reform, and to publish bulletins about changes in legislation. Limited training opportunities. The USAID Mission to Mali has tried to address the lack of professional media training institutions by supporting a technical training facility, bringing professionals to Mali to conduct training sessions, and sending broadcast and print journalists as well as key members of the government and civil society to an anticorruption ethics training seminar. Lack of skilled journalists. Croatia’s USAID media development project focused on developing the capacity of the national journalist association, including conferences to improve journalists’ professionalism, their capacity for reporting, and their relationships with other sectors of society, such as the police and judiciary. Additionally, University of Zagreb’s journalism school partnered with the U.S. Embassy to participate in academic exchange programs, international visits, and speaker programs. The sustainability of local organizations can impact the overall results of media development efforts or the success of specific projects and activities in a country. Additionally, limited coordination and lack of communication with local recipients at some posts have impacted some projects and activities by causing confusion of responsibilities or duplication of efforts. The success of media development projects and activities can be impacted by the sustainability of local partners. We found that seven of the nine countries we reviewed had cases where local media outlets had difficulty ensuring their financial sustainability as their U.S. funding decreased. 
Sustainability challenges were primarily due to a poor economic environment or lack of sufficient business management training. Specific examples include the following: Poor economic environment. An official from the USAID Mission in Haiti stated that because many independent radio stations are community owned, the stations cannot increase their operating budgets or replace expensive pieces of equipment without first increasing the financial resources available to the entire community. Additionally, the self-sustainability of private media outlets in Bosnia-Herzegovina continues to be a major problem due to widespread crime and corruption and a national unemployment rate of about 40 percent. Lack of business management training. According to one local television station owner in Croatia, a U.S.-sponsored national television network, designed to link several local stations' news programs, is struggling to survive because the network did not develop the advertising revenue and profit-sharing structures necessary to keep it financially sustainable. USAID acknowledged that this may be the case, but they viewed the network project as a success because it had served to provide an alternative, independent news program to the state-controlled TV network during an earlier period of political transition. To respond to these programmatic challenges, some USAID officials offered the following suggestions: Poor economic environment. The USAID Mission to Bosnia-Herzegovina has focused on encouraging local business development strategies, and currently financially supports the survival of only a select number of media outlets.
The USAID Mission in Mali told us that because of the country’s high poverty rate, they conduct workshops for radio stations in order to provide them with small-business concepts that can be used to generate additional outside revenues, like the sale of solar power to provide lighting or the creation of centers to provide the community with computer services and Internet access. Lack of business management training. Since 2002, Georgia’s USAID media project has worked to promote the sustainability of print and broadcast media outlets by improving their business management skills and establishing an independent and credible national system of television audience measurement. As a result of better information on the profile of viewers, TV advertising in Georgia increased from $3 million to $7 million in 2004 and is expected to increase to $13 million by 2006. Various studies have also offered suggestions for addressing the sustainability of media outlets. A working paper by the Netherlands Institute of International Relations on “International Media Assistance” suggested allowing more time during the life of a project to focus on sustainability. Another report published by USAID, Media Assistance: Policy and Programmatic Lessons, suggested that in postconflict societies, only media outlets willing to take concrete and concerted steps toward economic independence should be given technical or financial assistance. According to this study, USAID has implemented several activities that promote the financial independence or sustainability of media outlets, but these activities have achieved only limited success. 
While not as widespread as other programmatic challenges, we found that four of the nine countries we examined were challenged by coordination issues, such as an unclear chain of command and limited communication, which resulted in confusion over the responsibilities of donors and providers of media development, duplication of efforts, or periods of program inactivity. For example, the director of a Croatian media development project worked with three different U.S. donors, with no clear chain of command established. Thus, the director was unsure to whom he should report under certain circumstances, resulting in difficulty in reacting to urgent needs. In another case we reviewed, State and USAID had unknowingly funded different NGOs that were working independently to rebuild the same radio stations that had been destroyed during the recent tsunami in Indonesia, leading to on-the-ground project conflicts. Officials at the USAID Mission to Indonesia told us this duplication of effort resulted from their lack of awareness of a grant awarded by State’s DRL bureau in Washington, D.C., that was similar to the grant USAID awarded. Poorly maintained roads, combined with poor phone and Internet access, contributed to communication and coordination challenges faced by the USAID Mission in Haiti and the community radios it supports; this, in turn, slowed USAID’s training activities, the delivery of equipment, and other activities. USAID officials said they are planning to install Internet and phone lines in rural areas to improve the situation. One example of effective coordination can be found in Ukraine. Ukraine is challenged by a complicated network of donors, providers, and recipients (see fig. 1), multiple ongoing projects, various funding sources, and agencies funding the same organizations and similar activities. For example, four separate organizations, including the U.S.
Embassy (via the Media Development Fund), Internews Network (via a cooperative agreement with the USAID mission), the International Renaissance Foundation, and NED (via its annual grant from State), currently provide U.S.-sponsored funding or programmatic activities to the advocacy and media monitoring organization Telekritika. However, in Kiev, USAID and State officials have worked well together to minimize coordination problems by keeping track of donor awards on a Web site and attending donor coordination meetings on a monthly basis. According to USAID officials, the Web site “Marketplace for Donors” is funded jointly by State (the U.S. Embassy in Kiev, public affairs section) and the International Renaissance Foundation. Media evaluations have made specific suggestions to improve the coordination of donors, providers, and recipients of independent media development programming in order to minimize the confusion of responsibilities and duplication of efforts. An evaluation by the University of Oxford, “Mapping Media Assistance,” suggested donors and providers coordinate the distribution of their limited resources in a systematic and logical manner, based on their areas of specialization. The Netherlands Institute of International Relations working paper on “International Media Assistance” suggested establishing a strategic coordination mechanism, like the European Media Agency for the European Union, that could serve as a clearinghouse and evaluator of all media-related assistance proposals for the targeted countries. To address challenges in coordination, USAID funds regional media conferences and has conducted a limited number of independent media program evaluations, so that participants can share lessons learned; however, these efforts face funding constraints. USAID has funded six independent media development regional conferences in Europe and Eurasia and one multiregional conference over the past 8 years.
These conferences have brought together journalists, media development donors, providers, and civil society organizations to discuss issues in journalism that transcend borders. USAID has also designated the Bureau for Policy and Program Coordination to conduct several assessments of independent media programs in various countries and identify lessons learned and best practices. In addition, USAID bureaus and missions have conducted several different types of studies on independent media efforts, including midterm assessments, final reports, and program evaluations. According to the Policy and Program Coordination bureau director, USAID’s independent media evaluations have created a body of knowledge and lessons learned on subjects ranging from conflict areas to transitional countries. However, USAID media officials noted that the discontinuation of funding for conferences and limited funding levels for evaluations could reduce the amount of collaboration and sharing of lessons learned that officials said is necessary to enhance media development programming efforts. Additionally, several media officials indicated that in some instances insufficient funding for USAID program evaluations has forced media development providers to fund their own evaluations through their project budgets, thus reducing funds available for development activities. Although USAID requires its evaluations to be posted on the Development Experience Clearinghouse to make them accessible to other posts, one senior official said it was unclear to what degree the lessons learned from evaluations are shared or used by missions. For example, one official in Croatia said that program evaluations are shared only within the region due to concerns that other countries’ approaches may not be relevant. We provided a draft of this report to the Secretary of State and the USAID Administrator for their review and comment.
State generally concurred with our report, and USAID offered technical comments that were incorporated, as appropriate. In addition, State indicated that it plans to develop additional performance indicators and promote best practices in the future. The comments provided by State are reprinted in appendix IV, and comments by USAID are reprinted in appendix V. We are sending copies of this report to other interested Members of Congress. We are also sending copies to the Secretary of State and the Administrator of the U.S. Agency for International Development. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4268 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To accomplish our objectives, we reviewed documentation and spoke with officials from the Department of State (State), the U.S. Agency for International Development (USAID), the Broadcasting Board of Governors (BBG), and key U.S. nongovernmental organization (NGO) partners, including the National Endowment for Democracy (NED), the International Research and Exchanges Board (IREX), Internews, The Asia Foundation, the Eurasia Foundation, and the International Center for Journalists. In addition, we reviewed USAID’s guidance for performance measurement. Department of Defense media activities were not included in the scope of our work as its primary focus in the media field is on conducting psychological operations. In addition to audit work performed in the United States, we traveled to and reviewed documentation on U.S.-sponsored independent media development programs in Croatia, Ukraine, and Indonesia.
These countries were primarily selected based on geographic representation; preliminary estimates on funding and years of assistance provided; and the range of programs offered. During travel to Croatia, Ukraine, and Indonesia, we met with State Department and USAID officials; multiple nonprofit, private donor, and multilateral officials; and program recipients to discuss issues of coordination, funding, measuring of program effectiveness, and challenges faced when implementing foreign independent media development programs. We also sent questions to and reviewed select documentation from posts in Bosnia-Herzegovina, Egypt, Georgia, Haiti, Kyrgyzstan, and Mali. In order to determine estimates for agency fiscal year 2004 obligations, we obtained data from State, USAID, the BBG, and select NGOs. Assessments of the reliability of the data yielded mixed results, but provided an overall indication of the minimum level of funding for the agency. USAID’s historic budget obligations from USAID’s Democracy, Conflict, and Humanitarian Assistance bureau proved to be unreliable because (1) USAID historic budget records on media development programs are incomplete after 1996 because agencywide budget codes related to media activities were discontinued at that time; (2) USAID budget records were not finalized for fiscal year 2004; and (3) historic funding codes could not be recoded or configured to accurately reflect the specific activities of missions falling under our definition of independent media development. In addition, although USAID officials indicated that individual missions currently track spending for various program components—including media development—independent media projects can often be defined differently or be intermixed within broader civil society projects; thus, missions may record media funding levels inconsistently.
Given this determination, we instead obtained USAID fiscal year 2004 obligations from NGOs that USAID identified as the main implementers of independent media development projects. In particular, we gathered documentation separately from the International Center for Journalists, Internews, the Eurasia Foundation, The Asia Foundation, and IREX. USAID officials told us that the true figure for USAID fiscal year 2004 obligations is likely significantly higher than our estimate because (1) we were not able to obtain documentation from all NGOs that received independent media development grants from USAID headquarters; (2) we were not able to obtain data on fiscal year 2004 obligations awarded directly by USAID missions to local NGOs; and (3) we may not have captured all budget accounts that funded obligations for fiscal year 2004. We gathered State Department fiscal year obligation data by obtaining documentation from the following bureaus or offices: Democracy, Human Rights, and Labor (DRL), the Office of the Coordinator of U.S. Assistance to Europe and Eurasia (EUR/ACE), Educational and Cultural Affairs (ECA), International Information Programs (IIP), Middle East Partnership Initiative (MEPI), and State’s regional bureaus. We requested the bureaus and offices include 2004 budget obligations that met our definition of media assistance programs and exclude programs funded by the State Department via interagency transfers to USAID or BBG. To assess the reliability of the obligation data, we (1) posed a standard set of questions to State officials, and (2) reviewed the list provided for consistency with our definition of media assistance programs. According to State officials, some variation existed in the techniques used to compile the programs and budget obligations. For example, some bureaus or agencies relied on electronic databases to gather information, while others did not have these systems.
We found the list of programs to be consistent with the media assistance program definition in our request. We determined that the data provided by State were sufficiently reliable to provide an estimate of 2004 budget obligations for media assistance programs. We were not able to specifically determine NED’s fiscal year 2004 obligations from State for independent media development projects because NED receives several broad grants each year for its work to support democratic initiatives. However, we were able to obtain information from NED on the amount in subgrants for media development activities it awarded during fiscal year 2004. We determined fiscal year 2004 obligations data provided by the BBG to be sufficiently reliable following an interview with BBG officials to assess data reliability. The key factors in making the determination were that BBG (1) used one budget account for the program area, and (2) routinely performed checks on the reliability of the database used. To address our objective of examining agency performance measurement for independent media development efforts, we also (1) reviewed available agency, country, and program-level performance documentation for the case study countries; and (2) assessed the principal media development indexes—Freedom House’s Press Freedom survey and the IREX Media Sustainability Index (MSI). Our analysis of the Press Freedom survey and the IREX MSI included interviews with officials at the organizations responsible for the indexes and interviews with State and USAID officials to determine the strengths and limitations of the data. To address the challenges that the United States faces in implementing media development activities and achieving results, we interviewed or requested information from State and USAID officials in Bosnia-Herzegovina, Croatia, Egypt, Georgia, Haiti, Indonesia, Kyrgyzstan, Mali, and Ukraine.
State and USAID officials at all nine missions were asked to list the challenges their mission has dealt with while implementing media development programs and provide specific examples of how each challenge impeded the effectiveness of their program. The officials were also asked to explain the steps their mission took to mitigate these challenges. Although the challenges provided could not be generalized worldwide, we believe that the steps taken to mitigate the challenges, or lessons learned, should be shared globally. Lastly, we reviewed several media development studies published between 2000 and 2005 by State, USAID, the Knight Foundation, University of Oxford, Freedom House, IREX, Foreign Affairs, Netherlands Institute of International Relations, UNESCO, the United Kingdom’s Department for International Development, World Bank Institute Development Studies, and Routledge Group. We did not review these studies for sufficiency of methodology.

Provides major source of funding for media development at the European level as part of its larger program of human rights and democratization. Includes both macroprojects, implemented in partnership with international organizations (like the Organization for Security and Cooperation in Europe, or OSCE) that work with local entities, and microprojects that directly fund local organizations.

Organization for Security and Cooperation in Europe (OSCE). Supports freedom of the press and freedom of information by providing training for journalists and technicians, setting up radio stations, and monitoring freedom of information in the media. OSCE also assists and advises governmental authorities as well as print and electronic media in their endeavor to reform the media sector.
Concentrates on projects addressing issues of democratic media legislation, monitoring violations of media freedom, protecting journalists, establishing self-regulation systems and strong independent professional organizations, and raising the professionalism of journalists and media managers.

United Nations Educational, Scientific and Cultural Organization (UNESCO). Provides training to journalists and technical media staff to strengthen independent media, establishes independent printing plants and print distribution networks, and develops public service broadcasting—including the establishment of a regulatory framework and support for TV productions and co-productions. Promotes global access to information by strengthening the legal and regulatory environment for freedom and pluralism of information; supporting capacity strengthening, networking, and elevation of standards of media at national and local levels; raising awareness on rights to official access to information; and developing communication mechanisms for vulnerable groups.

Supports civil society with direct funding support—often provided in partnership with other international aid donors—to back programs such as information technology access and human rights.

Diana Glod, Melissa Pickworth, Julia A. Roberts, and Joe Carney made key contributions to this report. Martin de Alteriis, Ernie Jackson, Amanda K. Miller, and Valerie J. Caracelli provided technical assistance.

Independent media development led by the Department of State and the U.S. Agency for International Development (USAID) supports the national security goal of developing sustainable democracies around the world. Independent media institutions play a role in supporting commerce, improving public health efforts, reducing corruption, and providing civic education.
According to the Freedom House's Freedom of the Press 2005 survey, despite important gains in some countries, the overall level of press freedom worldwide continued to worsen in 2004. GAO was asked to examine (1) U.S. government funding for independent media development overseas; (2) the extent to which U.S. agencies measure performance toward achieving results; and (3) the challenges the United States faces in achieving results. The Department of State generally concurred with our report and USAID offered technical comments that were incorporated, as appropriate. In addition, State indicated that it plans to develop additional performance indicators and promote best practices in the future. The Department of State and the U.S. Agency for International Development obligated at least $40 million in fiscal year 2004 for the development of independent media, including activities such as journalism and business management training and support for legal and regulatory frameworks. About 60 percent of the fiscal year 2004 USAID and State obligations we identified supported independent media development projects in Europe and Eurasia. However, precise funding levels are difficult to identify due to a lack of agencywide budget codes to track media development obligations, differing definitions of independent media development, and complex funding patterns. State and USAID face challenges in designing performance indicators and accurately measuring and reporting results directly tied to the performance of U.S. independent media efforts. The tools most frequently used by State and USAID as performance indicators--Freedom House's Freedom of the Press survey and the IREX Media Sustainability Index--are useful for determining the status of the media in selected countries but are of limited utility in measuring the specific contributions of U.S.-sponsored programs and activities toward developing independent media in countries when used alone. 
Several country-specific and programmatic challenges can impede the implementation of media development efforts, including a changing political condition, sustainability of local media outlets, and coordination between donors and providers. Specifically, a country's changing political condition or lack of adequate civic and legal institutions can create challenges for a mission to plan, implement, and measure the results of its efforts. The sustainability of program recipients can also impede the overall success of efforts or specific activities at the country level. In addition, when coordination of activities is unstructured or informal, redundancies and confusion of responsibilities can impact project implementation.
A component of DHS, the Coast Guard is a multimission military service that serves as the principal federal agency responsible for maritime safety, security, and environmental stewardship. In addition to being one of the five Armed Services of the United States, the Coast Guard serves as a law enforcement and regulatory agency with broad domestic authorities. In its most recent Posture Statement, the Coast Guard reported having nearly 49,900 full-time positions—about 42,600 military and 7,300 civilians. In addition, the service reported that it has about 8,100 reservists who support the national military strategy or provide additional operational support or surge capacity during times of emergency, such as natural disasters. The Coast Guard also reported that it utilizes the services of approximately 29,000 volunteer auxiliary personnel who conduct a wide array of activities, ranging from search and rescue to boating education. The Coast Guard has responsibilities that fall under two broad mission categories—homeland security and non-homeland security. Within these categories, the Coast Guard’s primary activities are further divided into 11 statutory missions, as shown in table 1. For each of these 11 missions, the Coast Guard has developed performance measures to communicate agency performance and provide information for the budgeting process to Congress, other policymakers, and taxpayers. Each year, the Coast Guard undergoes a process to assess performance and establish performance targets for the subsequent year. In May 2009, the Coast Guard published its most recent performance report, which presents the service’s accomplishments for fiscal year 2008. To help carry out its missions, the Coast Guard has a large-scale acquisition program, called Deepwater, under way to modernize its fleet. 
The Deepwater program now includes projects to build or modernize five classes each of vessels and aircraft, as well as to procure other capabilities such as improved command, control, communications, computer, intelligence, surveillance, and reconnaissance systems. To carry out these acquisitions, the Coast Guard awarded a contract in June 2002 to Integrated Coast Guard Systems (ICGS), a joint venture formed by Lockheed Martin Corporation and Northrop Grumman Ship Systems, to serve as a systems integrator. However, in April 2007, the Coast Guard acknowledged it had relied too heavily on contractors. This reliance, among other concerns, contributed to an inability to control costs. As a result, the Coast Guard initiated several major changes to the acquisition approach to Deepwater, the key one being that the Coast Guard would take over the lead role in systems integration from ICGS. The Coast Guard’s budget request for fiscal year 2010 is $9.73 billion, which is approximately $393 million (or 4.2 percent) more than the service’s enacted budget for fiscal year 2009 (see table 2). These calculations do not include either the supplemental funding of $242.5 million that the Coast Guard reported receiving in fiscal year 2009 or the $240 million provided by the Recovery Act (discussed below). When the supplemental and the Recovery Act funding are taken into account and added to the fiscal year 2009 enacted budget, the calculations reflect a decrease of about 1 percent from fiscal year 2009 to fiscal year 2010. Of the $9.73 billion requested for fiscal year 2010, about $6.6 billion, or approximately 67 percent, is for operating expenses (OE). The OE account is the primary appropriation that finances the Coast Guard’s activities, including operating and maintaining multipurpose vessels, aircraft, and shore units. In comparing the 2010 budget request to the 2009 enacted budget, funding for the OE account represents an increase of $361 million (or about 6 percent). 
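The budget comparisons above are simple percentage calculations, and they can be reproduced directly from the dollar figures cited in the text. The following sketch (variable names are illustrative, not from the report) checks both the roughly 4.2 percent increase over the fiscal year 2009 enacted budget and the roughly 1 percent decrease once the supplemental and Recovery Act funding are added to the enacted base:

```python
# Back-of-the-envelope check of the budget figures cited above.
# All amounts are in millions of dollars, taken from the report's text.

fy2010_request = 9730.0          # $9.73 billion requested for FY2010
increase_over_enacted = 393.0    # stated increase over the FY2009 enacted budget
fy2009_enacted = fy2010_request - increase_over_enacted

# Increase relative to the FY2009 enacted budget (report cites 4.2 percent)
pct_increase = increase_over_enacted / fy2009_enacted * 100

# Add the $242.5 million supplemental and $240 million Recovery Act funding
# to the FY2009 enacted base, as the report describes
fy2009_total = fy2009_enacted + 242.5 + 240.0
pct_change_vs_total = (fy2010_request - fy2009_total) / fy2009_total * 100

print(f"FY2009 enacted budget: ${fy2009_enacted:,.0f} million")
print(f"Increase over FY2009 enacted: {pct_increase:.1f}%")
print(f"Change vs. enacted + supplemental + Recovery Act: {pct_change_vs_total:.1f}%")
```

Run as written, this yields an increase of about 4.2 percent against the enacted budget and a decrease of about 0.9 percent against the fuller fiscal year 2009 total, consistent with the "about 1 percent" decrease cited above.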
The next two largest accounts in the fiscal year 2010 budget request—each with funding at about $1.4 billion—are the acquisition, construction, and improvements account (AC&I) and the retired pay account. Collectively, these two accounts represent about 28 percent of the Coast Guard’s total budget request for fiscal year 2010. In terms of percentage increases in comparing the 2010 budget request to the 2009 enacted budget, the retired pay account reflects the highest percentage increase (about 10 percent) of all accounts. According to the Coast Guard, some of the key initiatives for fiscal year 2010 include increasing the number of marine inspectors and investigative officers, and supporting financial management improvements, among others. Furthermore, as a result of the emergence of the U.S. Global Positioning System (a space-based system of satellites) as an aid to navigation, the long-range radio-navigation system known as LORAN-C (a terrestrial-based system operated by the Coast Guard) is expected to be terminated in fiscal year 2010. This termination, according to the Coast Guard, is projected to result in a savings of $36 million in fiscal year 2010 and additional savings of $154 million over the following 4 years. Although the Coast Guard receives funding by appropriation account rather than by individual missions, the Coast Guard provides an estimated comparison of homeland security versus non-homeland security funding as part of its annual budget request. Based on these estimates, the Coast Guard’s fiscal year 2010 budget request for homeland security missions represents approximately 36 percent of the service’s overall budget, with the non-homeland security funding representing approximately 64 percent. However, as a multimission agency, the Coast Guard notes that it may conduct multiple mission activities simultaneously. 
For example, a multimission asset conducting a security escort is also monitoring safety within the harbor and could potentially be diverted to conduct a search and rescue case. As a result, it is difficult to accurately detail the level of resources dedicated to each mission. Figure 1 shows the Coast Guard’s estimated funding levels for fiscal year 2010 by each statutory mission. In addition to the Coast Guard’s enacted budget for fiscal year 2009, the Coast Guard has received $240 million of funding under the Recovery Act. According to the Coast Guard, the service’s Recovery Act funds are to be allocated as follows:

$142 million is to be used to fund bridge alteration projects in four states—the Mobile Bridge in Hurricane, Alabama; the EJ&E Bridge in Devine, Illinois; the Burlington Bridge in Burlington, Iowa; and the Galveston Causeway Railroad Bridge in Galveston, Texas.

$88 million in Recovery Act funds is to support shore infrastructure projects—construction of personnel housing, boat moorings, and other improvements—in Alaska, Delaware, North Carolina, Oregon, Virginia, and Washington.

$10 million is to help upgrade or replace worn or obsolete components on the Coast Guard’s fleet of 12 High Endurance Cutters. The 40-plus-year-old cutters benefiting from the Recovery Act-funded projects are based in Kodiak, Alaska; Alameda and San Diego, California; Honolulu, Hawaii; Charleston, South Carolina; and Seattle, Washington.

While the Coast Guard’s budget has increased considerably since 2003, the long-term budget outlook for the agency is uncertain. From fiscal year 2003 through fiscal year 2009, the Coast Guard’s budget increased an average of 5.5 percent per year. However, the administration’s current budget projections indicate that the DHS annual budget is expected to remain constant or decrease over the next 10 years. It is important to note that these budget projections are nominal figures, which are not adjusted or normalized for inflation.
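As a quick consistency check, the three Recovery Act allocations listed above sum to the $240 million total. The following illustrative tally (category labels are paraphrased from the report's text, not official account names) makes the arithmetic explicit:

```python
# Recovery Act allocations reported by the Coast Guard, in millions of dollars.
# Category labels are paraphrased from the report's text for illustration.
recovery_act_allocations = {
    "bridge alteration projects (AL, IL, IA, TX)": 142,
    "shore infrastructure projects (AK, DE, NC, OR, VA, WA)": 88,
    "High Endurance Cutter component upgrades": 10,
}

total = sum(recovery_act_allocations.values())
print(f"Total Recovery Act funding: ${total} million")  # matches the $240 million cited above
```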
Thus, if inflationary pressures arise in future years, budgetary resources available to DHS could be further strained. Given the uncertainty of future budgets, it remains important for the Coast Guard to ensure that limited resources are utilized most effectively to successfully manage existing challenges and emerging needs. For example, as we reported in March 2008, affordability of the Deepwater program has been an ongoing concern for many years, and will continue to be a major challenge to the Coast Guard given the other demands upon the agency for both capital and operations spending. The increasing demand for Coast Guard resources in the arctic region also presents an emerging challenge that will need to be balanced against competing priorities. For example, two of the Coast Guard’s three polar icebreakers are more than 30 years old, and in 2008 the Coast Guard estimated that it could cost between $800 million and $925 million per ship to procure new replacement ships. Such needs could pose challenges to the Coast Guard in an era of increased budget constraints. Each year, the Coast Guard conducts a process of performance evaluation, improvement planning, and target setting for the upcoming year. According to the Coast Guard, this process helps ensure that the performance measures and associated targets adequately represent desired Coast Guard mission outcomes, are reflective of key drivers and trends, and meet applicable standards for federal performance accounting. In addition, as part of a larger DHS effort, the Coast Guard conducted a more comprehensive evaluation of its performance measures in fiscal year 2008. This evaluation process included input on potential improvements to the Coast Guard’s performance measures from the DHS Office of Program Analysis and Evaluation and us.
Consequently, the Coast Guard initiated a number of changes to its performance reporting for fiscal year 2008 to better capture the breadth of key mission activities and the results achieved. Our review of the Coast Guard’s performance reporting for fiscal year 2008 indicates that the Coast Guard revised or broadened several existing measures. As a result, the Coast Guard reported on a total of 21 primary performance measures for fiscal year 2008—3 homeland security mission measures and 18 non-homeland security mission measures. This represents a substantial change from previous years, in which the Coast Guard reported on a single performance measure for each of the service’s 11 statutory missions (see app. I for a list of the primary performance measures and reported performance results for fiscal years 2004 through 2008). One of the principal changes involved the disaggregation of existing measures into several distinct component measures. For example, in prior years, the marine safety mission was assessed using one primary measure—the 5-year average annual mariner, passenger, and recreational boating deaths and injuries. However, the Coast Guard reported on six different measures for the marine safety mission in fiscal year 2008—annual deaths and injuries for each of three separate categories of individuals (commercial mariners, commercial passengers, and recreational boaters) as well as 5-year averages of each of these three categories. As indicated in table 3, the Coast Guard reported meeting 15 of its 21 performance targets in fiscal year 2008. Also, table 3 shows that the Coast Guard reported meeting all performance targets for 5 of the 11 statutory missions—ports, waterways, and coastal security; drug interdiction; marine environmental protection; other law enforcement; and ice operations.
Regarding the drug interdiction mission, for example, the fiscal year goal was to achieve a removal rate of at least 28 percent for cocaine being shipped to the United States via non-commercial means. The Coast Guard reported achieving a removal rate of 34 percent. For another 3 of the 11 statutory missions—aids to navigation, search and rescue, and marine safety—the Coast Guard reported partially meeting performance targets. For each of these missions, the Coast Guard did not meet at least one performance target among the suite of different measures used to assess mission performance. For example, regarding the search and rescue mission, which has two performance goals, the Coast Guard reported that one goal was met (saving at least 76 percent of people from imminent danger in the maritime environment), but the other goal (saving at least 87 percent of mariners in imminent danger) was narrowly missed, as reflected by a success rate of about 84 percent. For the other 3 statutory missions—defense readiness, migrant interdiction, and living marine resources—the Coast Guard reported that it did not meet fiscal year 2008 performance targets. However, for these missions, the Coast Guard reported falling substantially short of its performance target for only one mission—defense readiness. Although performance for this mission rose slightly—from 51 percent in fiscal year 2007 to 56 percent in fiscal year 2008—the Coast Guard’s goal was to meet designated combat readiness levels 100 percent of the time. However, the Coast Guard remains optimistic that the relevant systems, personnel, and training issues—which are being addressed in part by the Deepwater acquisition program—will result in enhanced capability for all missions, including defense readiness. Yet, the Coast Guard further noted in its annual performance report that it is reviewing the defense readiness metrics to determine what potential changes, if any, need to be made.
In comparison, the Coast Guard met targets for 6 of its 11 statutory missions in fiscal year 2007. The overall reduction in the number of missions meeting performance targets in fiscal year 2008 largely reflects the Coast Guard’s failure to meet its migrant interdiction target, which may be attributable, in part, to the new measure used for that mission in fiscal year 2008. Regarding the three statutory missions whose performance targets were not met, the Coast Guard’s reported performance generally remained steady in fiscal year 2008 compared with previous years, and the Coast Guard was relatively close to meeting its performance targets. For example, for the migrant interdiction and living marine resources missions, the Coast Guard reported achieving over 96 and 98 percent, respectively, of the performance targets. The Coast Guard faces a number of different management challenges that we have identified in prior work. Highlighted below are four such challenges that the Coast Guard faces as it proceeds with efforts to modernize its organization, address shifting workforce needs, manage the Deepwater acquisition program, and mitigate operational issues caused by delays in the Deepwater program. The Coast Guard is currently undertaking a major effort—referred to as the modernization program—which is intended to improve mission execution by updating the service’s command structure, support systems, and business practices. The modernization program is specifically focused on transforming or realigning the service’s command structure from a geographically bifurcated structure into a functionally integrated one, as well as updating mission support systems, such as maintenance, logistics, financial management, human resources, acquisitions, and information technology. 
The Coast Guard has several efforts under way or planned for monitoring the progress of the modernization program and identifying needed improvements. For example, the Coast Guard has established timelines that identify the sequencing and target dates for key actions related to the modernization program consistent with project management principles. Our prior work has shown that such action-oriented goals along with associated timelines and milestones are critical to successful organizational transformation efforts and are necessary to track an organization’s progress toward its goals. However, as we reported in June 2009, the Coast Guard’s efforts to develop applicable performance measures to evaluate results of the modernization program remain in the early stages. For example, the Coast Guard has begun to identify key internal activities and outputs required for mission execution within the realigned organizational structure. This effort, expected to be completed in summer 2009, is intended as a preliminary step before identifying associated business metrics that can be used to evaluate how the modernization program has impacted the delivery of core services and products. However, Coast Guard officials were still in the process of developing a specific time frame for the estimated completion of this next step. As outlined in the Government Performance and Results Act of 1993 and Standards for Internal Control in the Federal Government, performance measures are important to reinforce the connection between long-term strategic goals and the day-to-day activities of management and staff. In April 2008, to evaluate aspects of the modernization program and identify potential improvements, the Coast Guard engaged the National Academy of Public Administration (NAPA) to conduct a third-party, independent review. After completing its review, NAPA provided a report to the Coast Guard in April 2009. 
The report recognized that the Coast Guard’s planned organizational realignment “makes logical sense” and that the service’s leadership “is collectively engaged” to improve mission execution and support-related business processes. NAPA cautioned, however, that the Coast Guard remains in the early stages of its organizational transformation. To help mitigate potential implementation risks and facilitate a successful modernization process, NAPA recommended, among other steps, that the Coast Guard develop a clear quantifiable business case for modernization, measurement tools, and a process of metrics assessment to track modernization progress and the effects on mission execution. Similar to GAO’s findings, NAPA concluded that one of the key challenges faced by the Coast Guard is the development of adequate measures to assess the progress and outcomes of the modernization program. NAPA noted that such measures are important to ensure that the impacts of modernization are aligned with intended objectives and that they provide an opportunity to “course-correct” as necessary. NAPA further noted that the development of appropriate measurement tools will help to provide quantifiable support for the modernization business case and facilitate stakeholder buy-in. After receiving NAPA’s report, the Coast Guard established a new organizational entity—the Coast Guard Enterprise Strategy, Management and Doctrine Oversight Directorate. Among other functions, this directorate is to be responsible for strategic analysis, performance management, and ongoing coordination of change initiatives within the modernization effort and beyond. 
Congress has noted, and our past reviews support, that the Coast Guard faces significant challenges in assessing personnel needs and providing a workforce to meet the increased tempo of maritime security missions as well as to conduct traditional marine safety missions such as search and rescue, aids to navigation, vessel safety, and domestic ice breaking. Workforce planning challenges are further exacerbated by the increasingly complex and technologically advanced job performance requirements of the Coast Guard’s missions. Workforce planning challenges include managing the assignments of military personnel who are subject to being rotated among billets and multiple missions. As we have previously reported, rotation policies can affect, for example, the Coast Guard’s ability to develop professional expertise in its personnel and to retain qualified personnel as they progress in their careers. In October 2008, the Coast Guard received congressional direction to develop a workforce plan that would identify the staffing levels necessary for active duty and reserve military members, as well as for civilian employees, to carry out all Coast Guard missions. The workforce plan is to include (1) a gap analysis of the mission areas that continue to need resources and the type of personnel necessary to address those needs; (2) a strategy, including funding, milestones, and a timeline for addressing personnel gaps for each category of employee; (3) specific strategies for recruiting individuals for hard-to-fill positions; and (4) any additional authorities and resources necessary to address staffing requirements. In response, the Coast Guard plans to provide Congress with a workforce plan this summer. As part of our ongoing work for the House Transportation and Infrastructure Committee, we plan to review the Coast Guard’s workforce plan. 
The scope of our work includes assessing whether the Coast Guard’s workforce plan comports with the parameters set out by DHS guidance and contains the elements that we previously reported as being essential for effective workforce plans. Our scope will also include assessing the Coast Guard’s related workforce initiatives, such as the Sector Staffing Model and the Officer Specialty Management System. As an example of its workforce planning challenges, the Coast Guard cites continued difficulties in hiring and retaining qualified acquisition personnel—challenges that pose a risk to the successful execution of the service’s acquisition programs. According to Coast Guard human capital officials, the service has funding for 855 acquisition-program personnel (military and civilian personnel) but has filled 717 of these positions, leaving 16 percent of the positions unfilled, as of April 2009. The Coast Guard has identified some of these unfilled positions as core to the acquisition workforce, such as contracting officers and specialists, program management support staff, and engineering and technical specialists. In addition, the Coast Guard has begun to address several workforce planning challenges raised by Congress related to its marine safety mission. In November 2008, the Coast Guard published the U.S. Coast Guard Marine Safety Performance Plan FY2009-2014, which is designed to reduce maritime casualties, facilitate commerce, improve program processes and management, and improve human resource capabilities. The Coast Guard recognized that marine safety inspectors and investigators need increased competency to fulfill this mission. The plan sets out specific objectives, goals, and courses of action to improve this competency by building capacity of inspectors and investigators, adding civilian positions, creating centers of expertise specific to marine safety, and expanding opportunities for training in marine safety. 
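The 16 percent vacancy figure above follows directly from the reported staffing counts. A minimal arithmetic sketch (illustrative only, not a Coast Guard tool; the counts are those cited by Coast Guard human capital officials as of April 2009):

```python
# Reproduces the unfilled-position rate cited in the testimony
# from the reported acquisition-workforce staffing counts.
funded_positions = 855   # funded acquisition-program positions (military and civilian)
filled_positions = 717   # positions reported filled as of April 2009

unfilled = funded_positions - filled_positions
unfilled_pct = 100 * unfilled / funded_positions
print(f"{unfilled} unfilled positions ({unfilled_pct:.0f} percent)")
# prints "138 unfilled positions (16 percent)"
```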
As noted, the challenge for the Coast Guard is to successfully implement this plan, along with the others we have described above. In addition to workforce planning challenges, the Coast Guard faces other acquisition-related challenges in managing the Deepwater program. The Coast Guard has taken steps to become the systems integrator for the Deepwater program and, as such, is responsible for planning, organizing, and integrating the individual assets into a system-of-systems to meet the service’s requirements. First, the Coast Guard has reduced the scope of work performed by ICGS and has assigned those functions to Coast Guard stakeholders. For example, in March 2009, the Coast Guard issued a task order to ICGS limited to tasks such as data management and quality assurance for assets currently under contract with ICGS. The Coast Guard has no plans to award additional orders to ICGS for systems integrator functions when this task order expires in February 2011. Second, as part of its system integration responsibilities, the Coast Guard has initiated a fundamental reassessment of the capabilities, number, and mix of assets it needs to fulfill its Deepwater missions by undertaking a “fleet mix analysis.” The goals of this study include validating mission performance requirements and revisiting the number and mix of all assets that are part of the Deepwater program. According to the Coast Guard, it hopes to complete this study later this summer. Third, at the individual Deepwater asset level, the Coast Guard has improved and begun to apply the disciplined management process found in its Major Systems Acquisition Manual, which requires documentation and approval of acquisition decisions at key points in a program’s life-cycle by designated officials at high levels. However, as we reported in April 2009, the Coast Guard did not meet its goal of complete adherence to this process for all Deepwater assets by the second quarter of fiscal year 2009. 
For example, key acquisition management activities—such as operational requirements documents and test plans—are not in place for assets with contracts recently awarded or in production, placing the Coast Guard at risk of cost overruns or schedule slippages. In the meantime, as we reported in April 2009, the Coast Guard continues with production of certain assets and award of new contracts in light of what it views as pressing operational needs. Since the establishment of the $24.2 billion baseline estimate for the Deepwater program in 2007, the anticipated cost, schedules, and capabilities of many of the Deepwater assets have changed, in part because of the Coast Guard’s increased insight into what it is buying. Coast Guard officials stated that the original baseline was intended to establish cost, schedule, and operational requirements as a whole, which were then allocated to the major assets comprising the Deepwater program. As a result, the baseline figure did not reflect a traditional cost estimate, which generally assesses costs at the asset level, but rather the overall anticipated costs as determined by the contractor. However, as the Coast Guard has assumed greater responsibility for management of the Deepwater program, it has begun to improve its understanding of costs by developing its own cost baselines for individual assets using traditional cost estimating procedures and assumptions. As a result of these revised baselines, the Coast Guard has determined that some of the assets it is procuring may cost more than anticipated. As we reported in April 2009, information showed that the total cost of the program may grow by $2.1 billion. As more baselines for other assets are approved by DHS, further cost growth may become apparent. These cost increases present the Coast Guard with additional challenges involving potential tradeoffs associated with quantity or capability reductions for Deepwater assets. 
In addition, our April 2009 testimony noted that while the Coast Guard plans to update its annual budget requests with asset-based cost information, the current structure of its budget submission to Congress does not include certain details at the asset level, such as estimates of total costs and total numbers to be procured. In our previous reports on the Deepwater program, we have made a number of recommendations to improve the Coast Guard’s management of the program. The Coast Guard has implemented or is in the process of implementing these recommendations. Other management challenges associated with the Deepwater program have operational or mission performance implications for the Coast Guard. Our prior reports and testimonies have identified problems with management and oversight of the Deepwater program that have led to delivery delays and other operational challenges for certain assets—particularly (1) patrol boats and their anticipated replacements, the Fast Response Cutters, and (2) the National Security Cutters. The Coast Guard is working to overcome these issues, as discussed below. As we reported in June 2008, under the original (2002) Deepwater implementation plan, all 49 of the Coast Guard’s 110-foot patrol boats were to be converted into 123-foot patrol boats with increased capabilities as a bridging strategy until their replacement vessel (the Fast Response Cutter) became operational. Conversion of the first eight 110-foot patrol boats proved unsuccessful, however, and effective November 2006, the Coast Guard decided to remove these vessels from service and accelerate the design and delivery of the replacement Fast Response Cutters. The removal from service of the eight converted patrol boats in 2006 created operational challenges by reducing potential patrol boat availability by 20,000 annual operational hours. 
For example, fewer patrol boats available on the water may affect the level of deterrence provided as part of homeland security missions and reduce the Coast Guard’s ability to surge during periods of high demand, such as may occur during missions to interdict illegal drugs and undocumented migrants. To mitigate the loss of these patrol boats and their associated operational hours in the near term, the Coast Guard implemented a number of strategies beginning in fiscal year 2007. For example, the Coast Guard began using the crews from the eight patrol boats removed from service to augment the crews of eight other patrol boats, thereby providing two crews that can alternate time operating each of the eight patrol boats (i.e., double-crewing). According to Coast Guard officials, additional strategies employed by the Coast Guard that are still in use include increasing the operational hours of 87-foot patrol boats and acquiring four new 87-foot patrol boats, among others. To help fill the longer-term patrol boat operational gap, Coast Guard officials are pursuing the acquisition of a commercially available Fast Response Cutter. The first of these cutters is scheduled to be delivered in early fiscal year 2011, and the Coast Guard intends to acquire a total of 12 by early fiscal year 2013. While the contract is for the design and production of up to 34 cutters, the Coast Guard plans to assess the capabilities of the first 12 Fast Response Cutters before exercising options for additional cutters. Regarding National Security Cutters, the first vessel (National Security Cutter USCGC Bertholf) was initially projected for delivery in 2006, but slipped to August 2007 after design changes made following the terrorist attacks of September 11, 2001, and was again delayed until May 2008 because of damage to the shipyard caused by Hurricane Katrina. 
Based on the results of our ongoing review, the USCGC Bertholf will likely be 1 year behind schedule when it is certified as fully operational, scheduled for the fourth quarter of fiscal year 2010. Further, the eighth and final National Security Cutter was to be fully operational in 2016 but is currently projected to be fully operational by the fourth quarter of calendar year 2018. The Coast Guard has not yet acquired the unmanned aircraft and new small boats that are to support the National Security Cutters. The Coast Guard plans to draft operational specifications for the unmanned aircraft in 2010, and to acquire new small boats that are expected to be deployed with the first National Security Cutter by the end of calendar year 2010. After the unmanned aircraft is selected, the Coast Guard must contract for the acquisition and production of the unmanned aircraft, accept delivery of it, and test its capabilities before deploying it with the National Security Cutter—activities that can take several years. Delays in the delivery of the National Security Cutters and the associated support assets are expected to lead to a projected loss of thousands of anticipated cutter operational days for conducting missions through 2017, and may prevent the Coast Guard from employing the full capabilities of the National Security Cutters and the support assets for several years. Given the enhanced capabilities that the Coast Guard believes the National Security Cutters have over existing assets, a loss in operational days could negatively affect the Coast Guard’s ability to more effectively conduct missions, such as enforcement of domestic fishing laws, interdiction of illegal drugs and undocumented migrants, and participation in Department of Defense operations. 
To address these potential operational gaps, the Coast Guard has decided to continue to rely on its aging fleet of High Endurance Cutters and to use existing aircraft and small boats to support the National Security Cutters. However, because the High Endurance Cutters are increasingly unreliable, the Coast Guard plans to perform a series of upgrades and maintenance procedures on selected vessels. Before this work begins, the Coast Guard plans to conduct an analysis on the condition of the High Endurance Cutters and complete a decommissioning schedule. As a result, work on the first selected High Endurance Cutter is not scheduled for completion until 2016. Until the Coast Guard has acquired new unmanned aircraft and small boats, the Coast Guard plans to support the National Security Cutters with the small boats and manned aircraft it currently uses to support the High Endurance Cutter. We will continue to assess this issue as part of our ongoing work and plan to issue a report on the results later this summer. Madam Chair and Members of the Subcommittee, this completes my prepared statement. I will be happy to respond to any questions that you or other Members of the Subcommittee may have. For information about this statement, please contact Stephen L. Caldwell, Director, Homeland Security and Justice Issues, at (202) 512-9610, or [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this testimony include Danny Burton, Christopher Conrad, Katherine Davis, Christoph Hoashi-Erhardt, Paul Hobart, Dawn Hoff, Lori Kmetz, Ryan Lambert, David Lutter, Brian Schwartz, Debbie Sebastian, and Ellen Wolfe. This appendix provides a detailed list of performance results for the Coast Guard’s 11 statutory missions for fiscal years 2004 through 2008 (see table 4). 
Coast Guard: Observations on the Genesis and Progress of the Service’s Modernization Program. GAO-09-530R. Washington, D.C.: June 24, 2009.
Coast Guard: Administrative Law Judge Program Contains Elements Designed to Foster Judges’ Independence and Mariner Protections Assessed Are Being Followed. GAO-09-489. Washington, D.C.: June 12, 2009.
Coast Guard: Update on Deepwater Program Management, Cost, and Acquisition Workforce. GAO-09-620T. Washington, D.C.: April 22, 2009.
Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program. GAO-09-462T. Washington, D.C.: March 24, 2009.
Maritime Security: Vessel Tracking Systems Provide Key Information, but the Need for Duplicate Data Should Be Reviewed. GAO-09-337. Washington, D.C.: March 17, 2009.
Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008.
Coast Guard: Strategies for Mitigating the Loss of Patrol Boats Are Achieving Results in the Near Term, but They Come at a Cost and Longer Term Sustainability Is Unknown. GAO-08-660. Washington, D.C.: June 23, 2008.
Status of Selected Aspects of the Coast Guard’s Deepwater Program. GAO-08-270R. Washington, D.C.: March 11, 2008.
Coast Guard: Observations on the Fiscal Year 2009 Budget, Recent Performance, and Related Challenges. GAO-08-494T. Washington, D.C.: March 6, 2008.
Coast Guard: Deepwater Program Management Initiatives and Key Homeland Security Missions. GAO-08-531T. Washington, D.C.: March 5, 2008.
Maritime Security: Coast Guard Inspections Identify and Correct Facility Deficiencies, but More Analysis Needed of Program’s Staffing, Practices, and Data. GAO-08-12. Washington, D.C.: February 14, 2008.
Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: December 10, 2007.
Coast Guard: Challenges Affecting Deepwater Asset Deployment and Management and Efforts to Address Them. GAO-07-874. Washington, D.C.: June 18, 2007.
Coast Guard: Observations on the Fiscal Year 2008 Budget, Performance, Reorganization, and Related Challenges. GAO-07-489T. Washington, D.C.: April 18, 2007.
Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007.
Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T. Washington, D.C.: March 8, 2007.
Maritime Security: Public Safety Consequences of a Terrorist Attack on a Tanker Carrying Liquefied Natural Gas Need Clarification. GAO-07-316. Washington, D.C.: February 22, 2007.
Coast Guard: Preliminary Observations on Deepwater Program Assets and Management Challenges. GAO-07-446T. Washington, D.C.: February 15, 2007.
Coast Guard: Coast Guard Efforts to Improve Management and Address Operational Challenges in the Deepwater Program. GAO-07-460T. Washington, D.C.: February 14, 2007.
Homeland Security: Observations on the Department of Homeland Security’s Acquisition Organization and on the Coast Guard’s Deepwater Program. GAO-07-453T. Washington, D.C.: February 8, 2007.
Coast Guard: Condition of Some Aids-to-Navigation and Domestic Icebreaking Vessels Has Declined; Effect on Mission Performance Appears Mixed. GAO-06-979. Washington, D.C.: September 22, 2006.
Coast Guard: Non-Homeland Security Performance Measures Are Generally Sound, but Opportunities for Improvement Exist. GAO-06-816. Washington, D.C.: August 16, 2006.
Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. Washington, D.C.: July 31, 2006.
Maritime Security: Information-Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006.
United States Coast Guard: Improvements Needed in Management and Oversight of Rescue System Acquisition. GAO-06-623. Washington, D.C.: May 31, 2006.
Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006.
Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005.
Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain. GAO-05-161. Washington, D.C.: January 31, 2005.
Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: September 30, 2004.
Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004.
Coast Guard: Relationship between Resources Used and Results Achieved Needs to Be Clearer. GAO-04-432. Washington, D.C.: March 22, 2004.
Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004.
Coast Guard: Comprehensive Blueprint Needed to Balance and Monitor Resource Use and Measure Performance for All Missions. GAO-03-544T. Washington, D.C.: March 12, 2003.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The U.S. 
Coast Guard, a component of the Department of Homeland Security (DHS), conducts 11 statutory missions that range from marine safety to defense readiness. To enhance mission performance, the Coast Guard is implementing a modernization program to update its command structure, support systems, and business practices, while continuing the Deepwater program--the acquisition program to replace or upgrade its fleet of vessels and aircraft. This testimony discusses the Coast Guard's (1) fiscal year 2010 budget, (2) mission performance in fiscal year 2008, the most recent year for which statistics are available; and (3) challenges in managing its modernization and acquisition programs and workforce planning. This testimony is based on GAO products issued in 2009 (including GAO-09-530R and GAO-09-620T) and other GAO products issued over the past 11 years--with selected updates in June 2009--and ongoing GAO work regarding the Coast Guard's newest vessel, the National Security Cutter. Also, GAO analyzed budget and mission-performance documents and interviewed Coast Guard officials. The Coast Guard's fiscal year 2010 budget request totals $9.7 billion, an increase of 4.2 percent over its fiscal year 2009 enacted budget. Of the total requested, about $6.6 billion (or 67 percent) is for operating expenses--the primary appropriation account that finances Coast Guard activities, including operating and maintaining multipurpose vessels, aircraft, and shore units. This account, in comparing the 2010 budget request to the 2009 enacted budget, reflects an increase of $361 million (about 6 percent). The next two largest accounts in the 2010 budget request, at about $1.4 billion each, are (1) acquisition, construction, and improvements and (2) retired pay--with each representing about 14 percent of the Coast Guard's total request. 
The retired pay account--with an increase of about $125 million in the 2010 budget request compared to the 2009 enacted budget--is second only to the operating expenses account in absolute dollar increase, but retired pay reflects the highest percentage increase (about 10 percent) of all accounts. Regarding performance of its 11 statutory missions in fiscal year 2008, the Coast Guard reported that it fully met goals for 5 missions, partially met goals for 3 missions, and did not meet goals for 3 missions. One of the fully met goals involved drug interdiction. Specifically, for cocaine being shipped to the United States via non-commercial means, the Coast Guard reported achieving a removal rate of about 34 percent compared to the goal of at least 28 percent. Search and rescue was a mission with partially met goals. The Coast Guard reported that it met one goal (saving at least 76 percent of people from imminent danger in the maritime environment) but narrowly missed a related goal (saving at least 87 percent of mariners in imminent danger) by achieving a success rate of about 84 percent. For missions with unmet goals, the Coast Guard reported falling substantially short of performance targets for only one mission--defense readiness. The Coast Guard reported meeting designated combat readiness levels 56 percent of the time compared to the goal of 100 percent. The Coast Guard continues to face several management challenges. For example, GAO reported in June 2009 that although the Coast Guard has taken steps to monitor the progress of the modernization program, development of performance measures remains in the early stages with no time frame specified for completion. Also, as GAO reported in April 2009, although the Coast Guard has assumed the lead role for managing the Deepwater acquisition program, it has not always adhered to procurement processes, and its budget submissions to Congress do not include detailed cost estimates. 
GAO also reported that the Coast Guard faces challenges in workforce planning, including difficulties in hiring and retaining qualified acquisition personnel. Further, GAO's ongoing work has noted that delays associated with the Coast Guard's newest vessel, the National Security Cutter, are projected to result in the loss of thousands of cutter operational days for conducting missions through 2017. The Coast Guard is working to manage this operational challenge using various mitigation strategies.
Congress passed the Employee Retirement Income Security Act of 1974 (ERISA) to protect the interests of participants and beneficiaries of private sector employee benefit plans. Before the enactment of ERISA, few rules governed the funding of defined benefit pension plans, and participants had no guarantee that they would receive promised benefits. ERISA established PBGC to insure private sector plan participants’ benefits and to encourage the continuation and maintenance of private sector defined benefit pension plans by providing timely and uninterrupted payment of pension benefits. PBGC is a wholly owned government corporation—that is, the federal government does not share ownership interests with nonfederal entities, and PBGC is subject to requirements under the Government Corporation Control Act of 1945, as amended, such as annual budgets, audits, and management reports. According to public administration experts, a government corporation is appropriate for the administration of government programs that are predominately of a business nature, produce revenue and are potentially self-sustaining, involve a large number of business-type transactions with the public, and require greater budget flexibility than a government department or agency. The United States government is not liable for any obligation or liability incurred by PBGC. The corporation is funded through insurance premiums from employers that sponsor insured pension plans, as well as assets from terminated pension plans and investment income. PBGC insures certain private sector defined benefit plans through its single-employer and multiemployer insurance programs. Through its single-employer insurance program, PBGC paid nearly $4.1 billion in benefits to 622,000 participants and beneficiaries across the United States in fiscal year 2006 (see fig. 1). The geographic breakdown of PBGC-insured participants largely matches the overall population. 
Appendix II includes information on PBGC’s single-employer plans by each U.S. state and territory, as shown in figure 1. PBGC is governed by a board of directors that consists of the Secretaries of the Treasury, Labor, and Commerce, with the Secretary of Labor serving as chair of the board. Prior to the passage of the Pension Protection Act of 2006, ERISA provided the Secretary of Labor with responsibility for administering PBGC’s operations, personnel, and budget. The Secretary has historically delegated the responsibility for administering PBGC to an executive director. The Pension Protection Act replaced the chair of the board as PBGC’s administrator with a Senate-confirmed director. The corporation is also aided by a seven-member Advisory Committee appointed by the President to represent the interests of labor, employees, and the general public. This committee has an advisory role, but has no statutory authority to set PBGC policy or conduct formal oversight. PBGC also has an Office of Inspector General that reports to the board through the chair. With 22 staff, the Office of Inspector General generally conducts audits, inspections, and investigations of PBGC’s programs and operations in order to promote program administration effectiveness and deter waste, fraud, and abuse of PBGC resources. PBGC’s board has taken steps to improve its governance structure by revising the corporation’s bylaws. PBGC also contracted with a consulting firm to assist the board in its review of alternative corporate governance structures. However, the board consists of three cabinet secretaries, a fact that limits its ability to provide policy direction and oversight. PBGC may also face additional challenges as the board members, their representatives, and director will all likely change with the upcoming presidential transition, thus limiting the corporation’s institutional knowledge.
PBGC has taken steps to improve its policy direction and oversight through the revision of its bylaws. In our July 2007 report, we recommended PBGC’s board of directors establish formal guidelines that articulate the authorities of the board, the Department of Labor, other board members, and their respective representatives. As part of its May 2008 bylaw revision, the board of directors more clearly defined the roles and responsibilities of its members, representatives, and director. For example, the new bylaws state that the board is responsible for establishing and overseeing the policies of the corporation. The new bylaws explicitly outline the board’s responsibilities, which include approval of policy matters significantly affecting the pension insurance program or its stakeholders; approval of the corporation’s investment policy; and review of certain management and Inspector General reports. In addition, the new bylaws explicitly define the role and responsibilities of the director and the corporation’s senior officer positions. See appendix III to view PBGC’s new bylaws. Our July 2007 report also asked Congress to consider restructuring the board of directors to appoint additional members of diverse backgrounds who possess knowledge and expertise useful to PBGC’s responsibilities and can provide the attention needed for strong corporate oversight. In response to these findings, PBGC contracted with a consulting firm to review governance models and provide a background report to assist the board in its review of alternative corporate governance structures. The consulting firm’s final report describes the advantages and disadvantages of the corporate board structures and governance practices of other government corporations and select private sector companies, and concludes that there are several viable alternatives for PBGC’s governance structure and practices. 
Our July 2007 report found that PBGC’s board has limited time and resources to provide policy direction and oversight and has not established procedures and mechanisms to monitor PBGC operations. Although board members have met more frequently since 2003, the three cabinet secretaries who compose the board have numerous other responsibilities. Because of their responsibilities and the small size of the board, it is difficult for the board to establish and manage oversight mechanisms, such as the use of standing committees—which are common mechanisms used by both government and private corporate boards. According to board officials, the board representatives, assisted by their staff, undertake some of the oversight functions that could be conducted by standing committees. Other government corporations, such as the Federal Deposit Insurance Corporation (FDIC), the Overseas Private Investment Corporation (OPIC), and the National Railroad Passenger Corporation (Amtrak), have established standing committees to conduct certain oversight functions. For example, FDIC’s board of directors established standing committees, such as the Case Review Committee and the Audit Committee, to conduct certain oversight functions. Instead, PBGC’s board continues to rely on the Inspector General and PBGC’s management oversight committees to ensure that PBGC is operating effectively. However, our prior work found that while the board requires the Inspector General to brief it at its semiannual meetings, there were no formal protocols requiring the Inspector General to routinely meet with the board or its representatives and staff. Consequently, when the board and its representatives change, as they likely will, it is unclear whether the new board would be aware of this informal practice. Further, we reported that the board relies on PBGC’s executive committees and working groups for monitoring and reviewing PBGC’s operations.
However, these committees and working groups are neither independent of the PBGC director nor required to formally report all matters to the board. PBGC may also be exposed to challenges as the board, its representatives, and director will likely change with the upcoming presidential transition in January 2009, thus limiting institutional knowledge of the challenges facing the corporation. As we noted in 2007, because PBGC’s board is composed of cabinet secretaries, PBGC board members, their representatives, and the director typically change with each administration. Other government corporations’ authorizing statutes—such as OPIC’s—have established board structures with staggered terms for their directors, possibly avoiding gaps in their organization’s institutional knowledge. PBGC management has experienced partial leadership transitions in recent years, and in anticipation of the forthcoming complete leadership change, PBGC is developing additional materials to include in its official transition package for newly appointed officials. These new materials include information on standards of ethical conduct, appointments, compensation levels, and presidential transitions. While PBGC typically provides newly appointed members, representatives, and directors with information on its operations and financial position, PBGC’s Office of Inspector General and our work recently identified additional financial and operational challenges facing the corporation. This additional information could help the new board members and their representatives better understand the vulnerabilities and challenges facing the corporation. Congressional oversight of PBGC in recent years has ranged from formal congressional hearings to the use of its support agencies, such as GAO, the Congressional Budget Office (CBO), and the Congressional Research Service (CRS).
However, unlike some other government corporations, PBGC does not have certain reporting requirements for providing additional information to Congress. In general, our prior work has shown that congressional oversight is designed to fulfill a number of purposes, including but not limited to ensuring executive compliance with legislative intent; improving the efficiency, effectiveness, and economy of government operations; evaluating program performance; and conducting investigations. Since 2002, PBGC officials have testified 19 times before various congressional committees—mostly on broad issues related to the status of private sector defined benefit pension policy and its effect on PBGC (see table 1). For example, in 2005, the PBGC director testified before the House Committee on Transportation and Infrastructure’s Subcommittee on Aviation regarding pension challenges facing the airline industry. The director’s testimony discussed the possible effects that defaults by airline pension plans would have on the defined benefit pension industry and the financial position of the corporation. Congress also recently began exercising oversight of PBGC through the confirmation process of PBGC’s director. With the passage of the Pension Protection Act of 2006, PBGC’s director now must be confirmed by the Senate. During the confirmation hearing conducted in 2007, members expressed concerns about key defined benefit pension policy issues and PBGC’s financial condition, as well as sought the nominee’s thoughts on addressing weaknesses in PBGC’s governance structure, such as the concerns we raised about the corporate governance practices. Beyond formal congressional hearings, PBGC staff told us that they frequently discuss pension policy matters with congressional staff.
In addition, PBGC must annually submit reports to Congress on its prior fiscal year’s financial and operational matters, which include information on PBGC’s financial statements, internal controls, and compliance with certain laws and regulations. For example, the Pension Protection Act of 2006 requires that PBGC provide a comparison of the average return on investment earned with respect to asset investments by the corporation, which PBGC includes in its annual report. Through its support agencies—GAO, the Congressional Budget Office, and the Congressional Research Service—Congress has also provided oversight and reviewed PBGC. Specifically, Congress has asked GAO to conduct assessments of policy, management, and the financial condition of PBGC. For example, we conducted more than 10 reviews of PBGC over the past 5 years, including assessments related to PBGC’s 2005 corporate reorganization and weaknesses in its governance structure, human capital management, and contracting practices. Our work also raised concerns about PBGC’s financial condition and the state of the defined benefit industry. In addition, CBO has published nine specific reports on PBGC since 2005. For example, in April 2008, CBO reported that PBGC’s investment policy is likely to produce higher returns over the long run, but noted the new strategy increases the risk that PBGC will not have sufficient assets to cover retirees’ benefit payments when the economy and financial markets are weak. Further, CRS has published eight studies related to PBGC since 2006. Appendix IV includes a list of selected GAO, CBO, and CRS reports and testimonies related to PBGC. Some government corporations have additional reporting requirements for notifying Congress of significant actions. The Millennium Challenge Corporation is required to formally notify the appropriate congressional committees 15 days prior to the allocation or transfer of funds related to the corporation’s activities. 
The Commodity Credit Corporation is subject to a similar requirement, which obliges the Secretary of Agriculture to alert the Committee on Agriculture, Nutrition, and Forestry of the Senate and the Committee on Agriculture of the House of Representatives prior to making adjustments to a certain price support program. The Overseas Private Investment Corporation is required to submit a detailed report to the Committee on Foreign Relations of the Senate and the Committee on Foreign Affairs of the House of Representatives at least 60 days prior to issuing, among other things, political risk insurance for losses due to business interruption for the first time. These examples demonstrate how Congress has required additional reporting requirements for certain activities conducted by government corporations. While PBGC generally has no requirements to formally notify Congress prior to taking significant financial or operational actions, PBGC officials said that they informally notify Congress prior to certain policy shifts. For example, in fiscal year 2008, PBGC officials met with congressional staff before modifying the investment policy to decrease the corporation’s fixed-income asset investments. In addition to its annual reporting requirements, PBGC is required to report proposals for certain premium rate revisions, including reasons for such revisions, to specific congressional committees; however, these premium rate revisions are not considered effective until 30 days after enactment of a law approving them. Like other government corporations, PBGC has an advisory committee. PBGC’s advisory committee is charged with advising the corporation on its policies and procedures related to the corporation’s appointment of trustees in termination proceedings, investments of monies, whether terminated plans should be liquidated immediately or continued in operation, and any other matters the corporation may request. 
Unlike PBGC’s advisory committee, the advisory boards or committees of other government corporations—such as the Export-Import Bank and FDIC—are subject to the Federal Advisory Committee Act, and some submit formal reports to their board chair and directors (see table 2). In contrast, PBGC’s advisory committee is not subject to the Federal Advisory Committee Act. According to PBGC officials, the corporation is exempt because of the proprietary nature of its work. PBGC’s advisory committee typically reports only to the director, although representatives of PBGC’s board members frequently attend advisory committee meetings, and officials said that the committee can submit concerns to the board if it believes it is warranted. Beyond reporting to the chairman of its board, the Export-Import Bank’s advisory committee is also required to submit an annual report to Congress on the extent to which the Export-Import Bank is providing competitive financing to expand U.S. exports, along with suggestions for improvements. In addition to government corporations, some government agencies with retirement-related responsibilities—such as the Social Security Administration (SSA), the Railroad Retirement Board, and the Federal Retirement Thrift Investment Board—have advisory committees as part of their governance structures, which annually report to their respective overseeing bodies. For example, when Congress established SSA as a separate and distinct agency from the Department of Health and Human Services, it also established an independent seven-member bipartisan Advisory Board to advise the President, Congress, and the commissioner of Social Security on respective policy issues. With more than 44 million Americans insured by PBGC, it is essential that the corporation be soundly governed and efficiently managed to guarantee that retirement income will be available to all those covered.
Despite PBGC’s efforts to improve its bylaws, the three-member board of directors is still one of the smallest and least diverse of any government corporation. Other government corporations’ governance structures include oversight mechanisms, such as standing committees, and additional reporting requirements to conduct certain oversight functions and assist their boards of directors. While PBGC’s board should be restructured, additional reporting requirements, like those some government corporations have, may not be appropriate for PBGC given the proprietary nature of its financial work; thus, any reporting changes would need to be carefully considered. The limitations of the board structure will become even more apparent in the coming months as the board, its representatives, and the corporation’s director will likely be replaced with a new presidential administration. Because board members and their representatives serve by virtue of their positions in the federal government, there is no assurance that these individuals will have the needed expertise to understand the corporation’s business or financial vulnerabilities. Without adequate information and preparation, this transition could not only limit the progress made by the current board, its representatives, and director, but also curtail the corporation’s ability to insure and deliver retirement benefits to the millions of Americans who rely on the corporation. To ensure that recently identified management and financial challenges facing PBGC are shared with those newly appointed, we recommend that PBGC provide Office of Inspector General and GAO reports on the corporation’s financial and management challenges to the newly appointed board members, board representatives, and director so that they can take appropriate action as needed. We obtained written comments on a draft report from PBGC’s director, which are reproduced in appendix V.
In addition, the Departments of the Treasury, Labor, and Commerce provided joint technical comments, which were incorporated into the report where appropriate. In response to our draft report, the PBGC director stated that PBGC prepares substantial in-depth briefing materials on its operational issues for incoming administrations. The director agreed with our recommendation, stating that PBGC will ensure that the transition materials provided to those newly appointed will also include pertinent PBGC Office of Inspector General and GAO reports. Further, the director stated that PBGC will continue to work in concert with the board to provide oversight information necessary to address the important issues that they confront in providing pension security to Americans. We are sending copies of this report to the Secretaries of the Treasury, Labor, and Commerce, as well as the PBGC director and other interested parties. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please contact me on (202) 512-7215 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To identify the steps that the Pension Benefit Guaranty Corporation (PBGC) has taken to improve its governance structure, we reviewed our work issued in July 2007, as well as collected and reviewed documents related to PBGC’s bylaws, which were last published in May 2008. We reviewed reports on PBGC’s organizational structure and financial condition. We also identified provisions of the Employee Retirement Income Security Act of 1974, the Pension Protection Act of 2006, and chapter 91 of the U.S.
Code, which is commonly known as the Government Corporation Control Act (GCCA), that outline the authority of PBGC’s board of directors as well as the administrative responsibilities of PBGC’s director. To understand the roles of the board of directors, their representatives, and PBGC’s director, we reviewed documentation related to the board members’ activities to identify what types of actions the board members had considered and taken. To determine how Congress exercises oversight of PBGC, we identified the number of times PBGC officials testified before Congress since 2002, and reviewed the issues discussed at each formal hearing. Further, we reviewed the work of the Congressional Budget Office, the Congressional Research Service, PBGC’s Office of Inspector General, and our work on PBGC’s financial and management challenges. To determine the oversight mechanisms and reporting requirements that exist at other government corporations, we collected information on select federal government corporations that we identified in our July 2007 work, which are listed under the Government Corporation Control Act of 1945, as amended, and have similar missions or designations to those of PBGC. We reviewed information on the following government corporations: Commodity Credit Corporation, Export-Import Bank of the United States, Federal Crop Insurance Corporation, Federal Deposit Insurance Corporation, Federal Financing Bank, Federal Prison Industries (UNICOR), Financing Corporation, Government National Mortgage Association, Millennium Challenge Corporation, National Railroad Passenger Corporation (Amtrak), Overseas Private Investment Corporation, Resolution Funding Corporation, Saint Lawrence Seaway Development Corporation, Tennessee Valley Authority, and United States Postal Service.
We also reviewed government agencies with retirement-related responsibilities to determine what other oversight mechanisms may exist; the agencies include the Social Security Administration, the Railroad Retirement Board, and the Federal Retirement Thrift Investment Board. Moreover, we met with officials from PBGC and the Department of Labor.

Pension Benefit Guaranty Corporation: Some Steps Have Been Taken to Improve Contracting, but a More Strategic Approach Is Needed. GAO-08-871. Washington, D.C.: August 2008.
PBGC Assets: Implementation of New Investment Policy Will Need Stronger Board Oversight. GAO-08-667. Washington, D.C.: July 2008.
Pension Benefit Guaranty Corporation: A More Strategic Approach Could Improve Human Capital Management. GAO-08-624. Washington, D.C.: June 2008.
High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight. GAO-07-808. Washington, D.C.: July 6, 2007.
PBGC’s Legal Support: Improvement Needed to Eliminate Confusion and Ensure Provision of Consistent Advice. GAO-07-757R. Washington, D.C.: May 18, 2007.
Private Pensions: Questions Concerning the Pension Benefit Guaranty Corporation’s Practices Regarding Single-Employer Probable Claims. GAO-05-991R. Washington, D.C.: September 9, 2005.
Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenges. GAO-05-772T. Washington, D.C.: June 9, 2005.
Pension Benefit Guaranty Corporation: Single-Employer Pension Insurance Program Faces Significant Long-Term Risks. GAO-04-90. Washington, D.C.: October 2003.
Pension Benefit Guaranty Corporation Single-Employer Insurance Program: Long-Term Vulnerabilities Warrant ‘High Risk’ Designation. GAO-03-1050SP. Washington, D.C.: July 23, 2003.
Pension Benefit Guaranty Corporation: Statutory Limitation on Administrative Expenses Does Not Provide Meaningful Control. GAO-03-301. Washington, D.C.: February 2003.
GAO Forum on Governance and Accountability: Challenges to Restore Public Confidence in U.S. Corporate Governance and Accountability Systems. GAO-03-419SP. Washington, D.C.: January 2003.

A Review of the Pension Benefit Guaranty Corporation’s New Investment Strategy. April 24, 2008.
Effect of H.R. 2830 on the Net Economic Costs of the Pension Benefit Guaranty Corporation. December 29, 2005.
The Effect on the 10-Year Net Costs to the Pension Benefit Guaranty Corporation (PBGC) of Enacting S. 1783, the Pension Security and Transparency Act of 2005. October 11, 2005.
A Guide to Understanding the Pension Benefit Guaranty Corporation. September 2005.
The Risk Exposure of the Pension Benefit Guaranty Corporation. September 2005.
Testimony on Multiemployer Pension Plans. June 28, 2005.
Testimony on the Pension Benefit Guaranty Corporation: Financial Condition, Potential Risks, and Policy Options. June 15, 2005.
Testimony on Estimating the Costs of the Pension Benefit Guaranty Corporation. June 9, 2005.
Testimony on Defined-Benefit Pension Plans: Current Problems and Future Challenges. June 7, 2005.

Baird Webel. Insurance Guaranty Funds. RL32175. February 27, 2008.
John J. Topoleski. Pension Benefit Guaranty Corporation: A Fact Sheet. 95-118. January 29, 2008.
Patrick Purcell. Summary of the Pension Protection Act of 2006. RL33703. May 1, 2007.
William Klunk. The Pension Benefit Guaranty Corporation and the Federal Budget. RS22650. April 24, 2007.
William Klunk. The Financial Health of the Pension Benefit Guaranty Corporation (PBGC). RL33937. March 23, 2007.
Jennifer Staman and Erika Lunder. The Pension Benefit Guaranty Corporation and Single-Employer Plan Terminations. RS22624. March 14, 2007.
Jennifer Staman and Erika Lunder. Pension Protection Act of 2006: Summary of the PBGC Guarantee and Related Provisions. RS22513. December 20, 2006.
Neela K. Ranade and Paul J. Graney. Defined Benefit Pension Reform for Single-Employer Plans. RL32991.
January 26, 2006.

The following team members made key contributions to this report: Blake Ainsworth, Assistant Director; Jason Holsclaw, Analyst-in-Charge; Susannah Compton; William King; Matthew Lee; Charlie Willson; and Craig Winslow.
SSA currently maintains over 800 agreements that support exchanges through which the agency provides data to state and federal partners (approximately 700 state and almost 100 federal agency partners). These agreements define the requirements, terms, and conditions under which the data will be provided to the partners. In many cases, legislation mandates the agency to provide electronic data in support of certain programs. For example, SSA is required to provide data to the Veterans Benefits Administration, which is part of the Department of Veterans Affairs, in support of the Veterans Benefits Administration’s efforts to determine benefits eligibility. In other cases, SSA voluntarily enters into data exchange agreements with state and other federal agency partners. SSA’s existing IT infrastructure—databases, applications, networks, and management practices—supports the agency’s daily operations and core mission activities, such as administering monthly Retirement, Survivors, and Disability Insurance and Supplemental Security Income benefits, as well as its data exchange programs. Data provided to SSA’s data exchange partners are accessed from the same database infrastructure that supports the agency’s overall operations and mission. This infrastructure includes several databases that store and maintain beneficiary data related to SSA’s benefits programs. These databases are briefly described in table 1: the Master Beneficiary Record (MBR), the Master Earnings File (MEF), the Supplemental Security Record (SSR), and the Number Holder Identification file (Numident). Among the databases, the Master Beneficiary Record, Master Earnings File, and Supplemental Security Record databases are included in the agency’s legacy database infrastructure, referred to as the Master Data Access Method, or MADAM. The Number Holder Identification has recently been updated to a more modern system based on commercially available software.
SSA also maintains various application systems that were designed and developed specifically to process data exchange transactions and that generally support multiple data exchanges. These applications provide Social Security number matching and verification information that supports various types of federal and state programs, such as driver’s license issuance, voter registration, social services administration (e.g., food stamps), employment eligibility verification, and passport issuance. The applications accept requests for information from SSA’s partners, retrieve and attempt to match or verify data from the databases against the information submitted by the partners, and then transmit responses—that is, the results of the data matching and verification—to the data exchange partners who initiated the requests. The applications primarily match Social Security number, name, and date-of-birth information submitted by the partners against data stored in the databases. The request transactions are processed and the responses provided through the use of either batch file or online, real-time processing. SSA relies on 11 key application systems to process information requests from and provide response data to its state and federal data exchange partners. These data exchange applications provide data to programs associated with 582 of the state-level agreements and 26 of the federal-level agreements—about 75 percent of the outgoing data exchange agreements that the agency supports. These 11 key data exchange application systems process over 1 billion transactions annually. SSA’s outgoing data exchange programs are funded through various sources. These include the agency’s annual budget and reimbursements from the data exchange partners, which are established through the data exchange agreement process. For example, the agreement between SSA and the Veterans Benefits Administration states that SSA will provide data at no cost.
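The matching flow described above (accept a request; compare the submitted Social Security number, name, and date of birth against stored records; return a verification result) can be sketched in simplified form. This is an illustrative sketch only, not SSA's actual implementation: the record layout, the name normalization rules, and the response codes shown here are assumptions made for the example.

```python
# Illustrative sketch of SSN match/verification processing. The record
# fields mirror what the report describes (SSN, name, date of birth);
# the normalization logic and response codes are invented for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Record:
    ssn: str
    name: str
    dob: str  # ISO format, e.g. "1950-01-31"


def normalize(name: str) -> str:
    """Uppercase a name and strip punctuation and extra spaces before comparing."""
    letters = "".join(c for c in name.upper() if c.isalpha() or c.isspace())
    return " ".join(letters.split())


def verify(request: Record, master: dict) -> str:
    """Return a simple match code for one verification request."""
    record = master.get(request.ssn)
    if record is None:
        return "NO_MATCH"  # SSN not on file
    if normalize(record.name) != normalize(request.name):
        return "NAME_MISMATCH"
    if record.dob != request.dob:
        return "DOB_MISMATCH"
    return "VERIFIED"


def process_batch(requests, master):
    """Batch mode: one response per request, in submission order."""
    return [verify(r, master) for r in requests]
```

In practice, a batch exchange would read requests from a submitted file and write a response file in the agreed-upon format, while online processing would serve the same verification logic one request at a time in real time.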
In this case, SSA’s annual budget provides funds to support the administrative and systems-related activities that are required to support this agreement, such as the development of agreements and technical support for the data exchange partners. SSA receives reimbursements from a number of its external partners, including the American Association of Motor Vehicle Administrators (AAMVA) along with 2 state agencies and 14 other federal agencies, for data exchanges. For example, according to agency documentation, SSA’s agreement with AAMVA to provide electronic Social Security number verifications to all states for processing driver’s license applications resulted in reimbursements in the amount of $230,882 to the agency in 2007. The amounts of reimbursements, if any, are determined on the basis of provisions established by the agreements with each data exchange partner. For 364 of these agreements, SSA provides electronic data at no cost to state and federal partners. According to agency documentation, during fiscal year 2007, SSA received about $10 million in total reimbursements for its outgoing data exchanges, primarily from federal data exchange partners. The documentation indicates that about half of this amount was reimbursed on the basis of agreements with the Departments of Homeland Security and Health and Human Services. The management and oversight of SSA’s data exchange programs span multiple agency components. Among these, the Associate Commissioner for the Office of Earnings, Enumeration and Administrative Systems, has primary responsibility for systems activities related to SSA’s data exchange programs, and the Office of Budget, Finance, and Management, Office of Strategic Services has primary responsibility for data exchange agreements. We have previously reported on SSA’s IT management practices and challenges. 
Most recently, we studied and reported on the agency’s ability to effectively deliver services, including those related to electronic data exchanges. In our reports, we made recommendations directed toward the agency’s need to strengthen its plans for delivering services to beneficiaries and to address challenges associated with a growing and increasingly complex data exchange environment. In December 2008, we reported specifically on SSA’s data exchange environment and identified challenges that the agency faced in supporting data exchange programs. Among these, we noted that the agency faced challenges in retaining the expertise needed and maintaining the technology required to support an adequate technical infrastructure to meet future needs. We also described an agency initiative to study ways to better manage the data exchange environment and address current and future challenges and limitations. This initiative identified actions that the agency should take to address challenges related to management and systems-related issues. We recommended in our report that SSA set milestones for undertaking these actions. The Commissioner of Social Security concurred with our recommendation and stated that the agency had established milestones for taking action. Also in January 2009, we reported that increases in retirement and disability filings, along with ongoing and expected increases in retirements of SSA’s most experienced staff, posed difficult challenges for the agency in meeting future service delivery needs. We recommended that the agency take steps to address these challenges and develop a plan that describes how it will deliver quality service in the future while managing growing work demands and constrained resources. In response, SSA stated that it had intensive planning efforts in place, but agreed to develop a single planning document that would describe service delivery and staffing plans. 
SSA has implemented technology and procedures within its existing IT infrastructure to support its current data exchange environment, including processes to identify, track, and resolve systems-related problems. Those data exchange partners we included in our study indicated that the agency's IT infrastructure had effectively supported the exchanges in which they participate. The partners reported that they had experienced few systems-related problems and none that had a significant impact on their ability to conduct business. An effective IT infrastructure provides performance capabilities, such as system availability and reliable data, that are essential to enabling the efficient and economical exchange of electronic data. The Software Engineering Institute defines practices for providing effective IT services, including those that are key to an agency's ability to ensure that its infrastructure supports business operations. These practices include, among other things, the establishment of (1) monitoring capabilities to proactively identify problems; (2) help desks to collect information on incidents and initiate problem-solving actions; and (3) capabilities to prioritize incidents, track the status and progress of incident resolution, and validate the complete resolution of incidents. By implementing practices to monitor, track, and resolve systems-related problems, agencies can reduce the risk that when problems occur, their information systems and support mechanisms will fall short of effectively supporting business operations and service commitments. To ensure that SSA's IT infrastructure meets the performance requirements of the agency's data exchange partners, SSA and its partners establish, through data exchange agreements, specific and detailed performance standards (e.g., system availability, scheduled outages, data accuracy, and response times) for each exchange, along with protocols for reporting and resolving systems-related problems.
These elements are critical to an effective IT infrastructure that enables the efficient and economical exchange of electronic data. For example: The data exchange agreement with AAMVA specifies that SSA will provide responses for 95 percent of all Social Security Online Verification system requests received from AAMVA within 3 seconds or less and that responses to 99 percent of requests will be provided within 5 seconds or less. The agreement also defines the hours of system availability and scheduled outages. The data exchange agreement with the Veterans Benefits Administration specifies that SSA is to provide a 99 percent data accuracy rate in support of the administration of veterans’ benefits. To address any systems-related problems that might occur and affect its data exchange partners, SSA has implemented several key practices for providing effective IT services, as defined by the Software Engineering Institute. Specifically, the agency has established processes to identify, track, and resolve systems-related problems. For example, the agency established a national help desk for responding to systems-related problems, and employs a process for recording, prioritizing, tracking, and solving IT problems. This process is supported by the Change Asset and Problem Reporting System, which is a system that is used agencywide to prioritize systems-related problems, track the status and progress of resolutions, and validate the resolution of problems. System reports from January 2008 to February 2009 identified approximately 35 systems-related problems that affected data exchanges. These problems were primarily temporary outages lasting less than 1 minute, including several problems lasting just seconds. Additionally, network support personnel proactively monitor data exchange components through the agency’s full-time (24 hours a day, 7 days a week) network operations and monitoring centers, which provide support for agencywide IT infrastructure components. 
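A response-time standard of the kind in the AAMVA agreement (95 percent of responses within 3 seconds, 99 percent within 5 seconds) reduces to a percentile check over measured response times. The following is a minimal sketch using the nearest-rank percentile definition; it is illustrative only, not SSA's or AAMVA's actual monitoring code:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: value at or below which pct% of samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

def meets_sla(response_times: list[float]) -> bool:
    """Check the two AAMVA-style thresholds: p95 <= 3s and p99 <= 5s."""
    return (percentile(response_times, 95) <= 3.0
            and percentile(response_times, 99) <= 5.0)
```

A monitoring process could evaluate such a check over each reporting window and open an incident when it fails, feeding the prioritize-track-validate cycle described above.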
Other SSA support staff also regularly monitor systems resource usage and work with state and federal partners to help resolve problems as they occur. For example, according to SSA officials, if a partner notifies the agency of slow response time, support staff members monitor the partner site’s bandwidth usage and make upgrades as necessary. SSA has established other specific procedures for speedy response to resolve systems-related problems. For example, according to AAMVA partners, SSA’s network operations and monitoring centers are responsible for monitoring the performance of the data exchange system that is used to support driver’s license issuance and voter registration (the Social Security On-Line Verification system). When SSA personnel at the centers receive an alert indicating that the system is not performing properly, they investigate the problem and assign responsibility for resolving the problem to the appropriate SSA unit. Responsible personnel then notify AAMVA of the problem and the status of the resolution process, which is updated through the Change Asset and Problem Reporting System. Reports from the reporting system show that, in one case of a system outage, SSA support staff resolved the problem in 34 seconds by moving the AAMVA network connection to a backup system. All of the federal and state data exchange partners included in our study stated that SSA’s IT infrastructure adequately supported the performance standards established by existing agreements and effectively provided data that supported their business operations. They agreed that SSA was responsive and quickly resolved problems when they did occur and stated that, as a result of SSA’s efforts, their ability to conduct business operations that depend on data provided by the agency had not been adversely affected by systems-related problems associated with SSA’s IT infrastructure. 
These partners, which had been receiving electronic data from SSA for 2 to more than 30 years, reported that they experienced no or only minor systems-related problems caused by SSA’s IT infrastructure. For example: AAMVA partners reported that the SSA system that supports their data exchange historically was fully available during the hours of system availability specified in their exchange agreement. According to these officials, the system experienced virtually no downtime through calendar year 2008—it was operational 99.8 percent of the time that the system was available. Officials with Idaho’s Departments of Labor and Transportation told us that SSA’s problem resolution procedures and support staff were effective and provided ample support and timely response to reported problems. These officials added that while 4 percent of Social Security number verification responses that they received from SSA via the AAMVA system were not confirmed, these nonverifications were predominantly caused by name changes that were not updated by the Social Security number holders (e.g., changes from maiden names to married names), rather than by problems with SSA’s data exchange systems or data. Officials with the Iowa state agencies stated that SSA’s systems performance was satisfactory. Specifically, officials with the state’s Department of Transportation, which receives data through the AAMVA network, reported that although they occasionally experienced unscheduled outages and slow system response times, SSA’s overall performance was satisfactory. Officials with the state’s Department of Human Services said that they had not experienced any problems. Officials with the eight California agencies included in our study reported no problems associated with SSA’s data exchange systems. 
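The 99.8 percent availability figure cited by the AAMVA partners implies a small downtime budget over the hours the system is scheduled to be available. A quick illustrative calculation (the full-year, 24x7 window of 8,760 hours is an assumption for the example; the actual hours of availability are set in each exchange agreement):

```python
def allowed_downtime_hours(availability: float, scheduled_hours: float) -> float:
    """Hours of downtime implied by an availability ratio over a window."""
    return scheduled_hours * (1.0 - availability)

# At 99.8 percent availability over a hypothetical 8,760-hour year,
# cumulative downtime stays under 18 hours.
downtime = allowed_downtime_hours(0.998, 8760)
```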
Beyond these reports, the two SSA regional office data exchange coordinators with whom we spoke stated that state and territory agency partners within their regions, New York City and Kansas City, had not reported systems-related problems associated with SSA’s existing IT infrastructure. Specifically, the Kansas City coordinator stated that the region had not reported any problems with data exchange systems since the coordinator came on board in September 2008. Similarly, the federal data exchange partners with whom we met did not report any problems associated with SSA’s IT infrastructure that negatively affected their ability to carry out business operations. For example: Data exchange officials with the Department of Homeland Security reported as of June 2009 that no systems-related problems had affected their ability to conduct business. Veterans Benefits Administration and Veterans Health Administration officials also stated that SSA’s IT infrastructure was effective in delivering data that supported their benefits administration programs and that they had experienced no problems with the quality or delivery of data from SSA’s systems. Officials with the Centers for Medicare and Medicaid Services stated that they occasionally experienced late delivery of files that delayed the processing of Medicare entitlement information. However, these officials described this matter as a situation that they considered to be within the norm, given the very large amounts of data that are being exchanged between systems. These partners did not view these occasional late deliveries to be a significant issue with the SSA systems that provided data. Although SSA’s existing IT infrastructure is sufficient to support current outgoing data exchanges, SSA officials and the agency’s partners anticipate that the number of these exchanges will continue to increase and become more complex, placing greater demands on the infrastructure and systems. 
To address overall agency needs for a more cost-effective and efficient computing environment, SSA is currently taking steps to modernize its IT infrastructure, including components that support its data exchange programs. For example, the agency is updating its 30-year-old database infrastructure, converting outdated software applications, and expanding its physical data processing capacity. However, the agency has not established and executed IT management practices needed to effectively guide and oversee the direction of its outgoing electronic data exchange programs, such as conducting the analyses required to project future workload needs and performance requirements—information that is essential to developing a target architecture that identifies business and technical requirements for a future data exchange environment. If these analyses are not completed, SSA’s ability to provide and maintain an IT infrastructure that meets requirements to effectively support its data exchange programs in the future could be at risk. Based on recent increased numbers of requests for data provided by data exchange services, both SSA and its data exchange partners anticipate that the agency’s outgoing data exchange programs will continue to grow and will outpace current capabilities. SSA reported that from fiscal years 2007 to 2008, the number of data requests made to seven of SSA’s key data exchange applications increased from 1.19 billion to 1.37 billion, or about 15 percent. Table 4 shows details of the increase in data requests for these applications. An example of one data exchange application that has been required to process large increases in data requests for Social Security number confirmations is SSA’s E-Verify application. This application supports the Department of Homeland Security E-Verify program, which is used to help employers verify the employment eligibility of newly hired workers. 
Participation in the program has been voluntary at the private and state levels since its implementation 10 years ago and, since then, utilization of SSA’s E-Verify application has increased dramatically. In the past 2 years alone, usage has doubled twice. Further, federal legislation has been proposed to, among other things, require the use of the E-Verify program by employers across the nation. If such legislation is enacted, agency officials estimate that the number of queries to E-Verify could quickly surpass 60 million per year—nearly 10 times the number of requests in fiscal year 2008. In addition to growth in the numbers of data exchange requests, the complexity of the exchanges is expected to increase, in that agencies are increasingly asking for online access (a more complex requirement than batch access). Officials from four of the state and federal agencies with whom we spoke stated that their needs for online access to SSA’s data exchange systems are expanding. For example, officials with New York, Idaho, and California reported that their programs will need expanded online access to SSA’s data exchange systems to support increasing workloads and to provide more efficient processing of data requests. Specifically, officials with New York’s Office of Temporary and Disability Assistance explained that their programs will be requiring increased online or Web-based service from SSA in the future to support expected general workload growth as their need for Social Security number verifications and other SSA information grows. Officials with California’s Department of Child Support Services stated that online access is needed to improve efficiencies in caseworkers’ ability to verify benefits, schedule court hearings to determine amounts of child support payments, and process and close eligible cases. 
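The growth figures above lend themselves to simple projections. A hedged sketch follows: the fiscal year 2007 and 2008 request totals are taken from the report, while the constant annual-doubling assumption for E-Verify is illustrative, not a forecast:

```python
import math

# Reported totals for seven key data exchange applications.
fy2007_requests = 1.19e9
fy2008_requests = 1.37e9
growth = (fy2008_requests - fy2007_requests) / fy2007_requests
# growth is about 0.151, i.e., the roughly 15 percent increase cited above.

def years_to_multiple(multiple: float, annual_growth: float) -> float:
    """Years needed to grow by `multiple` at a constant annual growth rate."""
    return math.log(multiple) / math.log(1 + annual_growth)

# If E-Verify usage kept doubling each year (annual_growth = 1.0), a tenfold
# increase -- the scale of the projected 60 million queries -- would take
# about log2(10), or roughly 3.3 years.
years = years_to_multiple(10, 1.0)
```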
Officials with Idaho’s Department of Labor also projected that they would need more online access to SSA’s data exchange systems through Web-based transaction processing to support more efficient case processing and access to SSA’s data exchange systems 24 hours a day, 7 days a week. In 2008, SSA officials polled the agency’s 10 regional offices to identify current data exchange partners that had requested online, real-time access to the agency’s State Online Query system over the previous 24 months. The poll identified state partners in 8 of the regions that had made such requests. Table 5 shows these regions along with the 24 state agencies that had requested online access for their programs (in addition to the 50 partners that already had access to the State Online Query system). Office of Management and Budget Circular A-130 and industry best practices stress the importance of agencies taking advantage of cost- effective technology to improve operations. Additionally, industry studies have shown that modern technology can offer organizations more efficient and effective automation capabilities to meet service delivery demands. Further, more advanced technology is important to support online, real- time transaction processing, which is more demanding than batch processing due to the need for increased systems availability and more sophisticated technology. SSA officials are taking steps to update key components of the agency’s IT infrastructure, including those that directly support its data exchange services, to provide expanded and extended processing capabilities. In particular, the agency is in the process of modernizing the agency’s database infrastructure, upgrading software, and building new data centers. 
According to an SSA Senior Enterprise Architect, the agency’s legacy database infrastructure—MADAM—was created in-house in the early 1980s and, over time, became outdated and difficult to support, limiting the agency’s ability to provide expanded data processing services. For example, the MADAM databases must be backed up and maintained daily and are not available during the time that maintenance occurs. This down time prevents the agency from providing complete data processing services 24 hours a day, 7 days a week to support the agency’s core mission and operations as well as its data exchange programs. Thus, SSA is in the process of converting its MADAM environment to a modern and commercially available system that is intended to support online processing 24 hours a day, 7 days a week. According to SSA officials, the agency has already converted its Numident database, and plans to convert its Master Earnings File, Supplemental Security Record, and Master Beneficiary Record databases in 2009, 2011, and 2012, respectively. Further, SSA is in the process of updating its E-Verify data exchange operating environment. In this regard, the agency is implementing an environment in which E-Verify data requests will be processed against a dedicated database and will provide continuity of operations and disaster recovery capabilities, which are currently not available. These upgrades are intended to support the projected increase in demand for E-Verify services and reduce the risks of system slowdowns and disruptions. The E-Verify upgrades are expected to be completed and implemented in August 2009. The agency is also upgrading its current systems environment, including the systems that support data exchanges. 
This environment contains aging software that is based on about 36 million lines of COBOL code, a programming language that is generally viewed as obsolete and that makes it difficult to implement new business processes and new service delivery models, such as online, real-time processing. According to SSA officials, the agency is upgrading its software applications to Web-based technology that is intended to better enable online, real-time access to data processing services. Further, SSA has recently built a new data center to provide expanded data processing capacity in addition to that provided by its National Computer Center. The agency also has plans to replace the 30-year-old National Computer Center to, according to SSA officials, provide more efficient and economical processing capabilities and support growing requirements of a full-time electronic service delivery operation. By taking these steps to modernize its agencywide IT infrastructure, including components that support data exchange programs, SSA is aiming to provide the computing capabilities needed to support increasing and future demands for electronic data exchange services. SSA’s IT strategic plan reflects the priorities that are intended to guide the agency’s operational and tactical IT planning through fiscal year 2012. Regarding data exchanges, the plan discusses an agency initiative that is intended to study various data exchanges and ways to allow partners to link more efficiently to the agency’s systems. Also, the agency’s IT vision document describes plans to replace various methods of sharing data with an easy-to-use, Web-based portal that provides data to partners through a menu-driven system, along with a plan to use a single registration process and a secure, controlled environment that ensures data are protected. The vision document identifies as a key initiative the need to maintain a robust data exchange architecture that fully supports the growing demand for information sharing. 
Even as SSA proceeds with planned updates to its IT infrastructure, the agency has not fully implemented IT management practices that are essential to help guide its direction related to data exchanges. Sound management practices require organizations to perform the necessary planning for investments to ensure that they effectively support current and future business needs, such as a more demanding data exchange environment. In planning for future business operations, agencies should also, among other things, project their programs’ anticipated workloads, such as increases in data requests and transaction volumes—information that is essential for making informed decisions concerning workload management and the technological solutions needed to sustain efficient and effective performance in the future. Further, this information can provide critical input to the agency’s planning efforts, including the development of a target architecture. Nevertheless, according to agency officials, SSA has not conducted detailed analyses to project future workload requirements resulting from the increasing demand and expanded need for outgoing electronic data exchanges. Specifically, the agency has not projected increases in the number of requests for data or the need for more online, real-time access to its data exchange systems. The agency’s Director of the Division of Information, Verification and Exchange Services and the Director of the Office of Systems Security Operations Management maintain that the agency’s tactical plans for delivering electronic services will be sufficient to address future needs for data exchanges. However, these plans have not yet been developed. Further, while the agency’s strategic direction and vision for its data exchange environment are important, they do not substitute for the more detailed analysis that is essential to identifying the specific business and technical requirements for its data exchange programs and partners. 
Moreover, in the absence of such detailed analysis, SSA cannot be assured that it will achieve the robust data exchange architecture that it envisions. According to federal guidance, an agency should develop an enterprise architecture, or modernization blueprint, that describes in both business and technology terms how it operates today and how it intends to operate and support projected needs in the future—that is, its target architecture. Federal guidance states that enterprise architectures should identify the data that are to be exchanged, the frequency and nature of the exchanges, and the business processes supported by the data exchanges. For SSA to ensure that its target architecture provides for a future IT infrastructure that will support its expanding data exchange environment, the agency should conduct the analyses needed to project the numbers of data exchange transactions along with the frequency and nature of the data exchanges (that is, whether the transactions are batch or online, and how frequently they will be conducted) expected from its partners in the future. Agency officials have recognized the need for a target architecture that addresses requirements for data exchanges. For example, in 2008, an SSA workgroup conducted an analysis to address management and technical issues related to its data exchanges, which was intended to support planning for its growing data exchange program. Among other things, the workgroup recommended to the Commissioner of Social Security that the agency devote sufficient resources to develop a well-defined target architecture that is sufficiently scalable to meet the agency’s future needs for supporting data exchanges. 
However, while SSA has developed a target architecture describing applications and service components that are used to support the agency’s business operations, according to the Directors of the Division of Information, Verification and Exchange Services, and of the Office of Strategic Services, the agency has not developed a target architecture that addresses specific business and technical requirements for supporting the agency’s data exchange programs. Moreover, these directors stated that the current agencywide target architecture does not specifically address data exchanges with external organizations. Lacking sound projections of future workloads, SSA will not be able to clearly define specific requirements for meeting the increasing demands on its data exchange environment. Moreover, without a target architecture that addresses specific requirements for supporting the agency’s data exchange programs, the agency cannot be assured that its IT infrastructure will provide the resources and levels of service required to meet the future needs of its data exchange partners. Consequently, SSA’s ability to continue to effectively and efficiently support its partners’ future needs for electronic data exchange services could be jeopardized. SSA’s existing IT infrastructure effectively supports its current outgoing electronic data exchange environment and provides the agency’s partners with data that support and enhance their abilities to carry out business operations. The state and federal partners included in our study experienced few or no problems associated with the agency’s IT infrastructure. However, the agency and its partners anticipate that the demand for these exchanges will grow, and that the methods for conducting the exchanges will become more complex. Nonetheless, SSA has not performed the detailed analyses needed to project the workload and performance requirements of a future data exchange environment. 
While it has defined an agencywide target architecture, this architecture does not address specific business and technical requirements for supporting the agency’s data exchange programs. Until it conducts these analyses, the agency will lack information essential to developing a target architecture for an IT infrastructure that effectively supports the ability of data exchange partners to carry out their business operations in the future. To help ensure that SSA’s IT infrastructure effectively supports the anticipated increase in demand for electronic data exchange services, we recommend that the Commissioner of Social Security direct the Associate Commissioner, Office of Earnings, Enumeration and Administrative Systems to conduct detailed analyses to determine workload projections and define requirements for effectively and efficiently delivering data exchange services to the agency’s partners in the future and use the results of these analyses to update the agency’s target architecture to address business and technical requirements of a future data exchange environment. The Commissioner of Social Security provided written comments on a draft of this report. In the comments, the agency agreed with our recommendations and stated that it would conduct detailed analyses to determine workload projections and define future requirements for delivering data exchange services as funding is available. Further, the Commissioner stated that it would use the results of the analyses to update the agency’s target architecture to ensure that it addresses the business and technical requirements of a future data exchange environment. If these actions are taken, the agency should be better positioned to meet the growing needs of its data exchange partners. SSA also provided technical comments, which we have incorporated into the report as appropriate. The agency’s written comments are reproduced in appendix II. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of the report to interested congressional committees, the Commissioner of Social Security, and other interested parties. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have questions on matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to (1) determine the extent to which the Social Security Administration’s (SSA) information technology (IT) infrastructure effectively and efficiently supports its current data exchange programs, and describe any systems-related problems affecting the agency’s data exchange partners and (2) describe SSA’s efforts to ensure that its IT infrastructure can support the agency’s and its partners’ future data exchange environment. To address both of these objectives, we focused our study on SSA’s data exchanges that provide information to support state and other federal agencies’ programs, and that affect partners’ abilities to provide services to individuals—that is, “outgoing” data exchanges. To determine whether SSA’s existing IT infrastructure is effective in supporting the agency’s data exchange programs, we first identified the components of the infrastructure that support the agency’s data exchange programs. To do this, we obtained and analyzed agency documentation, such as internal management reports; spreadsheets describing data exchange programs, partners, and systems; descriptions of systems development and IT management processes; and descriptions of IT infrastructure components. 
To assess the reliability of the data provided by SSA, we held discussions with agency officials who were knowledgeable about the agency’s data exchange infrastructure and determined that the data were sufficiently reliable for the purposes of our engagement. From our assessment of this information, we identified and selected systems that are key to enabling the exchange of electronic data—that is, those that support the largest number of outgoing data exchanges, are used by the most partners, and process the largest number of data requests. We verified our selection through discussions with the Director, Division of Information, Verification and Exchange Services. We held discussions with additional agency officials, including the Associate Commissioner, Office of Earnings, Enumeration and Administrative Systems; Associate Commissioner, Office of Systems, Electronic Services; Assistant Associate Commissioner, Office of Telecommunications and Systems Operations; Director, Office of Systems Security Operations Management; Director, Office of Electronic Information Exchange; and Director, Office of Strategic Services. We also reviewed documentation describing the agency’s plans to make improvements to its IT infrastructure and discussed these plans with agency officials. Additionally, we interviewed state and federal data exchange partners to obtain their views on the ability of SSA’s IT infrastructure to provide data that support their business operations. To determine the extent to which systems-related problems exist and affect SSA’s data exchange partners and their ability to carry out business operations, and how the agency resolves these problems, we selected and obtained the views of partners from multiple agencies in five states, a third-party consortium that represents states and territories that participate in two of SSA’s data exchange programs, and four federal agency partners. 
Together, these state and federal partners represented 24 programs that are supported by SSA’s data exchanges. Our criteria and methodology for selecting these partners are described later in this appendix. We discussed with these partners their experiences with SSA’s data exchanges, including the extent to which their ability to conduct business operations is affected by systems-related problems associated with the agency’s IT infrastructure. We reviewed and assessed documentation provided to us that described the types of systems-related problems that partners encountered in exchanging data. We also interviewed two SSA regional office data exchange coordinators to discuss any systems-related problems identified or reported to them by the partners they support, and to obtain information about programs that together fully utilize the IT infrastructure components that support the data exchange environment. To describe SSA’s efforts to ensure that its IT infrastructure can support the agency’s and its partners’ future data exchange environment, we held discussions with the agency officials and state and federal partners that we have previously described and examined data provided by SSA that illustrated increased transaction volume from 2007 to 2008. We also obtained and reviewed documentation that described increased requests for online access to certain data exchange systems over the past 24 months. Additionally, we reviewed relevant SSA strategic planning documents, including its IT strategic plan and vision document, to identify plans related specifically to data exchanges. We assessed these plans to identify activities specifically related to data exchanges and to determine the extent to which they addressed projected data exchange workloads and the IT resource requirements for efficiently and effectively supporting future requirements. 
In addition, we interviewed SSA officials and data exchange stakeholders familiar with the agency’s data exchange programs to identify the agency’s plans for addressing future data exchange workloads, including ensuring that its IT infrastructure can adequately support future demands for outgoing data exchanges to other federal and state agencies. We confirmed the information gathered from SSA’s regional data exchange coordinators and systems officials by corroborating it with selected other state and federal exchange partners. We determined that the information gathered was sufficiently reliable for the purposes of our review. Further, at the conclusion of our study in September 2009, we validated the examples of state and federal agencies’ experiences with SSA’s data exchanges with officials from those agencies. To select state and federal data exchange partners for this study, we identified a nonprobability sample based on our review of information provided by agency officials that listed and described all of the data exchange programs supported by SSA. From the list provided by the agency, we identified 814 outgoing data exchanges—those through which SSA provides data to state and other federal agencies. Of these exchanges, we identified 663 that provide SSA data to agencies in all 50 states, the District of Columbia, and 4 territories (American Samoa, the U.S. Virgin Islands, Guam, and Puerto Rico), and 83 that provide data to 16 federal agencies. From the 50 states, we selected 5 states based on the number of SSA data exchange programs in which they participate and the types of state programs the exchanges support (e.g., driver’s license issuance, voter registration, unemployment benefits administration, social services administration, and other programs). 
In selecting the 5 states, we identified 2 states that participated in the most data exchanges, 1 state that participated in an intermediate number of exchanges, and 2 states that participated in the fewest data exchanges. The 2 states participating in the most data exchanges were New York and California (25 and 19 data exchanges, respectively). The 1 state we selected with the intermediate number of data exchanges was Idaho (12 data exchanges). The 2 states participating in the fewest data exchanges were North Carolina and Iowa (10 and 9 data exchanges, respectively). We held discussions with officials from each of the states’ agencies that receive data through SSA’s data exchange programs to obtain information about any systems-related problems they may have experienced and the actions taken by SSA to resolve problems. Table 6 shows the states and agencies within those states that we contacted. We also held discussions with SSA’s major stakeholder user group organization—the American Association of Motor Vehicle Administrators (AAMVA)—which provides data exchange support to state driver’s licensing administrations and supports voter registration through the Help America Vote Verification system. We contacted chief information officers from each of the five selected states and AAMVA to obtain contact information for the officials responsible for the programs and systems that receive electronic data through participation in SSA’s data exchange programs. We based our selection of federal data exchange partners on the scope and impact that the agencies’ programs supported by SSA’s data exchanges have on the country’s population, including veterans’ programs beneficiaries, government and private employers, U.S. passport holders and applicants, U.S. tax filers, and Medicare beneficiaries. 
We held discussions with agency officials regarding the following federal programs to obtain information about their experiences with SSA’s data exchange programs and the systems that support them: the Department of Veterans Affairs, Veterans Benefits Administration’s benefits and insurance programs, and the Veterans Health Administration health care administration program; the Department of Homeland Security’s employment eligibility program; the Department of State’s passport verification and foreign service retiree benefit payment programs; and the Department of Health and Human Services, Centers for Medicare and Medicaid Services’ Medicare benefits administration program. Since we selected a nonprobability sample, the information obtained through discussions with the selected state and federal data exchange partners is not generalizable across the entire population of SSA’s data exchange partners. In addition to the contact named above, key contributions to this report were also made by Teresa F. Tucker, Assistant Director; Michael A. Alexander; Tonia B. Brown; Barbara S. Collier; Rebecca E. Eyler; Neil J. Doherty; Nancy E. Glover; Jacquelyn K. Mai; Lee A. McCracken; Thomas E. Murphy; Madhav S. Panwar; and Brandon S. Pettis.

The Social Security Administration (SSA) receives electronic data from other agencies to support its own programs, and provides electronic data to support more than 800 state and federal agency partners. This information aids in, among other things, the processing and distribution of beneficiary payments and the delivery of services such as driver's license issuance and voter registration. SSA relies on its information technology (IT) infrastructure--its databases, applications, networks, and IT management practices--to support its current and future needs for exchanging data with its state and federal partners. 
GAO was asked to (1) determine the extent to which SSA's IT infrastructure effectively and efficiently supports current data exchanges, and any systems-related problems affecting its exchange partners; and (2) describe SSA's efforts to ensure that its IT infrastructure can support the agency's and its partners' future data exchange environment. To do this, GAO analyzed agency documentation and interviewed SSA officials, as well as federal and state data exchange partners. Systems-related problems that affect SSA's ability to support outgoing data exchange programs have been few, and the agency has established effective procedures and mechanisms for addressing the problems that do occur. In this regard, SSA provides help-desk and on-site support to data exchange partners to help prevent or resolve problems, and uses procedures supported by a problem-identification and tracking system to facilitate problem resolution. State and federal partners with whom GAO held discussions stated that these efforts resulted in quick responses from SSA and effective resolution of problems that occurred. For example, a system that provides information for two data exchange programs that support driver's license issuance and voter registration in all 50 states was reported to have had almost 100 percent availability during the hours specified in the agreements governing these data exchanges. Further, all of the data exchange partners with whom GAO held discussions reported that the data that SSA provided were reliable. As a result, these partners stated that their ability to conduct business operations that depend on SSA data was not adversely affected by systems-related problems associated with SSA's IT infrastructure. SSA and its partners anticipate that the number of requests for outgoing data exchanges will continue to increase and that the exchanges will become more complex as agencies request that these exchanges take place through online, real-time transactions. 
However, SSA officials stated that the agency's existing IT infrastructure may not be able to support the increased demand that they and their partners anticipate. To address overall agency needs for a more cost-effective and efficient computing environment, the agency is taking steps to modernize its computing capabilities and supporting infrastructure. For example, the agency is in the process of implementing an updated database environment and upgrading its software applications--steps that are intended to enable expanded and more efficient IT service delivery, including the electronic exchange of data. However, the agency has not fully implemented IT management practices specifically related to its outgoing data exchange environment, such as conducting thorough analyses to project the expected increase in requests for data and online access. Conducting these analyses and using this information as input to the agency's target architecture (i.e., a formal description of the agency's future environment) are important practices to clearly define future requirements to guide the direction of the agency's data exchange programs. Implementing these management practices is essential to ensuring that the agency is well positioned to meet the growing needs of its data exchange partners.
Under the authority of the Attorney General, EOIR interprets and administers federal immigration laws by conducting formal quasi-judicial proceedings, appellate reviews, and administrative hearings. EOIR consists of three primary components: OCIJ, which is responsible for managing the immigration courts located throughout the United States where immigration judges adjudicate individual cases; the Board of Immigration Appeals (BIA), which primarily conducts appellate reviews of immigration judge decisions; and the Office of the Chief Administrative Hearing Officer, which adjudicates immigration-related employment cases such as employer sanctions for employment of unauthorized immigrants. EOIR was established on January 9, 1983, as a result of an internal DOJ reorganization. This reorganization combined the BIA with the immigration judge function previously performed by the former INS. The Office of the Chief Administrative Hearing Officer was added in 1987. EOIR is headed by a Director who reports directly to the Deputy Attorney General. EOIR’s mission is to provide for the fair, expeditious, and uniform interpretation and application of immigration law. In support of this mission, one of EOIR’s strategic goals is to adjudicate all cases in a timely manner while assuring due process and fair treatment for all parties. According to its strategic plan for fiscal years 2005 through 2010, EOIR plans to accomplish this goal by, among other things, (1) eliminating case backlog by the end of fiscal year 2008, (2) implementing improved caseload management practices, and (3) adjudicating cases within specified time frames. As of October 1, 2005, EOIR had 1,182 authorized full-time permanent positions. OCIJ was the largest of the three primary components with 789 positions. The majority of these 789 positions (745) were in the immigration courts located throughout the nation. Of these 745 positions, 225 were immigration judges. 
The remaining court staff included 45 court/deputy court administrators, 367 assistants/clerks, and 108 court interpreters. OCIJ provides overall program direction, articulates policies and procedures, and establishes priorities for the immigration courts. OCIJ is headed by a Chief Immigration Judge who carries out these responsibilities with the assistance and support of two Deputy Chief Immigration Judges and nine Assistant Chief Immigration Judges (ACIJ). The ACIJs serve as the principal liaison between OCIJ headquarters and the immigration courts and have supervisory authority over the immigration judges, the court administrators, and judicial law clerks. At the court level, court administrators manage the daily court operations as well as the administrative staff. Currently, there are 53 immigration courts, including 17 courts that are co-located with a detention center, correctional facility, or service processing center, and a court located at EOIR headquarters in Falls Church, Virginia, as well as numerous other hearing locations. The sizes of the immigration courts vary. In fiscal year 2005, the smallest of the 53 immigration courts (Fishkill in New York) consisted of 2 authorized legal assistants. In contrast, the largest court (New York City in New York) consisted of the following authorized staff: 27 immigration judges, 1 court administrator, 1 deputy court administrator, 46 assistants/clerks, and 8 court interpreters. The immigration judges are responsible for hearing all cases that come before them, and act independently in deciding the cases. They hear a wide range of immigration-related cases that consist primarily of removal proceedings conducted to determine whether certain immigrants are subject to removal from the country. If DHS alleges a violation of immigration law(s) that is subject to adjudication by the immigration courts, it serves the immigrant with a charging document, ordering the individual to appear before an immigration judge. 
The charging document is also filed with the immigration court having jurisdiction over the immigrant, and advises the immigrant of, among other things, the nature of the proceeding; the alleged act(s) that violated the law; the right to an attorney at no expense to the government; and the consequences of failing to appear at scheduled hearings. Removal proceedings generally require an immigration judge to make (1) a determination of the immigrant’s removability from the United States and, (2) thereafter, if the immigrant applies, a decision on whether the immigrant is eligible for a form(s) of relief from removal, such as asylum, adjustment of status, cancellation of removal, or other remedies, or for voluntary departure, which is an alternative to removal. Once an immigration judge orders the removal of an immigrant, DHS is responsible for carrying out the removal. As shown in figure 1, immigration court removal proceedings generally involve an initial master calendar hearing and, subsequently, an individual merits hearing. During the master calendar hearing, the immigration judge is to ensure that the immigrant understands the immigration violation charges and provide the immigrant information on free or low-cost legal representation available in the area. During the individual merits hearing, the merits of the case are presented before the immigration judge by the immigrant, or the immigrant’s legal representative, and the DHS attorney who is prosecuting the case. DHS must prove that an immigrant is in the United States unlawfully and should be removed. In most cases, the immigration judge issues an oral decision at the conclusion of the individual merits hearing. The immigration judge may order the alien removed or may grant relief. If the immigration judge decides that removability has not been established by DHS, he or she may terminate the proceedings. 
Once a case is completed, if the immigrant or DHS disagrees with the immigration judge’s decision, either party or both parties may appeal the decision to the BIA. If the BIA ruling is adverse to the immigrant, the immigrant generally may file an appeal in the federal court system. According to EOIR, if DHS disagrees with the BIA’s ruling, in rare instances, the case may be referred to the Attorney General for review. From fiscal year 2000 through fiscal year 2005, the number of newly filed cases outpaced cases completed. Consequently, the immigration courts’ caseload increased about 39 percent, from about 381,000 cases at the end of fiscal year 2000 to about 531,000 cases at the end of fiscal year 2005. During the same period, in 4 of 6 years, the number of newly filed cases received was greater than the number of cases completed. The number of newly filed cases grew about 44 percent, from about 252,000 in fiscal year 2000, to about 363,000 in fiscal year 2005. Meanwhile, the number of completed cases increased about 37 percent, from about 253,000 cases in fiscal year 2000, to about 347,000 cases in fiscal year 2005. (See fig. 2.) According to EOIR officials, the annual increase in newly filed cases can be driven by several factors. These factors include enhanced border and interior enforcement actions, changes in immigration laws and regulations, and emerging or special situations. The greatest increase (about 47,000 or 16 percent) in the number of cases completed by the immigration courts occurred between fiscal years 2004 and 2005. This increase is in large part because of an increase in the number of in absentia decisions—decisions in which a judge orders an immigrant removed from the United States because the immigrant did not appear for a scheduled removal hearing. The number of in absentia cases increased about 80 percent, from about 70,000 cases in fiscal year 2004 to about 126,000 cases in fiscal year 2005. 
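As a quick sanity check, the growth rates reported above can be recomputed from the rounded totals in the text. This is an illustrative calculation using the approximate figures only; the courts' exact counts would differ slightly:

```python
def pct_growth(old: int, new: int) -> int:
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

# Approximate immigration court figures from the text (FY2000 vs. FY2005)
print(pct_growth(381_000, 531_000))  # total caseload: 39 percent
print(pct_growth(252_000, 363_000))  # newly filed cases: 44 percent
print(pct_growth(253_000, 347_000))  # completed cases: 37 percent

# In absentia decisions (FY2004 vs. FY2005)
print(pct_growth(70_000, 126_000))   # 80 percent
```

Each rounded result matches the percentage stated in the text, confirming the figures are internally consistent.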
According to EOIR officials, in absentia cases require less time to complete because there is limited or no conflicting evidence for the court to hear and review when the immigrant does not appear to respond to the charge of removability. While there has been an increase in the number of immigration judges since fiscal year 2000, the immigration court caseload has grown at a much more rapid pace. The number of on-board immigration judges increased by 6 (about 3 percent), from 206 to 212 between fiscal years 2000 and 2005, while the immigration courts’ caseload increased about 39 percent during the same period. As a result, the average number of cases per on-board immigration judge increased by slightly more than 35 percent, from 1,852 in fiscal year 2000 to 2,505 in fiscal year 2005 (see fig. 3). In particular, the case-per-judge ratios were generally higher in southwestern border courts, where the proportion of in absentia cases is also among the highest in the country. For example, in fiscal year 2005, the Harlingen and San Antonio immigration courts in Texas each had a case-per-judge ratio of over 8,000 compared to the average for all courts of 2,505. OCIJ has taken steps to reduce the age of proceedings awaiting adjudication. According to an OCIJ memorandum, in March 2003, the immigration courts established a priority for completing their older proceedings. The courts set a series of goals to complete all proceedings older than 4 years; since then, they have introduced additional goals targeting proceedings older than 3 years. OCIJ’s goals are summarized in table 1. Our analysis of the immigration courts’ proceedings data shows that while the courts have achieved success in reducing the number of proceedings older than 4 years between fiscal year 2003 and December 31, 2005, the courts did not meet their goal of completing all proceedings more than 3 years old by December 31, 2005 (see table 2). 
At the end of fiscal year 2003, the courts had 13,031 proceedings that had been awaiting adjudication for 3 or more years. Between fiscal year 2003 and December 31, 2005, the number of proceedings 6 or more years old was cut about 48 percent, from 1,058 to 547; the number of proceedings between 5 and 6 years old dropped to about a quarter of its fiscal year 2003 level, from 2,375 to 547; and the number of proceedings between 4 and 5 years old decreased about 37 percent (3,185 to 2,010). However, at the end of December 2005, 9,412 proceedings remained open after 3 or more years. OCIJ monitors immigration courts’ caseload to assign cases to judges within a court. According to OCIJ, in general, the need for court personnel is driven by the immigration courts’ caseload. Specifically, OCIJ considers the number of newly filed cases and cases awaiting adjudication from prior years, historical data, and the nature of the caseload, such as the type of cases prevalent in the court and their complexity. As newly filed cases are received, OCIJ said that it evaluates the impact of these cases on the allocation of resources at the immigration courts. For example, according to OCIJ, through experience, it has learned that the immigration courts will have difficulty meeting and maintaining their case adjudication time goals when immigration judges have more than 1,050 and 1,500 newly filed cases involving non-detained and detained immigrants, respectively. Therefore, OCIJ attempts to keep the list of cases that appears on the judges’ calendars under these levels. In addition, on the basis of feedback from the courts, the responsible ACIJ notifies OCIJ headquarters of any unexpected increases in newly filed cases in a given court due to emerging or special situations, such as mass migration or enhanced border enforcement actions. According to OCIJ, if a pattern of need emerges, it reassigns personnel or provides other assistance, if available. 
OCIJ noted that the judges’ calendar of cases might vary among courts due to the type and complexity of the cases received. Thus, the case-per-judge ratios will be higher in some courts than others. Courts with a high number of change of venue cases (cases that are transferred from one court to another court) and/or in absentia cases that require less time to complete have a higher volume of cases per judge than courts with more merits asylum cases and other complex cases awaiting adjudication. For example, judges in the Harlingen and San Antonio immigration courts located in Texas are assigned a higher number of cases because these courts have a high number of change of venue and in absentia cases adjudicated in a given year compared to the San Francisco, California, New York City, New York, and Miami, Florida, immigration courts, where most cases are merits asylum hearings that require more time to complete. In fiscal year 2005, judges in the Harlingen and San Antonio immigration courts had, on average, over 8,000 cases compared to judges in San Francisco, New York City, and Miami immigration courts who had, on average, about 1,200, 1,500, and 2,400 cases, respectively. Within each immigration court, newly filed cases are generally assigned to immigration judges through an automated process; however, some flexibility exists. After a charging document has been filed, either DHS through an interactive scheduling system or immigration court staff are to enter data on newly filed cases in EOIR’s case management system. The case management system automatically assigns newly filed cases within each court on the basis of the next available judge’s calendar, rotating through all of the judges to equalize the number of cases assigned to each immigration judge. In addition, OCIJ stated that court staff has the flexibility to manually assign newly filed cases to a specific immigration judge rather than use the automated system. 
For example, the court administrator may manually schedule some cases to correct inequities that occurred in the number and type of cases that were assigned to a judge by the automated system. Also, cases that are re-entering the immigration court system are generally manually assigned to the immigration judge who had initially adjudicated the case. Further, if a judge already has a heavy caseload, OCIJ officials said that an ACIJ, through authority delegated by the Chief Immigration Judge, may decide to exclude a judge from assignment of newly filed cases through the automated system. EOIR’s Strategic Plan for fiscal years 2005 through 2010 states that it intends to consider changes in workload, establish better methods to project future workload, and adjust resources accordingly. Additionally, EOIR proposes to refine its current caseload management practices to ensure that cases move through the system as efficiently as possible. For example, EOIR plans to study the rates at which immigrants are failing to appear at their court proceedings and to schedule cases so that court time is used more efficiently. EOIR officials stated they are in the early stages of implementing the objectives outlined in the Strategic Plan. OCIJ’s process for managing court caseload is to monitor the caseload of each immigration court to identify those courts that are unable to meet their established goals for timely case adjudication, and provide assistance to these courts in meeting their goals. According to OCIJ, it primarily addresses immigration judge staff shortages at immigration courts through detailing judges from their assigned court to a court in need of assistance. Details usually occur to cover situations such as emerging needs that result in a surge of newly filed cases; staff shortages in a court due to illness, retirements, or annual leave; or the need to hear cases in other designated hearing locations. OCIJ advertises the detail opportunities to solicit volunteers. 
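The case-assignment process described earlier (automated rotation through a court's judges to equalize caseloads, with manual overrides and the ability to exclude a judge who already has a heavy caseload) can be sketched roughly as follows. This is a hypothetical illustration only; the names and structure are not drawn from EOIR's actual case management system:

```python
from collections import deque

class CaseAssigner:
    """Hypothetical sketch of round-robin case assignment within one court."""

    def __init__(self, judges):
        self.rotation = deque(judges)   # order of upcoming automated assignments
        self.excluded = set()           # judges pulled from the automated rotation
        self.caseload = {j: [] for j in judges}

    def assign(self, case):
        """Automatically assign a case to the next eligible judge in rotation."""
        for _ in range(len(self.rotation)):
            judge = self.rotation[0]
            self.rotation.rotate(-1)    # move this judge to the back of the line
            if judge not in self.excluded:
                self.caseload[judge].append(case)
                return judge
        raise RuntimeError("no eligible judge available")

    def assign_manually(self, case, judge):
        """Manual override, e.g., returning a case to its original judge."""
        self.caseload[judge].append(case)

assigner = CaseAssigner(["Judge A", "Judge B", "Judge C"])
assigner.excluded.add("Judge C")        # excluded due to a heavy existing caseload
assigned = [assigner.assign(f"case {n}") for n in range(4)]
# Judges A and B alternate; Judge C receives no automated assignments
```

In this sketch, excluded judges still hold their existing cases but are skipped by the rotation, mirroring the ACIJ's delegated authority to remove a judge from automated assignment.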
In selecting from the judges that volunteer, OCIJ said that it considers the needs of these immigration judges’ respective assigned courts. Volunteers from courts that have heavy caseloads and are not meeting their goals for timely case adjudication will usually not be selected. According to EOIR, it does not maintain readily available data on the number and duration of immigration judge details. OCIJ also uses available technology to address staff shortages. Many courts have the capability to use videoconferencing to conduct immigration hearings in other courts and locations such as detention centers and correctional facilities throughout the country. As of May 1, 2006, EOIR had videoconferencing capability at 47 of the 53 immigration courts, and 77 other locations where immigration hearings were conducted. According to OCIJ, videoconferencing allows immigration judges in one court to assist another immigration court with an unusually heavy caseload, on an ad hoc basis. For example, the two immigration judges in the court located at EOIR headquarters in Falls Church, Virginia, use videoconferencing to address short-term resource needs as they arise in the other immigration courts nationwide. OCIJ said that it will use this technology where available and feasible until this remedy is deemed insufficient to meet the needs of the courts. OCIJ also said that it has used videoconferencing as an interim measure while it assesses the ongoing need to establish a new immigration court. According to EOIR’s fiscal year 2005 performance work plans, ACIJs were expected to increase the usage of video technology to address case requirement needs of immigration courts. In addition, EOIR transfers responsibility for some hearing locations among immigration courts to more evenly distribute the caseload among immigration judges. 
For example, in July 2003, EOIR redistributed the Detroit, Michigan, immigration court’s caseload by transferring cases from Cincinnati and Cleveland, Ohio, to the Arlington, Virginia, court; and cases from Louisville, Kentucky, to the Memphis, Tennessee, court. According to EOIR, unless the parties are notified otherwise, immigration hearings continue to be conducted at the same hearing locations in each of these states, with immigration judges traveling to those locations or holding hearings by videoconference when appropriate. EOIR stated that these transfers are infrequent. When a pattern of sustained need emerges, OCIJ officials said that they recommend that EOIR establish a court in a new location, usually a previous hearing location—especially if there is a significant distance to travel, along with significant travel costs. A permanent court is usually recommended if the hearing location can no longer be effectively covered by an existing immigration court (e.g., if a court fails to meet its goals for timely case adjudication). However, according to OCIJ, whether a new court can be established depends on the available resources. During fiscal years 2000 through 2005, EOIR established three new immigration courts. For example, in July 2005, EOIR established the newest immigration court in Salt Lake City, Utah, which was previously a hearing location of the Denver immigration court in Colorado. EOIR recently said that it will open a new court in Cleveland, Ohio, in August 2006 and is requesting funds to open four additional courts in fiscal year 2007. 
To assist in ensuring that the immigration courts adjudicate cases fairly and in a timely manner—one of the agency’s stated strategic objectives—EOIR has established target time frames for each of OCIJ’s 11 case types. Each case type has an associated case completion goal (the percentage of cases to be completed within the established time frame). (See table 3 for a list of case types and their corresponding goals.) The case completion goals were formulated beginning in June 2000, when EOIR’s Director recognized that not all case types had completion time frames. Some case types had completion time frames established by law; others had long-standing agency completion time frames, while some had none. Consequently, EOIR’s Director solicited input from OCIJ regarding the impact and feasibility of establishing completion goals across all case types. OCIJ, in turn, solicited input from the immigration judges and court administrators. Over a 2-year period, EOIR collaborated with OCIJ to develop case completion goals for immigration courts covering the 11 case types. In May 2002, OCIJ formally implemented these goals. The goals for 4 of the 11 case types have been identified as adjudication priorities, and the courts’ success in meeting them is published in DOJ’s annual budget report and “Report on Performance and Accountability.” The “Report on Performance and Accountability” presents DOJ’s performance progress as required by the Government Performance and Results Act of 1993. EOIR documents the immigration courts’ success in meeting the case completion goals for the 11 case types in internal quarterly reports. According to EOIR, the case completion goal reports are intended to measure whether the courts are meeting their completion goals, not to define the total caseload of the courts (all cases awaiting adjudication). 
In developing these reports, EOIR management decided to exclude from the measurement certain categories of cases that, due to extenuating circumstances, are not expected to be completed within the established goals. For example, DHS is responsible for conducting background and security checks on all immigrants in immigration court proceedings. Since the courts cannot grant an applicant relief from removal until all checks have been favorably completed, these cases are exempted from case completion goals. As a result, the number of cases covered by the quarterly reports is less than the total court caseload. Additionally, depending on what cases are excluded from the case completion goals, the makeup of the cases included in the reports can change from one quarter to the next. These facts are not clearly reflected in the reports themselves. Our preliminary review of EOIR's quarterly reports identified inconsistencies in some reports. For example, we noted a recurring inconsistency between reports: the number of cases awaiting adjudication at the end of a quarter was not the same as the number of cases awaiting adjudication at the beginning of the following quarter. EOIR provided several reasons for the inconsistency, as follows: (1) the EOIR case management system is a live database that is constantly changing as events occur to immigration cases in the courts; (2) changes occur to the number of cases awaiting adjudication from one quarter to another when categories of cases are exempted from the case completion goals, since once a case is exempted it is no longer included in the reports; (3) cases double-entered by DHS in the automated scheduling system were deleted; (4) reconciliations were necessary due to changes to date fields to update cases in the database; (5) delays in data entry occurred; and (6) programming errors occurred in the calculation of the data.
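The recurring boundary inconsistency described above lends itself to a simple logic test: the count of cases awaiting adjudication at the end of one quarter should equal the opening count of the next quarter. The sketch below illustrates such a check; the field names and figures are hypothetical, not EOIR's actual report schema.

```python
# Illustrative quarter-boundary consistency check. Each record stands in for
# one quarterly report; "pending_start"/"pending_end" are hypothetical field
# names for cases awaiting adjudication at the start and end of the quarter.
reports = [
    {"quarter": "FY2002 Q1", "pending_start": 400, "pending_end": 430},
    {"quarter": "FY2002 Q2", "pending_start": 425, "pending_end": 415},
    {"quarter": "FY2002 Q3", "pending_start": 415, "pending_end": 440},
]

def boundary_inconsistencies(reports):
    """Return (quarter, prior quarter's closing count, opening count) for each mismatch."""
    return [
        (curr["quarter"], prev["pending_end"], curr["pending_start"])
        for prev, curr in zip(reports, reports[1:])
        if prev["pending_end"] != curr["pending_start"]
    ]

print(boundary_inconsistencies(reports))  # → [('FY2002 Q2', 430, 425)]
```

A check of this kind flags only that a boundary mismatch exists; distinguishing among the six causes EOIR cited (live-database churn, exempted cases, deleted duplicates, and so on) would still require the underlying query documentation.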
We could not evaluate the reasonableness of EOIR's explanation; however, EOIR's reasons did not appear to explain completely the inconsistency between the number of cases awaiting adjudication at the end of the quarter and the number of cases awaiting adjudication at the beginning of the following quarter. EOIR said that the agency does not use the quarterly reports to monitor and report on cases awaiting adjudication; rather, other comprehensive reports serve that purpose. According to EOIR, the case completion goal reports have a specific purpose: to report solely on the percentage of cases completed within the goals for the appropriate reporting period. EOIR stated that it evaluates the case completion goal data against other sources of data to ensure the accuracy of the case completion goal data prior to release within the agency, following established protocols. We also identified inconsistencies in a 2002 report where the reported total number of completions did not equal the sum of its components. In response to our inquiry about this inconsistency, EOIR said that a programmer had used the wrong end date for a quarter and therefore retrieved more cases than should have been included. EOIR has changed its criteria for compiling the case completion goal reports over time, as EOIR management has established new specifications to identify the cases to be included in the case completion goals. When the agency approves categories of cases to be excluded from the reports, the queries used to run the reports are updated accordingly. EOIR reported that it maintains the historical documentation of the changes it has made to the reports through memos approved by EOIR management outlining each change in the case completion goal criteria. However, EOIR does not maintain the individual queries used to run each of the prior quarterly reports; it only maintains the current set of queries.
As a result, we could not replicate the past reports to determine the accuracy of the case completion goal data. The inconsistencies indicate that EOIR should maintain appropriate documentation to demonstrate the accuracy of data reported by EOIR. Another means that EOIR/OCIJ uses to evaluate its courts' performance is peer evaluation—its Immigration Court Evaluation Program (ICEP). The ICEP was established in July 1997 to evaluate court operations based on objectives established by OCIJ, identify challenges to achieving agency goals, and recommend appropriate corrective measures. The evaluation program seeks to make recommendations for improving court operations by evaluating the courts' organizational structure, caseload, and workflow processes to assess the efficiency of the court in accomplishing its mission. Judges' individual hearing decisions are the only aspect of court operations that is not evaluated. OCIJ established a Court Evaluation Unit (CEU) to manage the coordination and operation of the court evaluation program. The CEU selects courts to be evaluated, notifies the selected courts, prepares an evaluation schedule, and sends out pre-evaluation questionnaires. While the Chief Immigration Judge selects the evaluation team members, the CEU is responsible for training the evaluation team as well as identifying a team leader. The evaluation team is composed of volunteers: one or more immigration judges, court administrators, court interpreters, and legal technicians. The participation of team members from diverse courts and positions is intended to facilitate the exchange of information regarding best practices of court operations. The size of the evaluation team depends on the size of the court being evaluated.
For example, in fiscal year 2004, the team that evaluated the Bradenton immigration court in Florida, a small court with 2 authorized full-time permanent immigration judges, consisted of 3 team members, while the team that evaluated the Miami immigration court in Florida, a large court with 21 authorized full-time permanent immigration judges, had 13 members. OCIJ has established an evaluation program cycle in which approximately 10 to 12 courts have been evaluated per year. Each court has typically been evaluated approximately once every 4 years. During the onsite visit, the evaluation team gathers information about the court under review in a variety of ways. The evaluation team conducts interviews with local court personnel, DHS officials, and members of the private bar. Evaluation team members select and review a random sample of court files and administrative records maintained by the court. While conducting interviews and reviewing court documentation, the evaluation team assesses six aspects of court operations: immigration court initiatives, security, case management and case processing, DHS/immigration bar relations, administrative operations, and database management. As shown in figure 5, the ICEP is a five-stage process. Following the weeklong onsite visit, the evaluation team summarizes the evaluation findings and recommendations and prepares a draft report for the Chief Immigration Judge's review. Within 10 business days of receipt of the draft report, the evaluated court is to submit written comments on the draft report. After reviewing the draft report and court's comments, the Chief Immigration Judge prepares an action plan addressing the draft report's specific recommendations—the action plan clarifies which corrective actions will be taken, who will be responsible for completing each action, and the date by which the action must be completed.
Approximately 3 months after completion of the action plan, the court must submit a written "Self-Certification" attesting to the actions taken to implement the action plan. After receipt of the self-certification, the CEU drafts a final report for the Deputy Chief Immigration Judge's signature. After the court evaluation process is complete, the final evaluation report is distributed to the EOIR Director and Deputy Director, the Chief Immigration Judge, the Deputy Chief Immigration Judges, the responsible ACIJ, the liaison immigration judge and court administrator for the evaluated court, the chief clerk of the immigration court, all evaluation team members, and the CEU program analyst. EOIR/OCIJ also monitors complaints against immigration judges, a practice that began in October 2003, at the direction of the EOIR Director. Since then, complaint reports have been generated on a monthly basis for internal use only. According to EOIR, the goal of the reports is to provide a centralized and comprehensive compilation of written and oral complaints to EOIR management regarding immigration judges' conduct on the bench, as well as the status of those complaints. OCIJ sends the reports to the EOIR Director on a monthly basis. Complaints against immigration judges are received from a variety of sources, including immigrants, the immigrants' attorneys, DHS trial attorneys, other immigration judges, other court staff, OCIJ headquarters staff, and others. They are raised to OCIJ management either orally or in writing, primarily through the ACIJ with supervisory responsibility over the affected immigration judge. In meetings with the DHS components and the American Immigration Lawyers Association, EOIR said that it has advised them that their employees or members should raise complaints, as issues arise, to the appropriate ACIJ.
According to EOIR, OCIJ is to immediately notify the EOIR Director when a complaint is filed against an immigration judge, even if OCIJ has not had an opportunity to verify the accuracy of the allegation. According to EOIR, the ACIJ with supervisory responsibility over the affected immigration judge is responsible for addressing the complaint, unless a referral to DOJ's Office of Professional Responsibility is deemed warranted. The Office of Professional Responsibility, which reports directly to the Attorney General, is responsible for investigating allegations of misconduct involving Department attorneys, investigators, or law enforcement personnel, where the allegations relate to the exercise of the authority of an attorney to investigate, litigate, or provide legal advice. When a referral is deemed warranted, the matter can be referred to the Office of Professional Responsibility for investigation either by OCIJ, through EOIR's Office of General Counsel, or by the Office of General Counsel directly. Matters involving criminal or serious administrative misconduct, such as an allegation that a judge had a business relationship with an immigration attorney, are referred to DOJ's Office of the Inspector General. According to its complaint reports, OCIJ received 129 complaints against immigration judges during fiscal years 2001 through 2005. As of September 30, 2005, OCIJ had taken action on 121 of these complaints; the remaining 8 were still under review. In response to the 121 complaints, OCIJ took 134 actions.
The actions taken were as follows: about 25 percent (34) were found to have no merit; about 25 percent resulted in disciplinary actions against the judges that included counseling (18), written reprimand (9), oral reprimand (3), and suspension (4); about 22 percent (29) were referred to DOJ’s Office of Professional Responsibility or Office of the Inspector General or EOIR’s Office of General Counsel for further review; and the remaining 28 percent (37) resulted in various other actions such as informing complainants of the Office of Professional Responsibility process or their appeal rights to BIA. In January 2006, the Attorney General requested a comprehensive review of the immigration courts, to include the quality of work as well as the manner in which it is performed. According to DOJ officials, the review was initiated in part in response to complaints about the professionalism of immigration judges, including their treatment of the people appearing before them and the quality of their work. The review included, among other things, interviews with selected court personnel, private attorneys and immigration organizations, observations of court hearings, and on-line surveys of other court personnel and DHS trial attorneys. On August 9, 2006, the Attorney General announced the completion of the review and a number of reforms to improve the performance and quality of the immigration court system. They include, among other reforms, the establishment of performance evaluations for immigration judges; the development of an immigration law examination for newly appointed immigration judges; the hiring of more immigration judges and judicial law clerks; and improvements in technology and support to strengthen the courts’ ability to record, transcribe, and interpret court proceedings. EOIR and its immigration courts play a critical role in upholding immigration law. Immigrants depend upon the courts to ensure the timely and fair adjudication of their cases, and U.S. 
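The complaint-action breakdown above can be verified with simple arithmetic. The sketch below recomputes the reported percentages from the counts given for the 134 actions:

```python
# Recompute the percentages for the 134 actions OCIJ took on 121 complaints
# (counts as reported for fiscal years 2001 through 2005).
actions = {
    "no merit": 34,
    # counseling + written reprimand + oral reprimand + suspension
    "disciplinary": 18 + 9 + 3 + 4,
    "referred for further review": 29,
    "other actions": 37,
}
total = sum(actions.values())  # 134
for label, count in actions.items():
    print(f"{label}: {count} of {total} ({count / total:.0%})")
```

The counts sum to 134, and the rounded shares (25, 25, 22, and 28 percent) match the "about" figures reported above.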
residents depend upon the courts to order the removal of individuals from the United States who lack a legal right to be here. If the increase in caseload continues to outpace the growth in the number of immigration judges, the strain on the immigration courts will likely intensify. Given these conditions, EOIR will be challenged to judiciously manage its caseload and improve its courts’ performance. EOIR has taken steps to improve the immigration courts’ performance. As part of this process, EOIR has used quarterly case completion goal reports that contained inconsistencies. However, EOIR’s lack of historical data on the individual queries used to run each quarterly report precluded our ability to replicate the data and determine the accuracy of the reports. By better documenting its case completion goal data, EOIR would enable users of the data, including members of its management, to better understand exactly what is being measured and the data’s implications for the courts’ efficiency. To more accurately and consistently reflect the immigration courts’ progress in the timely adjudication of immigration cases, we recommend that the Director of EOIR (1) maintain appropriate documentation to demonstrate the accuracy of case completion goal reports; and (2) clearly state what cases are being counted in the reports. After reviewing a draft of this report, EOIR responded in an e-mail that it concurred with GAO’s recommendations. EOIR also provided technical comments, which we have included as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Attorney General, the Director of EOIR, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you have any questions about this report or wish to discuss it further, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report can be found in appendix III. Our objectives in this report are to answer the following questions: (1) in recent years, what has been the trend in immigration courts' caseload, (2) how does the Office of the Chief Immigration Judge (OCIJ) assign and manage immigration court caseload, and (3) how does the Executive Office for Immigration Review (EOIR)/OCIJ evaluate the immigration courts' performance? To address these objectives, we met with officials from the Department of Justice's EOIR headquarters to obtain information and documentation on caseload trends, caseload management, and evaluation of immigration courts. To gain a better understanding of the operations and management of immigration courts, we also visited four immigration courts—Arlington in Arlington, Virginia; Newark in Newark, New Jersey; and two courts in New York City, New York. We selected these four courts to include courts varying in size, based on the number of immigration judges. At these locations, we observed court proceedings and met with immigration judges, court administrators, and attorneys who litigate cases before the immigration courts—attorneys from the Office of Chief Counsel of DHS's Immigration and Customs Enforcement and private bar attorneys. Furthermore, we obtained and analyzed case information contained in EOIR's case management system as well as staffing data for fiscal years 2000 through 2005 and OCIJ's reports for court evaluations conducted in fiscal years 2000 and 2004. We also interviewed representatives of the National Association of Immigration Judges, the American Immigration Lawyers Association, and the American Bar Association, Commission on Immigration.
To address the first objective concerning the trend in immigration courts' caseload in recent years, we reviewed data from EOIR's case management system, the Automated Nationwide System for Immigration Review, and obtained and reviewed relevant documents, regulations, and policies pertaining to the immigration courts' caseload and factors affecting caseload. We assessed the reliability of those data needed to answer this objective by (1) performing electronic testing for obvious errors in accuracy and completeness, (2) reviewing related documentation about the data and the system that produced them, including a contractor's report on data verification of the case management system, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. From this system, we generated immigration court caseload data for fiscal years 2000 through 2005 for all cases (proceedings, bond redeterminations, and motions to reopen or reconsider) and analyzed them for accuracy and completeness. Using SAS software, based on criteria provided by EOIR, we generated and reviewed unique data at both the global and immigration court level, on the number of newly filed cases, cases awaiting adjudication, completed cases, and in absentia decisions, as well as the age of proceedings awaiting adjudication. To address the second objective concerning how OCIJ assigns and manages immigration courts' caseload, we conducted interviews with OCIJ officials, conducted site visits to four immigration courts, and reviewed EOIR's authorized and on-board staffing data for fiscal years 2000 through 2005, as well as their procedures for detailing immigration judges. We also reviewed policies, procedures, and other documents relating to OCIJ's caseload management.
According to EOIR, the staffing data are from the Department of Agriculture's National Finance Center database, which handles payroll and personnel data for DOJ and other agencies. While we did not independently verify the reliability of the staffing data, we compared them with other supporting documents, when available, to determine data consistency and reasonableness. To address the third objective concerning how EOIR/OCIJ evaluates the immigration courts' performance, we obtained and reviewed from EOIR internal quarterly case completion goal reports for fiscal years 2001 to 2005; documents concerning the establishment and refinement of the case completion goals; 22 court evaluation reports and related documents for the 12 immigration courts evaluated in fiscal years 2000 and 2004; and monthly reports containing information on complaints against immigration judges received in fiscal years 2001 to 2005. Further, we reviewed relevant memos and documents prepared by EOIR officials pertaining to EOIR's monitoring and evaluation programs, as well as the Department of Justice's "Report on Performance and Accountability" and budgets for fiscal years 2000 through 2005. To assess the reliability of EOIR's case completion goal reports, we (1) performed logic testing of the data for obvious inconsistencies in accuracy and completeness and (2) interviewed and sent questions to agency officials knowledgeable about the reports. We also reviewed the relevant internal control standards for such reports. When we found inconsistencies in the reports, we brought them to EOIR officials' attention, and they provided reasons for the inconsistencies. However, we could not evaluate the reasonableness of EOIR's explanations of the inconsistencies or the overall reliability of each of its quarterly reports because EOIR has changed its criteria for compiling the reports over time and only maintains documentation on the current set of queries used to run the reports.
Therefore, we determined that the data in the quarterly reports were not sufficiently reliable for purposes of this report. We conducted our work from March 2005 through August 2006 in accordance with generally accepted government auditing standards. A type of relief from deportation, removal, or exclusion for an immigrant who is eligible for Lawful Permanent Resident status based on a visa petition approved by the Department of Homeland Security (DHS). The status of an immigrant may be adjusted by the Attorney General, in his discretion, to that of a lawful permanent resident if a visa petition on behalf of the immigrant has been approved, an immigrant visa is immediately available at the time of the immigrant's application for adjustment of status, and the immigrant is not otherwise inadmissible to the United States. An asylum application initially filed with DHS's U.S. Citizenship and Immigration Services. Immigrants may request a number of forms of relief or protection from removal such as asylum, withholding of removal, protection under the Convention Against Torture, adjustment of status, or cancellation of removal. Many forms of relief require the immigrant to fill out an appropriate application. An immigrant may be eligible for protection and immunity from removal if he or she can show that he or she is a "refugee." The Immigration and Nationality Act generally defines a refugee as any person who is outside his or her country of nationality or, in the case of a person having no nationality, is outside any country in which such person last habitually resided, and who is unable or unwilling to return to, and is unable or unwilling to avail himself or herself of the protection of, that country because of persecution or a well-founded fear of persecution on account of race, religion, nationality, membership in a particular social group, or political opinion. Immigrants generally must apply for asylum within 1 year of arrival in the United States.
In the absence of exceptional circumstances, final administrative adjudication of the asylum application, not including administrative appeal, must be completed within 180 days after the date the application is filed. The DHS may detain an immigrant who is in removal or deportation proceedings and may condition his or her release from custody upon the posting of a bond to ensure the immigrant's appearance at the hearing. The amount of money set by DHS as a condition of release is known as a bond. A bond may be set as a condition of voluntary departure at the master calendar hearing, and a bond must be set by an immigration judge as a condition for allowing an immigrant to voluntarily leave the country at the conclusion of proceedings. When DHS has set a bond amount as a condition for release from custody or has determined not to release the immigrant on bond, the immigrant has the right to ask an immigration judge to redetermine the bond. In a bond redetermination hearing, the judge can raise, lower, or maintain the amount of the bond; however, the Immigration and Nationality Act provides that a bond of at least $1,500 is required before an immigrant may be released. In addition, the immigration judge can eliminate the bond or change any of the bond conditions over which the immigration court has authority. The bond redetermination hearing is completely separate from the removal or deportation hearing. It is not recorded and has no bearing on the subsequent removal or deportation proceeding. The immigrant and/or DHS may appeal the immigration judge's bond redetermination decision to the Board of Immigration Appeals. There are two different forms of cancellation of removal: (A) Cancellation of removal for certain lawful permanent residents who were admitted more than 5 years ago, have resided in the United States for 7 or more years, and have not been convicted of an aggravated felony.
Application for this form of discretionary relief is made during the course of a hearing before an immigration judge. (B) Cancellation of removal and adjustment of status for certain nonpermanent resident immigrants who have maintained continuous physical presence in the United States for 10 years and have met all the other statutory requirements for such relief. Application for this form of discretionary relief is made during the course of a hearing before an immigration judge. The status of an immigrant who is granted cancellation of removal for certain nonpermanent resident immigrants is adjusted to that of an immigrant lawfully admitted for permanent residence. All proceedings, bond redeterminations, and motions to reopen or reconsider that are before the immigration courts. A case that has not been completed. A case is considered completed once an immigration judge renders a decision. Proceedings may also be completed for other reasons, such as administrative closures, changes of venue, and transfers. All cases awaiting adjudication. Immigration judges, for good cause shown, may change venue (move the proceeding to another immigration court) only upon motion by one of the parties, after the charging document has been filed with the immigration court. The regulation provides that venue may be changed only after one of the parties has filed a motion to change venue and the other party has been given notice and an opportunity to respond. A written instrument prepared by DHS charging an immigrant with a violation of immigration law. If an immigrant in expedited removal proceedings claims under oath to be a U.S. citizen, to have been lawfully admitted for permanent residence, to have been admitted as a refugee, or to have been granted asylum, and DHS determines that the immigrant has no such claim, he or she can obtain a review of that claim by an immigration judge.
If an immigrant seeking to enter the United States has no documents or no valid documents to enter, but expresses a fear of persecution or torture, or an intention to apply for asylum, that immigrant will be referred to a DHS asylum officer for a credible fear determination. If the asylum officer determines that the immigrant has not established a credible fear of persecution or torture and a supervisory asylum officer concurs, the immigrant may request review of that determination by an immigration judge. That review must be concluded as expeditiously as possible, to the maximum extent practicable within 24 hours, but in no event later than 7 days after the date of the determination by the supervisory asylum officer. No appeal to the Board of Immigration Appeals may be taken from the immigration judge's decision finding no credible fear of persecution or torture. If the immigration judge determines that the immigrant has a credible fear of persecution or torture, the immigrant will be placed in removal proceedings to apply for asylum. A determination and order arrived at after consideration of facts and law, by an immigration judge. An asylum application initially filed with the immigration court after the immigrant has been put into proceedings to remove him or her from the United States. Detained immigrants are those in the custody of DHS or other entities. Immigration court hearings for detained immigrants are conducted in DHS Service Processing Centers, contract detention facilities, state and local government jails, and Bureau of Prisons' institutions. Asylum regulations implemented in 1995 mandated that asylum applications be processed within 180 days after filing either at a DHS U.S. Citizenship and Immigration Services Asylum Office or at an immigration court. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 reiterated the 180-day rule.
Consequently, expedited processing of asylum applications occurs when (1) an immigrant files "affirmatively" at an Asylum Office on or after January 4, 1995, and the application is referred to the EOIR by DHS within 75 days of the filing; or (2) an immigrant files an application "defensively" with EOIR on or after January 4, 1995. A filing occurs with the actual receipt of a document by the appropriate immigration court. An immigration judge is an attorney whom the Attorney General appoints as an administrative judge within EOIR, qualified to conduct specified classes of proceedings, including exclusion, deportation, removal, asylum, bond redetermination, rescission, withholding, credible fear, reasonable fear, and claimed status review. Immigration judges act as independent decision makers in deciding the matters before them. Immigration judge decisions are administratively final unless appealed or certified to the Board of Immigration Appeals, or if the period by which to file an appeal lapses. A Latin phrase meaning "in the absence of." An in absentia hearing occurs when an immigrant fails to appear for a hearing and the immigration judge conducts the hearing without the immigrant present and orders the immigrant removed from the United States. An immigration judge is to order removed in absentia any immigrant who, after written notice of the time and place of proceedings and the consequences of failing to appear, fails to appear at his or her removal proceeding. The DHS must establish by clear, unequivocal, and convincing evidence that the written notice was provided and that the immigrant is removable. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 replaced the term "excludable" with the term "inadmissible." Section 212 of the Immigration and Nationality Act defines classes of immigrants ineligible to receive visas and ineligible for admission.
Immigrants who, at the time of entry, are within one of these classes of inadmissible immigrants are removable. The hearing in which the government must prove the charges alleged in the charging document. The immigrant also is able to present his or her case to the immigration judge with witnesses and persuade the immigration judge to use his or her discretion and allow the immigrant to remain in the United States (if such relief exists). The Immigration Reform and Control Act of 1986 requires the Attorney General to expeditiously commence immigration proceedings for immigrant inmates convicted of crimes in the United States. To meet this requirement, the Department of Justice established the Institutional Hearing Program where removal hearings are held inside correctional institutions prior to the immigrant completing his or her criminal sentence. The Institutional Hearing Program is a collaborative effort between EOIR and DHS and various federal, state, and local corrections agencies throughout the country. A preliminary hearing held to review the charges in the charging document before an immigration judge. The immigration judge explains the immigrant's rights (e.g., the immigrant's right to an attorney) and asks if the immigrant agrees with or denies the charges as alleged by DHS in the charging document. The immigration judge determines if the immigrant is eligible for any form(s) of relief, and sets a date for the individual merits hearing. A motion is a formal request from either party (the immigrant or DHS) in proceedings before the immigration court, to carry out an action or make a decision. Motions include, for example, motions for change of venue, motions for continuance, motions to terminate proceedings, etc. Immigrants may request, by motion, the reconsideration of a case previously heard by an immigration judge.
A motion to reconsider either identifies an error in law or fact in a prior proceeding or identifies a change in law and asks the immigration judge to re-examine his or her ruling. A motion to reconsider is based on the existing record and does not seek to introduce new facts or evidence. Either party makes a formal request before the immigration court to reopen the case. The status of an immigrant who is not in the custody of DHS or the Institutional Hearing Program. The document (Form I-862) used by DHS to charge an immigrant with being removable from the United States. Jurisdiction vests and proceedings commence when a Notice to Appear is filed with an immigration court by DHS. The legal process conducted before the immigration court. In hearings before an immigration judge, an immigrant may be able to seek relief from removal. Various types of relief may be sought, including asylum, withholding of removal, protection under the Convention Against Torture, cancellation of removal, or adjustment of status. Many forms of relief require the immigrant to fill out an appropriate application. An immigration court proceeding begun on or after April 1, 1997, seeking to either stop certain immigrants from being admitted to the United States or to remove them from the United States. A removal case usually arises when DHS alleges that an immigrant is inadmissible to the United States, has entered the country illegally by crossing the border without being inspected by an immigration officer, or has violated the terms of his or her admission. The DHS issues a charging document called a Notice to Appear and files it with an immigration court to begin a removal proceeding. An immigrant agrees to depart from the United States without an order of removal. The departure may or may not have been preceded by a hearing before an immigration judge.
An immigrant allowed to voluntarily depart concedes removability but is not barred from seeking admission at a port of entry in the future. Failure to depart within the time granted results in a fine and a 10-year bar against the immigrant applying for several forms of relief from removal. In addition to the contact named above, Eric Bachhuber, Frances Cook, Katherine Davis, Evan Gilman, Clarette Kim, Grant Mallie, Katrina Moss, Sandra Tasic, Margaret Vo, and Robert White made key contributions to this report.

Within the Department of Justice’s (DOJ) Executive Office for Immigration Review (EOIR), the Office of the Chief Immigration Judge (OCIJ) is responsible for managing the 53 immigration courts located throughout the United States, where over 200 immigration judges adjudicate individual cases involving alleged immigration law violations. This report addresses: (1) in recent years, what has been the trend in immigration courts’ caseload; (2) how does OCIJ assign and manage the immigration court caseload; and (3) how does EOIR/OCIJ evaluate the immigration courts’ performance? To address these issues, GAO interviewed EOIR officials; reviewed information on caseload trends, caseload management, and court evaluations; and analyzed caseload data, case completion goal data, and OCIJ court evaluation reports. From fiscal years 2000 to 2005, despite an increase in the number of immigration judges, the number of new cases filed in immigration courts outpaced cases completed. During this period, while the number of on-board judges increased about 3 percent, the courts’ caseload climbed about 39 percent, from about 381,000 cases to about 531,000 cases. The number of completed cases increased about 37 percent, while newly filed cases grew about 44 percent. EOIR attributes this growth in part to enhanced border enforcement activities.
The courts reduced the number of proceedings awaiting adjudication for more than 4 years, but did not meet their goal to complete all proceedings more than 3 years old by December 31, 2005. OCIJ relies primarily on an automated system to assign cases to immigration judges within a court. To balance the judges’ caseload, OCIJ considers the number of newly filed cases and cases awaiting adjudication from prior years, historical data, and the type and complexity of cases. To manage its growing caseload, OCIJ, among other means, details judges from their assigned court to a court in need of assistance and uses available technology such as video conferencing. According to OCIJ, if it recognizes a pattern of sustained need, it recommends that EOIR establish a court in a new location. EOIR evaluates the performance of the immigration courts based on the immigration courts’ success in meeting case completion goals. GAO’s review of EOIR’s quarterly reports on these goals identified a recurring inconsistency between reports as well as other inconsistencies. EOIR explained that these inconsistencies were due to a variety of factors, including the exemption of different categories of cases from the goals in different quarters, delays in data entry, and programming errors in the calculation of the data. Because EOIR has changed its criteria for cases covered by these goals and only maintained the queries for its current reporting process, GAO could not replicate past case completion reports to determine their accuracy. The inconsistencies indicate that EOIR should maintain appropriate documentation to demonstrate the reports’ accuracy.
The importance and potential vulnerability of our nation’s ports are well documented. National ports and waterways are responsible for moving over 99 percent of the volume of overseas cargo, with over $5.5 billion worth of goods moving in and out of U.S. ports every day, according to the American Association of Port Authorities. With ports handling more than half of the crude oil and all of the liquefied natural gas used in the country in 2005, any disruption in the flow of commerce could have major economic consequences. As vital as ports are to the country, they are susceptible to terrorist acts due to their size and openness: they are easily accessible by water and land, and they are attractive targets given the proximity of many ports to urban areas and their collocation with power plants, oil refineries, and other energy facilities. Efforts to address port vulnerabilities face the challenge of having to consider the impact that an increase in security may have on the operation of commerce and the impact on maritime facility operators of costly security requirements. Particularly with “just in time” deliveries, which rely on the quick movement of goods, steps added to the process to increase security may have economic consequences. Actions to improve security are undertaken with the knowledge that total security cannot be bought, no matter how much is spent, because of the difficulty of anticipating and addressing all security concerns. MTSA established a framework to help protect the nation’s ports and waterways from terrorist attacks by mandating a wide range of security improvements. Among the major requirements included in MTSA were those related to facilities located in, on, under, or adjacent to waters subject to the jurisdiction of the United States that the Secretary of DHS believes may be involved in a transportation security incident. MTSA and Coast Guard implementing regulations currently establish requirements for the owners and operators of about 3,200 select port facilities.
In general, facilities that receive vessels that carry large or hazardous cargo, vessels subject to international maritime security standards, selected barges, and passenger vessels certified to carry more than 150 passengers are subject to MTSA regulations. Owners or operators of facilities subject to MTSA regulations (MTSA facilities) were required, among other things, to designate a Facility Security Officer (FSO), ensure that a facility security risk assessment was conducted, and ensure that a facility security plan was approved and implemented. The basic aim of such plans is to develop measures to mitigate potential vulnerabilities that could otherwise be exploited to kill people, cause environmental damage, or disrupt transportation systems and the economy. Facility Security Plans (FSP) encompass a range of security activities, such as access controls and security training, to prevent a security incident. MTSA and its regulations set out requirements that are performance-based rather than requiring specific procedures or equipment, thus allowing flexibility for meeting the law’s requirements. For example, a facility’s plan must include measures to control access to the facility, but how access should be specifically controlled is not mandated by MTSA or its implementing regulations. The Coast Guard is largely responsible for administering MTSA requirements. For facilities, in addition to issuing regulations, the Coast Guard is responsible for reviewing and approving facility security plans, ensuring that facilities implement the plans, verifying that facilities continue to adhere to their plans, and re-approving facility security plans periodically (Coast Guard regulations established the plans as valid for 5 years). The Coast Guard reported that security plans required for over 3,000 MTSA facilities as of July 1, 2004, were approved, and that it had verified that these plans were in place by December 31, 2004.
With the 5-year approval of facility security plans complete, the focus shifted to ensuring continued compliance with security measures that have been implemented. We reviewed the Coast Guard’s early MTSA implementation and identified short- and long-term challenges to the Coast Guard’s May 2004 strategy for monitoring and overseeing security plan implementation. Key concerns were how the Coast Guard planned to ensure that enough inspectors were available, that they would have a training program sufficient to overcome major differences in inspector experience levels, and that inspectors would be equipped with adequate guidance to help conduct thorough, consistent reviews. Further, we reported that the Coast Guard faced the challenge of ensuring that owners and operators continue implementing their plans and do not mask security problems by presenting conditions that do not represent the normal course of business. In this regard, our work has shown that there are options the Coast Guard could consider beyond regularly scheduled visits, such as unscheduled, unannounced visits and covert testing. We recommended that the Coast Guard evaluate its initial compliance efforts and use the information to strengthen the compliance process for its long-term strategy. Coast Guard activities related to MTSA facility security plan approval and facility oversight are captured in the Coast Guard’s MISLE database. MISLE began operating in December 2001 as the Coast Guard’s primary data system for documenting marine safety and environmental protection activities. Data on MTSA facility oversight and on other Coast Guard activities, such as vessel boardings and incident response, have since been added. The purpose of MISLE is to provide the capability to collect, maintain, and retrieve information necessary for the administration, management, and documentation of Coast Guard activities.
Data on facilities are entered by inspectors on an intranet website using dropdown menus and narrative fields related to a specific compliance activity. The information maintained in MISLE is varied, as shown by the entry screen reproduced in figure 1. Limitations in the Coast Guard’s compliance database preclude it from documenting whether all facilities received an annual exam each year. Coast Guard officials said field units report that they are meeting their inspection requirements, but inspections may not be documented in the compliance database, or inspections may have been delayed by staff being diverted to meet higher-priority needs. The available data indicate that the Coast Guard also conducted many spot checks, but prior to the SAFE Port Act’s requirement for an annual unannounced inspection of each facility, these spot checks were concentrated in about one-third of regulated facilities. The types of deficiencies identified most often during annual exams and spot checks fell into five main categories, with the top two categories—not adhering to facility plans regarding access controls (such as gates and fences) and lack of documentation (such as no record of drills)—accounting for over a third of deficiencies. Relatively few facilities in the Coast Guard sectors we visited had many or substantial deficiencies, and Coast Guard officials provided anecdotal evidence that security had generally improved over time. The Coast Guard sectors varied in the extent to which they resolved deficiencies using formal enforcement actions such as written warnings or fines, although overall over 80 percent of deficiencies were resolved without formal actions. Coast Guard officials at headquarters and at the sectors we visited reported that MTSA facilities subject to maritime facility inspection requirements were being inspected.
At sectors we visited, Coast Guard officials based this assessment on data from MISLE supplemented by knowledge of facilities under their jurisdiction. Sector officials, like headquarters officials, cannot use MISLE to identify all facilities that were subject to inspection because of flaws in the MISLE database. Some sectors mentioned that they also maintained local spreadsheets documenting exams. Headquarters officials said that they based their assessment on information requested from field units regarding whether the units were meeting annual exam requirements, although they acknowledged that there were some situations in which annual inspections might not have been conducted within the year. Reasons these officials and others cited for some facilities possibly not receiving an exam during 2006 included the following. Inspectors were diverted to a higher-priority mission: officials said that activities conducted after Hurricanes Rita and Katrina disrupted inspection activities in the areas affected by the hurricanes and diverted Coast Guard resources from other regions. In the Upper Mississippi River sector, officials similarly reported inspectors being detailed to respond to floods in North Dakota. One inspector said it took an additional 6 months to complete the on-the-job training needed to be certified as an inspector because of the time she spent detailed away from the sector. MISLE data may not reflect all the annual exams that were conducted. For example, officials said that an annual compliance exam could have been conducted while inspectors conducted a pollution inspection, but the activity was only entered as a pollution inspection. No information was available to identify annual exams conducted but not recorded.
Definitive information about the extent to which all facilities were inspected is not available, because the Coast Guard’s MISLE database does not have the capability to document the extent to which MTSA facilities received an annual inspection for a particular year. The database can identify which facilities received annual exams in a particular year, but it cannot identify those facilities that did not receive exams but should have. Our analysis of MISLE data on the number of exams reported, however, indicates the total is less than the number of facilities the Coast Guard believes it is regulating. The Coast Guard estimates the number of MTSA facilities at about 3,200 nationwide, based on the number of facility security plans currently approved. Our analysis of MISLE data indicated 2,126 facilities received exams during 2006. Coast Guard data show that prior to the SAFE Port Act’s requirement that each facility receive an unannounced inspection, Coast Guard units were conducting unannounced spot checks, but not at every facility. MISLE data indicate the Coast Guard conducted about 4,500 spot checks in 2006, covering about 1,200 facilities. The pattern was similar in 2005, the first full year of facility oversight (see fig. 2). The SAFE Port Act’s requirement for each facility to receive two inspections was not effective until October 2006. Coast Guard officials said that, prior to the SAFE Port Act’s new unannounced inspection requirement, units used a combination of risk and convenience to decide which facilities should receive spot checks. As a result, some facilities received a number of checks in a year’s time, while others received none. For example, Coast Guard officials at two sectors said if inspectors are frequently at a facility to examine arriving vessels, they also have an opportunity to conduct a spot check of the facility’s security measures. 
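The documentation gap noted above can be stated as simple arithmetic. The sketch below is illustrative only: the two counts (about 3,200 regulated facilities and 2,126 facilities with recorded 2006 exams) are the figures reported in this section, and the calculation is ours, not the Coast Guard's.

```python
# Illustrative sketch of the exam documentation gap; the input figures come
# from the text of this section, and the calculation is ours.
regulated_facilities = 3200    # Coast Guard estimate, based on approved security plans
facilities_with_exams = 2126   # facilities with a 2006 exam recorded in MISLE

undocumented = regulated_facilities - facilities_with_exams
share = 100 * undocumented / regulated_facilities
print(undocumented)        # → 1074 facilities with no documented 2006 exam
print(round(share, 1))     # → 33.6 (percent of regulated facilities)
```

Because MISLE cannot list the facilities that should have been examined but were not, a count like `undocumented` only bounds the gap if the two input figures are themselves reliable, which is the report's underlying point.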
Several sectors we visited mentioned that they had a goal, even before the new requirement took effect, of spot checking every facility, but officials at these sectors said the risk-based approach took precedence, leading to numerous checks at facilities with higher risk. Given the resources provided in DHS fiscal year 2007 appropriations, related Coast Guard allocations, and the number of spot checks conducted in prior years, Coast Guard officials said they expect sectors to meet—and likely exceed—the spot-check requirements. At sectors we visited where additional staffing resources (temporary reservists and permanent staff) were in place, local officials generally agreed with this assessment. At a sector that did not receive additional permanent staff, however, officials said they were still determining how to meet the SAFE Port Act inspection requirements after temporary staff were gone. The Coast Guard identified deficiencies in about one-third of the facilities inspected in 2004-2006, with deficiencies concentrated in a subset of five deficiency categories, for example, failing to follow facility security plans for access control. Facilities with many or substantial deficiencies were relatively few in number, and deficiencies were identified during both annual exams and spot checks. The extent to which formal enforcement actions were used was limited nationally, but varied greatly among Coast Guard sectors. The majority of deficiencies were addressed by the Coast Guard informally, without formal enforcement actions. Thirty-six percent of the facilities that the Coast Guard documented as receiving an annual compliance exam or a spot check in 2006 had at least one reported deficiency, according to our analysis of information in MISLE. The previous 2 years were similar, with rates of 30 percent each year. These figures may not include security weaknesses that are corrected on the spot.
Headquarters and sector officials told us that, in keeping with Coast Guard policy allowing the practice, inspectors may choose not to record such deficiencies. For example, a facility security officer at one oil facility said the Coast Guard gave him a verbal warning about failing to display credentials at entrance gates and about the need to maintain better documentation of security drills conducted at the facility. Similarly, the security officer at a gypsum facility said inspectors had suggested more creativity in crafting facility exercise scenarios (which the facility official said he would try to do) but had not recorded a deficiency. About 70 percent of the 2,500 reported deficiencies identified in 2006 occurred in five categories: access control (such as fences or gates needing repair), recordkeeping requirements, security for restricted areas (such as not posting required signs), drill and exercise requirements, and facility security plan amendments and audits (for example, failing to get approval for changing a security measure or failing to conduct a required facility security audit). As figure 3 indicates, the two top categories, with over one-third of the deficiencies, were access control and facility recordkeeping requirements. Access and documentation were also the most common types of deficiencies at the sectors we visited. Table 1 provides examples of deficiencies in these two categories from the sectors we visited. As the examples illustrate, each category can include a variety of violations. Similar deficiencies were reported by officials at facilities we visited within the seven sectors. Examples included not constructing a new fence after a tornado; not screening vehicles, persons, and personal effects; leaving a gate unlocked; not completing exercise requirements; and lack of timeliness in documenting training.
Our visits to facilities in the seven sectors also disclosed instances in which a regulated facility’s access controls would not prohibit access from a neighboring facility. We observed four instances in which a neighboring facility’s building or stacked-up materials would facilitate entry over a regulated facility’s perimeter fencing. Figure 4 shows one of those instances. After we pointed out these weaknesses to Coast Guard officials, they assured us that the weaknesses would be corrected. Coast Guard officials told us that any vulnerabilities introduced by neighboring facilities (whether the neighboring facility is a MTSA facility or not) should be identified in a facility’s vulnerability assessment, then addressed in a facility’s security plan. While about one-third of all facilities had at least one deficiency identified and recorded during an annual inspection or spot check, deficiencies in the seven sectors we visited tended to be concentrated in relatively few facilities. According to MISLE data, five or fewer facilities accounted for an average of 61 percent of deficiencies in six of the seven sectors we visited, and 10 or fewer facilities accounted for an average of 80 percent. One facility that receives passenger vessels in one sector we visited was cited for 12 deficiencies during its annual compliance exam. This facility’s deficiencies related primarily to (1) lack of knowledge about security procedures or equipment on the part of the security officer or other personnel and (2) failure to conduct or document security drills and exercises. Coast Guard officials at the sectors we visited said they thought security awareness and procedures had improved in the years since MTSA’s inception. Atlantic Area Coast Guard officials cited MTSA as making a difference in reducing cargo loss as increased security procedures lower theft rates. 
Officials cited qualitative changes such as the following: facilities taking more ownership of their own security and being more aware of security concerns; fewer trespassers on waterfront property and increased security awareness among maritime workers; a decrease in vandalism as a result of additional cameras in port areas; more informed security personnel; and improved communication with facilities regarding break-ins. Our analysis of the top deficiencies included in the Coast Guard’s database showed that Coast Guard inspectors identified deficiencies both in spot checks and in annual exams, but spot checks tended to identify deficiencies related to access control and control over restricted areas. As table 2 shows, spot checks accounted for 44 percent of all recorded access control deficiencies and 19 percent of restricted area deficiencies, but no more than 9 percent of the other most common categories of deficiencies—drills, recordkeeping, and plan amendment/audits. This may occur because spot checks are sometimes conducted external to the facility and do not involve checking records, drills, or plans. We attempted to compare deficiencies identified during announced and unannounced annual compliance exams, but until July 2007, activities in the database were not required to indicate whether an exam was announced or unannounced. Headquarters officials acknowledged that there is variation in whether sectors conduct these exams announced or unannounced, but could not provide information for all sectors that would allow a comparison. Furthermore, the Coast Guard has not assessed the effectiveness of each approach to establish whether one is more effective in identifying deficiencies. Inspectors told us they generally use Coast Guard guidance in deciding whether to issue some form of formal enforcement action, taking into consideration the facility’s deficiency history and the risk associated with the violation.
Several Coast Guard sector officials said the Coast Guard prefers to work cooperatively with facilities to improve security procedures, instead of taking an adversarial or punitive approach. They said they often give facilities several weeks in which to fix a deficiency, instead of issuing an immediate enforcement action. Most often, a formal enforcement action, such as issuing a letter of warning, a notice of violation, or a civil penalty such as a fine, is not applied. Our analysis of MISLE data indicates that inspectors took one of these formal actions in about 11 percent of recorded deficiencies in 2004, 19 percent in 2005, and 16 percent in 2006. Table 3 shows what types of enforcement actions were recorded for the top five deficiencies in 2006 and a total for all deficiencies in 2006. Based on MISLE data, of the top five deficiencies, access control was most likely to result in an enforcement action. For this type of deficiency, formal action occurred 25 percent of the time. Our analysis of MISLE data shows sectors varied in the extent to which enforcement actions were taken. Coast Guard officials said that sector management is given discretion to use or not use enforcement actions as it deems appropriate, and our comparison of the Coast Guard’s use of enforcement actions for the top five nationwide deficiencies in the sectors we visited in 2006 illustrates this variation. Even when the same deficiency was recorded, the sectors we visited varied greatly in whether they issued an enforcement action. For example, the first sector shown in the table took no enforcement actions, while the second sector used enforcement actions in each of the five deficiency categories. Our analysis could not determine the reasons for these differences, such as whether the variations reflect different circumstances faced by sectors, nor could Coast Guard officials explain the differences.
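The kind of rate calculation underlying these enforcement figures can be sketched as follows. The records below are invented for illustration (only the category names follow the text), so the printed rates are not the Coast Guard's actual rates.

```python
from collections import defaultdict

# Hypothetical MISLE-style deficiency records: (category, formal_action_taken).
# The data are invented; only the category names come from the report text.
records = [
    ("access control", True),
    ("access control", False),
    ("access control", False),
    ("recordkeeping", False),
    ("recordkeeping", False),
    ("restricted areas", True),
    ("restricted areas", False),
]

totals = defaultdict(int)   # deficiencies recorded per category
formal = defaultdict(int)   # deficiencies resolved with a formal action

for category, action_taken in records:
    totals[category] += 1
    if action_taken:
        formal[category] += 1

# Share of each category's deficiencies that drew a formal enforcement action.
rates = {category: formal[category] / totals[category] for category in totals}
for category in sorted(rates):
    print(f"{category}: {rates[category]:.0%}")
```

A per-sector version of the same calculation (grouping on a sector field as well as a category field) would show the sector-to-sector variation the report describes.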
The Coast Guard’s assessments of the number of inspectors needed to meet facility inspection requirements were based on limited data, and since these assessments were conducted, additional factors have arisen that could also affect the number of inspectors needed. The original assessment for meeting MTSA requirements and the subsequent assessment for meeting additional SAFE Port Act requirements were both estimates that were based on limited information, and the Coast Guard has not assessed their reliability. Moreover, our field visits identified two factors that could affect the estimates. One is that persons in inspector positions have other responsibilities that may compete with conducting inspections, so that the amount of time available for inspections may be less than expected. The Coast Guard does not have data on what portion of inspectors’ time is actually available for conducting inspections. The second factor is that recently issued guidance for conducting unannounced spot checks may require inspectors in some locations to spend more time conducting these spot checks than they had spent in the past. Coast Guard officials do not know what the effect of the new spot check requirements will be on resources needed. Although Coast Guard officials said the number of Coast Guard inspectors is adequate, their basis for determining the number of inspectors needed, both for the initial implementation of MTSA and to meet SAFE Port Act inspection requirements, was limited in several respects. When we reviewed the approach the Coast Guard used to project staff needed for meeting MTSA inspection requirements, we found the Coast Guard did not have a great deal of workload data to use in estimating the additional staff needed, nor did it have a system in place for determining how much time its personnel are spending on specific duties. 
The Coast Guard told us it established its estimates for the number of inspectors needed using working groups, panels, and available data, including information about resources used in port security missions since the September 11, 2001, terrorist attacks. The estimates were also based on experience with environmental and safety inspections, but whether those types of inspections were analogous was unclear. Further, the Coast Guard could not provide documentation of the approach it used, limiting its ability to assess the adequacy of its decision. We determined that the Coast Guard had a basis for its estimate, but also that its approach stopped short of providing demonstrable evidence of its validity. The Coast Guard did not assess how reliable this estimate was in meeting inspection needs, but officials noted that sector officials could provide headquarters with feedback on their needs and request additional staff. The approach the Coast Guard used for estimating the number of additional inspectors needed to meet SAFE Port Act requirements had similar limitations. Coast Guard officials said they also used a general formula to request funding for personnel to conduct these additional inspections. They said they had limited time to prepare the request, and estimated the number needed based on past experience by looking at the number of inspections currently being conducted and the current number of inspectors, plus input from Coast Guard area officials. An additional 39 positions were added with resources stemming from DHS fiscal year 2007 appropriations. Other than field unit feedback, Coast Guard officials do not currently have a means for determining whether the deployment of staff to inspection positions is sufficient. In 2004 we recommended that the Coast Guard formally evaluate its facility inspection program to look at the adequacy of security inspection staffing, among other things; however, the Coast Guard has not done so.
Officials discussed using an existing management tool, in combination with revised training requirements and staffing standards to be developed in the future, as a way to measure the adequacy of staffing for specific mission areas, but as yet had no estimated date for completion of this effort. One factor that may affect the accuracy of the estimates is that inspectors are also responsible for a variety of other duties, and the extent to which these inspectors are available to conduct security inspections is unclear. Coast Guard data indicate that about 600 personnel have been qualified to conduct MTSA facility inspections. Officials said that as of August 2007 the Coast Guard had 389 MTSA positions, including the 39 new positions added with resources stemming from DHS fiscal year 2007 appropriations for unannounced spot checks, and most of the positions were filled. Besides these personnel, a July 2007 Commandant message indicated that Coast Guard districts were authorized to use reservists on a short-term basis to meet inspection requirements. In all, 52 reservist positions were authorized for this purpose. Our field visits showed that staff assigned to inspector positions were not necessarily working as inspectors, and those who were conducting inspections were performing a number of other mission tasks as well. Data on the extent to which personnel in inspector positions are actually conducting facility inspections are not available. Coast Guard headquarters officials said it was difficult to know the extent to which an inspector was inspecting MTSA facilities because of the flexibility in how staff are used. Each sector, they said, determines what is needed for its workload. In all seven sectors we visited, staff in inspector positions were responsible for tasks other than facility inspections.
Other tasks included responding to pollution incidents, supervising the handling of explosive cargo, monitoring the transfer of oil, conducting harbor patrols, boarding vessels, and conducting inspections of vessels or other matters, such as safety or environmental concerns (see fig. 5). At four of the seven sectors we visited, officials said meeting all mission requirements for which inspectors were responsible was or could be a challenge, especially after reservists made available for SAFE Port Act inspections were no longer available. Officials in one sector said they were meeting inspection requirements at the expense of other missions, such as inspecting containers or monitoring the transfer of oil. They said they make a risk-based judgment call on which activities to undertake. In another sector, officials said meeting inspection requirements in the long term would be difficult. The new inspection requirements effectively doubled the required number of facility inspections, and the sector has received only short-term assistance. Officials in another sector said available staffing could adequately cover only part of the sector’s area of responsibility. In another sector, officials said depending on the long-term workload, they may be seeking additional inspectors later this year, after temporary duty staff has left. A second factor that may affect the reliability of the estimates is that the Coast Guard based its estimate for the number of inspectors needed in part on the number of spot checks conducted in the past, but subsequent spot check guidance may require inspectors to spend more time on these spot checks than they had previously. After the SAFE Port Act’s passage, Coast Guard officials initially said they did not plan to issue specific guidance for spot checks, because developing a single inspection form that encompassed all situations was difficult and because they had not heard from Captains of the Port that such guidance was needed. 
In July 2007, however, the Coast Guard Commandant issued a message to Coast Guard Area officials that provided some spot check guidance. Among other things, this guidance:

Defines minimum requirements for security spot checks—for example, specifying that the inspector must confirm that the facility is compliant with unique requirements for specific types of facilities (such as cruise ships) and must provide the facility with documentation of the inspection.

Identifies activities that do not meet the requirements for a security spot check, such as inspections from a vehicle or checks conducted while performing certain shoreside patrols or facility visits related to vessel boardings (unless the minimum security spot check requirements are met during the patrols or boardings).

Specifies codes for documenting facility inspections in the MISLE database.

Our discussions with sector officials indicated that prior to this guidance, sectors varied considerably in their interpretation of what constituted a security spot check. For example, one sector considered posing 15 to 30 minutes of knowledge-based questions to facility officials to be a spot check, while another considered a drive-by with a stop at the gate to be a type of spot check. Officials in several sectors mentioned that spot checks were conducted during other types of facility visits or missions, such as while escorting a boat, conducting a waterside patrol, or performing a vessel inspection. For documentation, one sector reported entering a record of all spot checks conducted, while several others qualified that only "official" spot checks were logged—a drive-by or dropping in to check on a few items might not be recorded. One sector said whether a check was recorded depended partly on whether a deficiency was identified during the spot check. The activities called for in this guidance have potential staffing implications. 
Based on our discussions with headquarters officials and inspectors in all sectors we visited, some of the activities that have been considered spot checks will no longer be considered adequate, such as observing facility security procedures from a vehicle while driving by. Meeting the spot check requirements under the new guidance may thus require more time from inspectors. This in turn may affect sector estimates of the level of resources needed to meet inspection requirements and Coast Guard goals for the number of inspections to be conducted. In commenting on a draft of this report, Coast Guard officials reported that a total of 9,403 inspections (spot checks and annual exams) were conducted in 2007, exceeding the internal target of 8,800 inspections. This is an increase in inspections from prior years. Their comments, however, did not indicate whether each facility received both a spot check and an annual exam. Further, since the spot check guidance was not issued until July 2007, it is not clear how many of the spot checks were conducted following the new guidance. Without this information, the implications for staffing remain uncertain. The Coast Guard has not assessed how its MTSA compliance inspection program is working. Our work across many types of federal programs shows that for program planning and performance management to be effective, federal managers need to use performance information to identify performance problems and look for solutions, develop approaches that improve results, and make other important management decisions. The Coast Guard's ability to assess its compliance program is complicated by omissions, duplications, and other flaws in the data it would most likely use in measuring and evaluating the effectiveness of different monitoring and oversight approaches. 
In 2004, when we first examined the Coast Guard's efforts to deal with MTSA requirements, we reported that development of a sound long-term strategy was a critical step in bringing about effective monitoring and oversight. Our work assessing such other areas as airport security and regulatory compliance had identified approaches for ensuring compliance and strengthening security. These approaches included such steps as unscheduled and unannounced inspections, and inspections on weekends or after normal working hours. At the time, local Coast Guard officials said that unscheduled inspections would be a positive component of a longer-term strategy because informing owners or operators of annual inspections can allow them to mask security problems by preparing for inspections in ways that do not represent the normal course of business. We recommended that, after the initial "surge" involved in reviewing security plans and conducting the first round of inspections, the Coast Guard should conduct a formal evaluation of its efforts and use the evaluation as a means to strengthen the compliance process for the longer term. In the 1990s, a statutory management framework for strengthening government performance and accountability was enacted into law. In particular, the Government Performance and Results Act (Results Act) calls for an increased reliance upon program performance information in assessing program efficiency and effectiveness. The Results Act notes that federal managers are seriously disadvantaged in their efforts to improve program efficiency and effectiveness because of insufficient articulation of program goals and inadequate information on program performance, and that spending decisions and program oversight are seriously handicapped by insufficient attention to program performance and results. 
Although the Results Act's provisions apply primarily to tracking and reporting performance at the overall agency level, the same sound management principles apply to management of individual programs such as the facility compliance program. In other work, we have identified instances in which agencies can use performance information to improve programs and results. In many of its areas of activity, the Coast Guard has devoted extensive attention to providing sound data on its activities and analyzing what these data say about what the agency is accomplishing with the resources it expends. In 2006, for example, we reported that for many of its non-homeland security programs, the Coast Guard had developed performance measures that were generally sound and based on reliable data. Further, the Coast Guard was actively engaged in initiatives to help interpret these performance measures and use them to link resources to program results. The Coast Guard has not, however, applied this same approach to the facility compliance program. Although the Coast Guard agreed with our recommendation in 2004 that the agency formally evaluate its MTSA compliance inspection efforts and use the results as a means to strengthen its long-term strategy for ensuring facility compliance, it has not conducted such an evaluation, and has no current plans to do so. In comments submitted after reviewing a draft of this report, the Coast Guard indicated that facility security program metrics were discussed during a November 2007 workshop with field personnel. The comments also indicated that the Coast Guard is developing performance goals for monthly review by program management. We asked the Coast Guard to provide documentation of any systematic effort to assess implementation of its facility compliance program since July 2004, when the agency initiated the compliance phase of MTSA facility oversight. 
Headquarters officials told us that program managers use MISLE to see the results of inspectors' data entries and to produce reports, but the Coast Guard's only formal analysis of the overall success of MTSA implementation was contained in its Annual Report to Congress. The information the 2005 and 2006 reports provide, which includes figures on the number of enforcement actions and the approximate number of facility security inspections the Coast Guard conducted (included in the 2005 report only), does not include an analysis of the program's operations or provide a basis to determine what, if anything, might be done to improve its operations. The program metrics and performance goals the Coast Guard indicated it is developing may provide data useful for future assessments. A more thorough evaluation of the facility compliance program could provide information on, for example, the variations we identified between Coast Guard units in oversight approaches, the advantages and disadvantages of each approach, and whether some approaches work better than others. The Coast Guard has allowed Captains of the Port considerable discretion in implementing the facility oversight program at the local level, in order to meet differences in local conditions. An evaluation could also explore the benefits of the variations that have resulted. For example, an evaluation could shed light on such issues as the following: Conducting annual compliance exams unannounced vs. scheduling them beforehand. Views we heard from different Coast Guard units varied on this issue. Coast Guard policy has encouraged the pre-scheduling of these exams, but some units have decided to conduct them on an unannounced basis because they believe doing so best captures what procedures are normally in place. 
At some units that scheduled the exams with the facility beforehand, however, Coast Guard officials said conducting exams unannounced would slow the process, because facility personnel would be less prepared with information and because officials with the needed information might be absent entirely. In such situations, delays might affect the unit's ability to complete its inspection workload. An evaluation, done with accurate and sufficient data, could provide information on the effectiveness of the various approaches. The type of enforcement action to take when deficiencies are identified. The available data indicate that Coast Guard units vary considerably in the extent to which they take formal enforcement actions, such as fines or written warnings. Headquarters officials told us that they could not explain the variation or its impact on continued facility compliance, but that units were allowed to determine actions taken based on the factors involved. These variations might occur for several reasons. Inspectors in sectors we visited told us they rely on Coast Guard guidance and take other factors into consideration, such as the nature of the deficiency or the history of the facility. They said that the decision on what enforcement action is taken depends in part on guidance from the sector's Captain of the Port and on the judgment of the inspector as to the severity of the incident. For example, an inspector has discretion to issue a facility a fine or written warning at a high-volume port, where the consequences of an incident are high, or to take no formal action at a low-volume port, where facilities are dispersed and the consequences are less severe. An evaluation, done with accurate and sufficient data, could analyze such differences as possible criteria for deciding when formal or informal actions are most appropriate. Variation in establishing the applicable MTSA regulation for a specific deficiency. 
We observed situations in which different inspectors cited different MTSA regulations for the same type of deficiency. For example, deficiencies in which security personnel lacked required training were classified in two different ways: sometimes as noncompliance with the regulation requiring security personnel to be knowledgeable of security-related areas, such as screening, and other times as noncompliance with regulations related to the security officer's responsibilities. Similarly, failure to log a drill or exercise was sometimes categorized as noncompliance with regulations on drills and exercises and sometimes as a recordkeeping deficiency. An analysis of the differences would help managers determine if sectors have varying interpretations, if additional training is needed for facility inspectors regarding the applicability of the regulations, or if the regulations themselves could be improved. The Coast Guard plans to revise its MTSA regulations by 2009, and such an analysis could be instructive in that effort. We are not the only independent reviewer to point out the need for such an evaluation. In 2006, the Office of Management and Budget (OMB) issued an assessment of Coast Guard performance in meeting goals for the Ports, Waterways and Coastal Security program, which includes MTSA facility oversight. OMB noted that there have been no reviews indicating whether or how the program is achieving results. OMB emphasized the need for the Coast Guard to evaluate the effectiveness of its program, as well as to develop analytical methods and processes that provide routine and objective feedback to program managers. As we have reported in other work, performance information must meet users' needs for completeness, accuracy, and consistency if it is to be useful. Other attributes that affect the usefulness of performance data include that measures be relevant, accessible, and of value to decisions made at various organizational levels. 
In MISLE, however, data and database fields were missing, duplicative, and inconsistent, with data entry a particular concern. Specific problems we identified include the following: Deficiency data may not be entered at all, or may be entered twice, officials said. For example, if a facility corrects a deficiency immediately, inspectors can decide not to include it in their report. On the other hand, Coast Guard data analysts acknowledged that there are duplicate deficiencies and enforcement actions in MISLE, resulting, for example, from the same deficiency being recorded at the sector and subunit levels, or from a lack of coordination in conducting an exam, so that the activities are entered twice. Headquarters officials said that some units are unclear about what to enter into MISLE, and the biggest challenge to consistent and comprehensive data is proper data entry. Although inspectors choose from a standardized pick-list of enforcement action citations, the selection process is subjective, and, as we discussed earlier, a particular violation can fit under multiple citation categories. Headquarters officials said that the citation for a deficiency is not always provided when inspectors enter the activity into MISLE. Not entering this information means that the Coast Guard has difficulty showing data on the basis of specific MTSA regulatory deficiencies or specific enforcement actions. Coast Guard officials voiced varying opinions about whether the deficiency citation is a required field for inspectors to enter in MISLE, as well as about what MISLE fields to use to identify security-related deficiencies and enforcement actions. While the data themselves may pose problems, so too do the data fields into which the data are placed. Insufficient data fields in MISLE make it more difficult for the Coast Guard to conduct critical analyses. We identified two types of analysis that were limited—comparisons across sectors and analysis by year. 
Although the Coast Guard began reorganizing its field units into sectors in 2004 and made sectors the primary management unit, data continue to be entered into MISLE in a form that cannot readily be presented by sector. This limitation makes assessing oversight performance, variability, and facility compliance by sector more difficult. The Coast Guard cannot report the number of facilities it regulated under MTSA during a particular period. Although MISLE contains a field to indicate whether a facility is currently regulated by MTSA, it does not have a field for the facility's activation date. (Vessels regulated under MTSA do have an activation date.) Without it, the Coast Guard cannot establish the number of facilities that have been regulated, and is unable to calculate a percentage of MTSA facilities that received the required annual compliance exam during a particular period. The Coast Guard indicated that this is an area for improvement, but did not identify a specific remedy or time frame. Due to MISLE data limitations, we were not able to recreate annual report statistics provided to Congress on Coast Guard compliance activities. Furthermore, the annual reports did not provide a comprehensive picture of Coast Guard compliance activities. The Coast Guard and Maritime Transportation Act of 2004 mandated an annual report from the Coast Guard on the agency's MTSA compliance-related activities, and so far the agency has issued two reports—one covering part of 2004 and much of 2005 (July 1, 2004 to November 17, 2005), the second covering all of 2006. According to Coast Guard officials, there is no set format for the report, and the type of information reported varies by report. The report for 2004-2005, for example, includes information about the number of annual compliance exams conducted, while the report for 2006 does not. 
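The analytical gap created by the missing activation-date field can be illustrated with a brief sketch. The field names (facility_id, activated) and all values below are hypothetical, since MISLE does not actually contain a facility activation date; the sketch simply shows the calculation that such a field would make possible.

```python
from datetime import date

# Hypothetical facility records; "activated" is the field MISLE lacks for facilities.
facilities = [
    {"facility_id": "F-001", "activated": date(2004, 7, 1)},
    {"facility_id": "F-002", "activated": date(2006, 9, 15)},
    {"facility_id": "F-003", "activated": date(2005, 3, 2)},
]

# Facility IDs with an annual compliance exam recorded in 2006 (hypothetical).
exams_2006 = {"F-001"}

# With an activation date, the denominator -- facilities regulated for all of
# 2006 and therefore due an annual exam -- can be established.
due = [f for f in facilities if f["activated"] <= date(2006, 1, 1)]
examined = [f for f in due if f["facility_id"] in exams_2006]

pct = 100 * len(examined) / len(due)
print(f"{pct:.0f}% of regulated facilities received a 2006 annual exam")  # prints "50% ..."
```

Without the activation date, the list comprehension that builds the denominator cannot be written, which is why the percentage the mandate implies cannot be computed from MISLE as it stands.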
Coast Guard officials said they did not include information about the number of exams conducted in 2006 as part of an effort to reduce the annual report's size. While figures were not provided in the annual report, the Coast Guard agreed that our analysis of MISLE correctly identified 2,126 annual exams recorded for 2006. Using three categories of information (annual exams, spot checks, and enforcement actions) that the Coast Guard reported for one or more of those years, we attempted to tie the numbers in the annual reports to the numbers in the MISLE database. Despite working extensively with Coast Guard personnel to resolve discrepancies, we were unable to fully verify the numbers reported in any of these categories. Figure 6 shows, for the annual compliance exam, the totals for 2004 and 2005 as stated in the annual report and the totals contained in MISLE. For 2004, the total shown in the annual report was about 500 more than the total supported in MISLE, and for 2005, the total shown in the annual report was about 179 less. Coast Guard officials who worked with us to resolve the discrepancies gave several possible reasons for the differences:

The totals in the annual report included a combination of MISLE data and other data reported by officials in field units.

The annual report inspection data could have included some safety-related activities.

Some of the information in MISLE may have changed between the time the Coast Guard used the database to prepare numbers for the annual report and the time the Coast Guard provided the data to us.

We were not able to determine the extent, if any, to which these factors contributed to the discrepancies. The more significant issue, however, is not resolving the effect of these three factors, but rather recognizing the fundamental limitation reflected in being unable to reconcile the numbers in the annual report with the numbers in the database. 
The ability to monitor and oversee a program is limited if officials cannot rely on the accuracy of the information they have at hand. At some sectors we visited, Coast Guard officials voiced similar concerns about having to rely on MISLE data for assessing trends. Inspectors in all seven sectors said they use MISLE to track compliance activities at individual facilities, but several reported that using MISLE to produce accurate aggregated information and trend data for the sector was more difficult. Inspectors in four sectors mentioned creating their own spreadsheets outside MISLE to more easily produce reports on administrative information (such as facility addresses and phone numbers), to check for MISLE report errors, and to track additional information not requested in MISLE. They indicated a variety of ways in which MISLE could be improved for use, including allowing MISLE to capture facility-specific security enhancements and weaknesses and linking MISLE data with information on security vulnerabilities captured by the maritime security risk assessment model. A second concern about the annual report compliance data is its limited scope, which does not provide a complete picture of Coast Guard compliance activities or a relevant context for reviewing them. Annual compliance exams were not reported in 2006, and the number of deficiencies identified by Coast Guard oversight was not included in either the 2005 or 2006 report. Further, the total number of inspections that the Coast Guard conducted is not provided within the context of the total number of facilities regulated, and the number of spot checks is presented without the number of facilities that received the checks. As we pointed out earlier in this report, some of this information, such as the number of facilities subject to MTSA regulation, is not available in MISLE. 
To the degree that relevant information is not available or is difficult to extract, decision makers may not be able to see the Coast Guard’s activities in full or in context. The annual report’s presentation may also under-represent the Coast Guard’s actions in ensuring that facilities comply with security plans. The annual report presents enforcement actions issued, but does not report deficiencies identified. As we discussed earlier in this report, only 16 percent of deficiencies in 2006 resulted in enforcement actions. Since the Coast Guard prefers a strategy of working with facilities to improve facility compliance, rather than a punitive strategy, there are many facility deficiencies that are identified and corrected without an enforcement action, and therefore are not reported in the Annual Report. While enforcement actions generally represent the most severe instances of noncompliance, the extent of the Coast Guard’s activity in identifying deficiencies is not presented. The Coast Guard has acknowledged improvement is needed in MISLE compliance data and has taken initial steps to reduce some of the database concerns identified during the course of our review. Coast Guard officials at all levels we spoke to said problems introduced during data entry to MISLE were a concern. As we were conducting our review, the Coast Guard took some steps to improve the data. In July 2007, in a message to all units about implementing the SAFE Port Act maritime facility inspection requirements, the Commandant mentioned the issue of entering data into MISLE on a timely basis. 
The message states, "To minimize the need for frequent data calls and to ensure an accurate picture of Coast Guard facility inspection performance, sectors must ensure that MISLE data is entered promptly and that the activity, subactivity data, and AOR (area of responsibility) are accurate." The message also details that inspection records should indicate whether annual exams or spot checks were performed on an announced or unannounced basis. During a 3-day Coast Guard workshop on MTSA and the Transportation Worker Identification Credential held in November 2007, MISLE data entry and performance measures were discussed, according to an after-action report of the workshop. No action items were detailed that related to changes in MTSA compliance data. These initial efforts may help to improve MISLE, but they do not address all of the concerns we identified. For example, Coast Guard area officials stated a need for more consistency in how data are entered across violations, noting that while inspection dates are entered reliably, violations are hard to categorize accurately, raising the question of whether the data collected are accurate. The steps announced so far do not involve actions for resolving such inconsistencies. Further, as we pointed out, MISLE contains duplicate records, and information is not always complete. The Coast Guard's initial steps do not include solutions to such problems. Since 2004, the Coast Guard has made progress in shifting the inspection program from one that emphasized putting security procedures in place to one that focuses on continued facility compliance with security procedures. Thus far, the Coast Guard estimates that the number of inspectors has been and will be sufficient to meet inspection requirements, but the multiple roles of many inspectors and the new requirements for spot checks at all facilities could affect the reliability of these estimates. 
Coast Guard officials currently cannot document how much of inspectors' time is spent on the facility enforcement program versus conducting other tasks. New spot check requirements may pose additional workload requirements, not only because spot checks must now be conducted at all facilities, but also because the Coast Guard's recent guidance calls for placing an inspector inside the facility rather than just driving by. Plans for adding an additional 25 staff will help meet these needs, but without considering all factors, the Coast Guard is at additional risk that inspection requirements will not be met. The Coast Guard gives considerable leeway to sectors and local units in deciding how to implement requirements, and as this report has shown, units have gone in somewhat different directions. For example, some have decided to conduct annual compliance exams unannounced, while others announce them in advance, and some use formal enforcement actions such as written warnings or fines, while others do not. The inspection program's growing maturity heightens the importance of being able to determine what it is accomplishing and to assess alternative practices sectors have adopted to ensure facility compliance. Coast Guard headquarters, however, has not evaluated these various approaches to determine which ones produce greater results or yield greater efficiency. Finally, whether establishing that basic inspection requirements are being met, comparing the various approaches used in individual sectors, or evaluating other aspects of the facility compliance program, the Coast Guard is handicapped without complete and accurate compliance data. Coast Guard officials acknowledge these data problems and have initiated some improvements; however, these efforts have not yet remedied all of the problems that have been identified. 
To help ensure that MTSA facility-related inspection requirements are being implemented effectively, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following three actions:

1. Reassess the adequacy of resources for facility inspections, given changing inspection guidance and the multiple duties of sector personnel.

2. Assess the effectiveness of differences in program implementation by sector to identify best practices, including the use of unannounced annual compliance exams and the varying use of enforcement actions.

3. Assess MISLE compliance data, including the completeness of the data, data entry, consistency, and data field problems, and make any changes needed to more effectively utilize MISLE data.

We requested comments on a draft of this report from the Secretary of DHS and from the Coast Guard. The Department declined to provide official written comments to include in our report. However, in an e-mail received January 23, 2008, the DHS liaison stated that DHS concurred with our recommendations. The Coast Guard provided written technical comments, which were incorporated into the report as appropriate. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9610 or at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
This report addresses the Coast Guard's implementation of the Maritime Transportation Security Act of 2002 (MTSA) facility security requirements, as amended by, among other things, the Security and Accountability For Every Port Act (SAFE Port Act). Specifically, our objectives included determining the extent to which the Coast Guard:

has met its maritime facility inspection requirements under MTSA and the SAFE Port Act and has found facilities to be in compliance with their security plans,

has determined the availability of trained personnel to meet current and future facility inspection requirements, and

has assessed the effectiveness of its MTSA facility oversight program and ensured that program compliance data collected and reported are reliable.

To determine whether the Coast Guard has met its inspection requirements and has found facilities to be in compliance with their security plans, we analyzed 2004–2006 compliance activity data from the Coast Guard's Marine Information for Safety and Law Enforcement (MISLE) database. Over a period of 5 months, we requested and obtained data from MISLE to document Coast Guard compliance and enforcement activities related to MTSA facilities from July 1, 2004, the deadline for facilities to be operating under a Coast Guard-approved facility security plan, to December 31, 2006. The Coast Guard extracted three types of data and provided them as spreadsheets:

Inspections: Annual Compliance Exams, Security Spot Checks, and Facility Exercise Monitoring at specific MTSA facilities.

Deficiencies: the number and nature of deficiencies recorded during the inspections.

Enforcement Actions: sanctions and remedial actions directed by the Coast Guard for incurring deficiencies. 
To assess the reliability of MISLE data, we (1) performed electronic testing for obvious errors in accuracy and completeness; (2) reviewed related documentation, such as MISLE user guides; and (3) held extensive meetings and exchanged correspondence with Coast Guard information systems officials to discuss data entry and analysis and to ensure correct identification of specific data fields. When we found discrepancies, we brought them to the Coast Guard's attention and worked with agency officials to correct them to the extent possible before conducting our analyses. Given the discrepancies we identified, we took several steps prior to our analysis to improve the accuracy and usefulness of the data the Coast Guard supplied. These included:

Removing 77 records from facility deficiencies that were "opened in error," which the Coast Guard indicated generally were duplicate records.

Creating a dataset linking deficiencies and enforcement actions. We performed several checks on the merged file and worked with the Coast Guard to reduce data inconsistencies.

Creating a new "Sector" field based on Coast Guard documentation and interviews on the new sector breakdowns, and, for 2006, consolidating the existing "Unit" field into the appropriate sector.

Coast Guard data analysts acknowledged that there are duplicate deficiencies and enforcement actions in MISLE and that MISLE has no automated process to accurately determine which duplicate activity to remove—the process would involve reviewing individual narratives to attempt to determine which activity was a duplicate. We used the following approach to identify duplicates: when we identified activities that had the same deficiency identification number and citation, we checked 21 other data fields in MISLE for duplication. If two or more observations had the same values in all of these fields, we retained one observation, designating the others as duplicates. 
Using this process, we classified 32 of 7,620 total observations, or less than 1 percent of deficiencies in each year, as duplicative. We chose to keep these observations in the analyses because it was not clear which activity to delete, because we lacked a more reliable means of identifying duplicates that were not identical across all fields examined, and because of the small number of observations our approach identified. After conducting the above steps, we determined that the data were sufficiently reliable to provide a general indication of the magnitude and relative frequencies of compliance activities. The corrected data sets were used to analyze national and sector-based Coast Guard MTSA compliance activities, including inspections, deficiencies, and enforcement actions. Our report discusses MISLE data problems in more detail, along with the steps we believe are needed to address them. To supplement our analysis of MISLE data in understanding the Coast Guard's progress on inspection requirements, we selected 7 of the Coast Guard's 35 sectors for more detailed review. To provide a range of Coast Guard environments in which MTSA is being implemented and to ensure a broad representation of types of ports, we chose sectors with ports that varied in size and type of waterway (ocean, river, and lake) and that provided geographic diversity. While results from these seven sectors cannot be generalized to all Coast Guard sectors, we determined that the selection of these sites was appropriate for our design and objectives and that the selection would provide valid and reliable evidence. In each sector, we interviewed Coast Guard inspectors responsible for oversight of MTSA facility plans, facility security officers at MTSA facilities (28 facilities overall), and other port stakeholders, such as port authority personnel and operators of facilities adjacent to MTSA facilities. 
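The duplicate-identification approach described above can be sketched in a few lines of pandas. The column names below (deficiency_id, citation, unit, exam_date) are hypothetical stand-ins, since the report does not enumerate the actual MISLE field names, and only two "other" fields are shown in place of the 21 that were checked; this is a sketch of the logic, not the actual analysis code.

```python
import pandas as pd

# Hypothetical MISLE extract; real field names are not enumerated in the report.
records = pd.DataFrame({
    "deficiency_id": [101, 101, 102, 103],
    "citation":      ["105.225", "105.225", "105.250", "105.225"],
    "unit":          ["Sector A", "Sector A", "Sector A", "Sector B"],
    "exam_date":     ["2006-03-01", "2006-03-01", "2006-04-12", "2006-05-20"],
})

# Fields beyond deficiency_id and citation that must also match before a
# record is flagged as a duplicate (the report checked 21 such fields).
other_fields = ["unit", "exam_date"]
key = ["deficiency_id", "citation"] + other_fields

# Flag every record after the first fully identical one, retaining one
# observation per matching group and designating the rest as duplicates.
records["is_duplicate"] = records.duplicated(subset=key, keep="first")
deduped = records[~records["is_duplicate"]]

print(records["is_duplicate"].sum())  # prints 1: one observation flagged as duplicative
```

Note that, as in the report's analysis, flagged observations could either be dropped or simply counted and retained; the report chose to retain them because the number involved was small.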
Sectors we visited included Hampton Roads, Virginia; Honolulu, Hawaii; Lake Michigan, Michigan; Los Angeles/Long Beach, California; New York/New Jersey; Seattle, Washington; and Upper Mississippi River, Missouri. We conducted our visits, as well as some follow-up discussions by phone, from December 2006 through August 2007. We also met with Coast Guard Atlantic and Pacific Area officials to discuss compliance activities, and met multiple times with headquarters program and information system officials to discuss our analysis. We reviewed relevant sections of the Maritime Transportation Security Act, the SAFE Port Act, Coast Guard implementing regulations, Navigation and Vessel Inspection Circulars, prior GAO reports, and MISLE documentation. To assess whether the Coast Guard has determined the availability of trained personnel to meet current and future facility inspection requirements, we summarized data provided by the Coast Guard from its Direct Access database on the number of personnel trained to conduct MTSA inspections. Direct Access is the Coast Guard's human resources system, used for a variety of personnel functions. The Coast Guard provided a spreadsheet from this database of personnel certified with one or more Maritime Security Qualifications. To assess the reliability of the spreadsheet data, we looked for obvious errors and inconsistencies in the data and requested information from Coast Guard officials to understand limitations in the data and make corrections where possible. We identified limitations in the data related to duplicate entries and to certifications not yet entered into the system. Duplicate entries occur, for example, when staff are employed as both a reservist and a civilian Coast Guard employee, or are listed both under a sector and under a pre-sector unit.
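The double-counting problem in the personnel spreadsheet can be illustrated with a brief sketch. The record layout below (emplid, category, unit) is hypothetical and does not reflect the actual Direct Access schema.

```python
# Hypothetical roster entries: the same person can appear under two
# employment categories, or under both a sector and a pre-sector unit.
roster = [
    {"emplid": "A100", "category": "civilian",  "unit": "Sector Seattle"},
    {"emplid": "A100", "category": "reservist", "unit": "Sector Seattle"},
    {"emplid": "B200", "category": "active",    "unit": "Pre-sector unit"},
    {"emplid": "B200", "category": "active",    "unit": "Sector Lake Michigan"},
]

def count_unique_personnel(roster):
    """Count each trained person once, regardless of how many
    employment categories or unit listings they appear under."""
    return len({rec["emplid"] for rec in roster})

print(count_unique_personnel(roster))  # 2
```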
We deleted duplicate entries identified by the Coast Guard to arrive at the number of trained personnel, but we were unable to determine how many certifications had not yet been entered in the system. Given this limitation, we found the Direct Access data to be sufficiently reliable to provide only an approximate number of personnel qualified to conduct MTSA facility inspections. The Coast Guard provided verbal information on the number of personnel currently in facility inspection positions. We conducted several interviews with relevant Coast Guard headquarters managers regarding the number of inspectors who have been trained, the allocation of staff to inspection positions, the training provided to current inspectors, and plans for future training and resources for conducting facility inspections. We also discussed current and planned guidance for conducting facility inspections with headquarters officials. In the seven sectors we visited, we met with facility inspectors to discuss facility inspector training, the adequacy of inspection resources, guidance used to conduct inspections, and other inspector responsibilities. We discussed the consistency of inspections with facility security officers in facilities located in the seven sectors. We also reviewed written Coast Guard guidance related to MTSA facility inspections, such as relevant circulars, memos, and on-line resources, and documents on planned revisions to facility oversight regulations. To determine the extent to which the Coast Guard has assessed its MTSA facility oversight program and ensured that program compliance data are accurate, we requested that the Coast Guard provide documentation of any evaluation of activities related to facility oversight, and we reviewed the two annual reports that the Coast Guard provided. We reviewed Office of Management and Budget documents and prior GAO reports on assessing program effectiveness.
Our assessment of the accuracy of the Coast Guard compliance data was based on the reliability assessment of MISLE data that we conducted as part of objective 1. We also discussed the accuracy and utility of MISLE data with facility inspectors during our site visits to seven sectors. We conducted this performance audit from May 2006 through February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix summarizes GAO's analysis of deficiencies identified by Coast Guard facility inspectors nationwide from 2004–2006 based on the MTSA regulatory citation associated with each deficiency. Facility security plans are written to meet requirements established by MTSA regulations, and the deficiency documentation in the Coast Guard's compliance data includes the citation for the associated MTSA regulation. Under a specific citation, in most cases there are a number of sub-elements. We summarized the deficiency data at the general citation level because the data collected on facility compliance did not consistently identify deficiencies at a more detailed level. The data in table 5 are presented based on the frequency of the deficiency citation for 2006. This report was completed under the direction of Steven Calvo, Assistant Director. Other key contributors included Geoffrey Hamilton, Dawn Hoff, Monica Kelly, Dan Klabunde, Rebecca Taylor, Jerome Sandau, and Stan Stenersen.

Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: Dec. 10, 2007.
Homeland Security: TSA Has Made Progress in Implementing the Transportation Worker Identification Credential Program, but Challenges Remain. GAO-08-133T. Washington, D.C.: Oct. 31, 2007.
Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: Oct. 30, 2007.
Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: Aug. 17, 2007.
Maritime Security: Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007.
Coast Guard: Observations on the Fiscal Year 2008 Budget, Performance, Reorganization, and Related Challenges. GAO-07-489T. Washington, D.C.: Apr. 18, 2007.
International Trade: Persistent Weaknesses in the In-Bond Cargo System Impede Customs and Border Protection's Ability to Address Revenue, Trade, and Security Concerns. GAO-07-561. Washington, D.C.: Apr. 17, 2007.
Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: Mar. 28, 2007.
Maritime Security: Public Safety Consequences of a Terrorist Attack on a Tanker Carrying Liquefied Natural Gas Need Clarification. GAO-07-316. Washington, D.C.: Feb. 23, 2007.
Transportation Security: DHS Should Address Key Challenges before Implementing the Transportation Worker Identification Credential Program. GAO-06-982. Washington, D.C.: Sept. 29, 2006.
Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. Washington, D.C.: July 31, 2006.
Maritime Security: Information Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006.
Coast Guard: Observations on Agency Performance, Operations, and Future Challenges. GAO-06-448T. Washington, D.C.: June 15, 2006.
Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T.
Washington, D.C.: Mar. 30, 2006.
Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: Mar. 22, 2006.
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: Dec. 2005.
Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005.
Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005.
Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005.
Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: Apr. 26, 2005.
Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: Apr. 15, 2005.
Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: Mar. 31, 2005.
Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: Mar. 17, 2005.
Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: Mar. 11, 2005.
Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain. GAO-05-161. Washington, D.C.: Jan. 31, 2005.
Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: Jan. 14, 2005.
Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: Sept. 30, 2004.
Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004.
Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004.
Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond. GAO-04-636T. Washington, D.C.: Apr. 7, 2004.
Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: Mar. 31, 2004.
Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: Sept. 9, 2003.
Combating Terrorism: Interagency Framework and Agency Programs to Address the Overseas Threat. GAO-03-165. Washington, D.C.: May 23, 2003.
Combating Terrorism: Actions Needed to Improve Force Protection for DOD Deployments through Domestic Seaports. GAO-03-15. Washington, D.C.: Oct. 22, 2002.
Coast Guard: Vessel Identification System Development Needs to Be Reassessed. GAO-02-477. Washington, D.C.: May 24, 2002.
Coast Guard: Budget and Management Challenges for 2003 and Beyond. GAO-02-538T. Washington, D.C.: Mar. 19, 2002.

To help secure the nation's ports against a terrorist attack, federal regulations have required cargo and other maritime facilities to have security plans in place since July 2004. U.S. Coast Guard (USCG) guidance calls for an annual inspection to ensure that plans are being followed. Federal law enacted in October 2006 required such facilities to be inspected twice a year, one inspection of which is to be conducted unannounced. The USCG plans to conduct one announced inspection and the other as a less comprehensive unannounced "spot check."
GAO examined the extent to which the USCG (1) has met inspection requirements and found facilities to be complying with their plans, (2) has determined the availability of trained personnel to meet current and future facility inspection requirements, and (3) has assessed the effectiveness of its facility inspection program and ensured that program compliance data collected and reported are reliable. GAO analyzed USCG compliance data and interviewed inspectors and other stakeholders in 7 of the USCG's 35 sectors, which varied in size, geographic location, and type of waterway. We could not determine the extent to which the USCG has met inspection requirements because its compliance database does not identify all regulated facilities to establish how many should have been inspected. While the USCG estimates there are about 3,200 facilities requiring inspection, its records indicate that 2,126 annual inspections were conducted in 2006. Headquarters officials said field units reported that all required facility inspections were conducted. However, officials also said some inspections may not have been recorded or were delayed by staff being diverted to natural disaster response. The USCG identified deficiencies in about one-third of inspections, mainly for problems with access controls or missing documentation. Over 80 percent of deficiencies identified by the USCG were resolved by facility operators without the USCG applying formal enforcement actions. Although USCG officials believe they have enough trained inspectors to conduct current and future inspections, two additional factors could affect the USCG's estimates of the number of inspectors needed. First, facility inspectors balance security inspections with other competing duties, such as safety or pollution checks, and giving priority to security inspections could affect these other duties, inspectors said.
Second, new guidance for spot checks calls for these checks to be more detailed, and perhaps more time-consuming, than those some USCG units conducted in the past. For example, the guidance now requires an on-site visit, whereas some units had allowed the check to be a drive-by observation. The effect of the new guidance on resource requirements in these units is unknown. The USCG has not assessed the effectiveness of its facility inspection program. Headquarters guidance gives considerable discretion to local USCG units in deciding how to conduct facility inspections, for example, in deciding whether a fine is warranted. However, the USCG has little or no information on which approaches work better than others and is therefore limited in its ability to make informed decisions in guiding the program. Flaws in the USCG's database, including missing, duplicate, and inconsistent information, complicate the USCG's ability to conduct such analyses or provide other information for making management decisions.
Over 1 million American Indians and Alaskan Natives are eligible for federally funded health care. The Indian Health Service (IHS), an agency of the U.S. Public Health Service, Department of Health and Human Services (HHS), serves as the principal federal agency for providing health care services to this population. IHS' goal is to raise the health status of American Indians and Alaskan Natives to the highest possible level. This is to be accomplished primarily through direct delivery of health care services and assisting tribes in developing and operating their own health care programs. In fiscal year 1994, IHS operated with a budget of about $1.9 billion and was authorized 15,441 full-time equivalent employee positions. Of these positions, about 60 percent, or 9,400, are directly involved in the delivery of health care, including about 5,500 health care professionals, such as physicians, nurses, and physical therapists. The remaining 40 percent consists of administrative, technical, and management employees, some of whom administer the contract health services program. Administratively, IHS is organized into 12 area offices with headquarters in Rockville, Maryland (see app. I). The area offices are responsible for overseeing the delivery of health care services to American Indians and Alaskan Natives by the 144 service units. The service units are responsible for providing health care. IHS provides direct health care services at no cost to eligible American Indians and Alaskan Natives in 41 hospitals and 114 outpatient facilities. Tribes and tribal groups operate another 8 hospitals and 351 outpatient facilities funded by IHS. IHS and tribally operated hospitals are generally small, with 80 percent of them having 50 or fewer beds. IHS' three largest hospitals are in Phoenix, Arizona; Gallup, New Mexico; and Anchorage, Alaska.
The type and scope of direct health care services vary by facility and depend on the availability of staff, equipment, and financial resources. Most IHS and tribal hospitals do not provide nonprimary care services, such as cardiology, ophthalmology, and orthopedics. In fiscal year 1993, IHS facilities had a workload of over 69,000 inpatient admissions and 5.5 million outpatient visits. Health care services that IHS cannot provide in its hospitals and outpatient facilities are purchased from the private sector through the contract health services program. In fiscal year 1993, the Congress appropriated $328 million for this program and in fiscal year 1994, $350 million. These funds are used to obtain care from non-IHS hospitals and providers for (1) patients needing medical services beyond the scope and capability of IHS hospitals and clinics or in emergency situations and (2) American Indians and Alaskan Natives living in IHS areas that do not have direct care medical services. To receive such funding, an individual must (1) be eligible for direct care from the IHS, (2) reside within a designated contract health services delivery area, and (3) be either a member of, or have close social and economic ties with, the tribe located on the reservation. However, some IHS areas, such as California and Portland (which covers Oregon, Washington, and Idaho), do not have any IHS hospitals and refer all American Indians and Alaskan Natives for all inpatient services to non-IHS facilities. Contract health services funds are used to purchase medical services based on a priority system and specific authorization guidelines established by headquarters. The Congress annually appropriates funds for these services as a separate category within the IHS clinical services budget. IHS distributes these funds to area offices primarily based on past funding history. The area offices then distribute the funds to the service units. 
IHS has historically had difficulty recruiting and retaining physicians to staff its hospitals and outpatient facilities. To compensate for physician shortages, IHS often contracts with companies that supply locum tenens physicians, who are temporary physicians hired to fill vacancies for a specific period of time. In addition, these physicians temporarily replace staff who are in training, sick, or on vacation. For 9 months of fiscal year 1993, IHS estimated that this service cost $16.4 million. As U.S. citizens, American Indians and Alaskan Natives are eligible to participate in Medicare and Medicaid on the same basis as any other citizen. In fiscal year 1993, third-party sources reimbursed IHS service units more than $145 million for direct care services provided to this population. IHS policy requires that third-party payers be used before it will assume responsibility for payment of services rendered by non-IHS providers. Thus, American Indians and Alaskan Natives who receive health care under the contract health services program and who are eligible for Medicaid or Medicare or have private insurance must first use these resources to pay for their medical care. IHS will assume responsibility, as funding permits, for any remaining balance for the care received. Health care services provided by IHS to American Indians and Alaskan Natives are not a federal health care entitlement. Rather, the Indian Health Care Improvement Act (25 U.S.C. 1602), which authorizes IHS to provide health care services to American Indians and Alaskan Natives, depends on appropriations from the Congress. Thus, IHS provides health care services only to the extent that funds and resources are made available. American Indian and Alaskan Native leaders have consistently maintained that health care is part of the trust obligation the United States has with the Indian people and that IHS is responsible for providing for all of the health care needs of this population.
Tribal leaders do not believe that IHS is providing this level of service and, in 1994, during hearings on health care reform, brought this issue before the Congress. In those hearings, tribal leaders stated that they want assurance that their members will receive basic and adequate health care coverage. These leaders also said that if the health care problems of American Indians and Alaskan Natives are not addressed in their early stages of development, the result will be an increase in serious illnesses. The health status of this population is worse than that of the general population. For example, the death rate from tuberculosis for American Indians and Alaskan Natives is six times higher than for other Americans, and the death rate from diabetes is three times higher. Furthermore, diabetes is now so prevalent that in many tribes 20 percent of the members have the disease. Diabetes can cause other medical problems, such as (1) eye complications that can lead to blindness, (2) kidney problems that may require dialysis or a kidney transplant to sustain life, and (3) vascular problems that can lead to amputation of a leg. However, these complications can be delayed or prevented with early diagnosis and appropriate treatment, usually by a specialist. Concerned about American Indians' and Alaskan Natives' access to health care and the quality of the medical services they receive, the Ranking Minority Member of the Human Resources and Intergovernmental Relations Subcommittee, House Committee on Government Reform and Oversight, asked us in April 1993 to review the quality of medical care received. In subsequent discussions with subcommittee staff, we agreed to focus our review on two areas: IHS' efforts to ensure that temporary physicians working in IHS facilities are qualified and competent to perform assigned duties, and what happens when requested medical services are delayed.
We performed work at IHS headquarters, the Oklahoma area office, and at Ada and Claremore, Oklahoma, IHS hospitals. We selected these sites because both hospitals indicated they had problems with temporary physicians. We selected the following companies for our review because they had contracts with the hospitals we visited: Harris, Kovacs, Alderman Locum Tenens, Inc., Atlanta, Georgia; Medical Doctor Associates, Norcross, Georgia; Jackson and Coker Locum Tenens, Inc., Atlanta, Georgia; and EmCare, Dallas, Texas. To identify IHS facilities that used temporary physicians and determine the cost of their services, we surveyed IHS facilities (see app. II). To address the issue of how IHS ensures that temporary physicians working in IHS facilities are qualified and competent to perform the work assigned to them, we (1) reviewed IHS’ policies and procedures for credentialing and privileging temporary physicians, (2) obtained and analyzed fiscal year 1993 contracts that IHS facilities had with locum tenens companies, (3) reviewed the credentials files of temporary physicians at two IHS hospitals, and (4) interviewed officials at four locum tenens companies that IHS had contracts with and discussed each company’s policies and procedures for credentialing physicians. At the hospitals we visited, we reviewed minutes of 1993 meetings of quality assurance committees to determine whether the quality of care being provided by temporary physicians was ever questioned. When we identified problems, we reviewed the medical records of the patients involved and discussed the care with IHS staff physicians. We also interviewed an official of the Federation of State Medical Boards (FSMB) to discuss dissemination of physician performance and disciplinary information obtained from FSMB’s data bank. 
To determine what happened to patients who did not receive health care services at the time they were requested, we reviewed a list, prepared by the two hospitals we visited, of all denials and deferrals for fiscal year 1993. From this list, we selected 20 files, tracked whether these patients eventually received care from either IHS or elsewhere, and interviewed the IHS and non-IHS clinicians who provided this care. We also interviewed tribal leaders and health advocates from the Chickasaw and Choctaw Nations and the Sisseton-Wahpeton Sioux and Oglala Sioux Tribes; and interviewed Oklahoma, Navajo, and Aberdeen area office staff and non-IHS health care providers. We reviewed and analyzed documents related to their contract health services budgets, eligibility requirements for receipt of care, medical priorities for funding, and program operations. At IHS headquarters, we analyzed contract health services management reviews of selected area and service unit programs and interviewed IHS officials who were knowledgeable about the program. We performed our work between April 1993 and October 1994 in accordance with generally accepted government auditing standards. IHS has a difficult time retaining enough qualified physicians. To help meet the constant need for physicians to fill vacancies at various facilities and to supplement current medical staff, IHS service units enter into contracts with private companies that supply temporary physicians, known as locum tenens physicians, who provide services in IHS facilities. However, neither IHS' policy nor most of its service units' contracts with locum tenens companies explicitly requires an examination of all medical licenses that a temporary physician may hold before deciding whether the physician is allowed to treat IHS patients. Furthermore, the contracts do not require that locum tenens companies provide IHS with all information they may have on all licenses a physician may hold.
Instead, IHS requires only that a physician have a medical license without restrictions to practice medicine. Furthermore, IHS' own credentialing review process for temporary physicians is often not done in a timely manner. As a result, IHS has unknowingly allowed physicians to work in its hospitals and treat patients despite performance problems or disciplinary actions, taken or pending against their licenses, for offenses such as gross and repeated malpractice and unprofessional misconduct. At the two IHS hospitals we visited, we found that 7 of the 50 temporary physicians referred to IHS had prior histories of performance or disciplinary problems. In some cases, IHS officials did not know of these problems when the hospital accepted the physician for work because of incomplete credentials information. IHS does not have a formal system to help its facilities share information on the performance of temporary physicians. At one hospital, IHS officials concluded that a temporary physician misdiagnosed and inappropriately treated a patient, which may have contributed to the patient's death. The IHS facility notified the locum tenens company of the incident and told the company that it did not want further services from this physician. However, the IHS facility took no action to alert other IHS facilities. IHS estimated that for 9 months of fiscal year 1993, it spent about $16.4 million on contracts with locum tenens companies. We estimated that during fiscal year 1993 IHS obtained the services of more than 300 temporary physicians working in such areas as family practice, internal medicine, emergency room care, pediatrics, and obstetrics and gynecology. These physicians were needed because of vacancies and short-term absences of physicians who were on vacation, in training, or sick.
While facilities in each of IHS’ 12 area offices use temporary physicians, 5 areas—Oklahoma, Aberdeen, Navajo, Phoenix, and Alaska—accounted for most of the funds expended for temporary physicians’ services. Collectively, the 5 areas serve about 67 percent of IHS’ user population and, as table 2.1 shows, accounted for 84 percent of the $16.4 million spent during fiscal year 1993 on temporary physicians’ services as of July 1993. IHS facilities generally do not include a requirement in their contracts with locum tenens companies to (1) verify all licenses that a physician may hold, (2) inform IHS of the status of all licenses, and (3) provide all performance and disciplinary data that they may have on a temporary physician. Furthermore, IHS’ credentials and privileges policy requires only that a physician have an active state medical license with no restrictions to practice medicine. As a result, IHS does not always obtain complete credentials information and is not always aware of temporary physicians with performance or disciplinary problems. At the two locations we visited, 5 locum tenens companies provided 50 temporary physicians in fiscal year 1993. We reviewed the credential files of 21 of these physicians and found that 7 had prior performance or disciplinary problems. This information had not been provided to the IHS facility that had contracted for each physician’s services. IHS officials at these locations told us that they did not specifically request the companies to provide all available data because they were under the impression that the contracts with locum tenens companies require disclosure of performance and disciplinary information. IHS contracts with locum tenens companies generally specify the length of time physician services are required; the type of specialty needed, such as emergency room physician; the diagnostic and procedural skills needed; and the minimum professional qualifications that a physician must meet. 
To determine whether a physician meets the minimum qualifications, contract terms also require that the locum tenens companies submit the following credentialing information to an IHS facility: (1) evidence that the physician has a medical degree, (2) a copy of the physician's current medical license, (3) evidence of liability insurance, (4) a signed IHS application for appointment to the medical staff, (5) a request for clinical privileges, and (6) a statement of health. Other minimum qualifications vary by IHS facility and by the type of specialty requested. Locum tenens company officials told us that they will perform whatever verification of a physician's professional qualifications is necessary to meet the terms of the contract. But most IHS contracts do not (1) contain explicit requirements that locum tenens companies obtain and disclose information on actions taken against any medical licenses held by a physician or (2) require that locum tenens companies obtain and provide IHS with any information on ongoing or pending investigations involving temporary physicians. Three of the four locum tenens companies we visited routinely use FSMB's disciplinary data bank to determine if any information has been reported on a physician's performance. The FSMB data bank provides historical information from all state medical licensing boards about whether a physician's medical license has had action taken against it and the nature and date of the action. However, the FSMB data bank does not contain information on ongoing or pending investigations against a medical license. This information must be obtained from the individual state medical licensing boards, which all the locum tenens companies we visited contact to verify medical licenses. Locum tenens companies query the FSMB data bank electronically and often receive results in a day. Thus, they quickly become aware of any performance problems that a temporary physician had in the past.
However, the FSMB contract with locum tenens companies precludes the companies from providing detailed information on a physician’s performance to a third party, such as IHS. A company can, however, inform a third party that a physician had a performance or disciplinary action taken against a medical license. Thus, IHS can obtain an indication that a performance problem may exist if it asks for such information from a contractor. One of the IHS facilities that we visited does obtain this information. Because of prior problems that this facility encountered with temporary physicians and locum tenens companies, it contractually requires locum tenens companies to query FSMB and inform it as to whether a physician had performance or disciplinary action taken against a license. Because temporary physicians do not always disclose complete information on their past performance, IHS officials at this facility believe that it is especially critical that they check the status of each medical license. The following example shows the importance of checking all medical licenses that a physician may have. At one IHS facility, a temporary physician worked as an internist from June 21 to July 15, 1993. The locum tenens company provided the hospital with a curriculum vitae on June 17, 1993. The physician’s application for appointment to the medical staff at the IHS facility indicated that the physician was licensed to practice medicine in three states and that the physician was never censured or reprimanded by a licensing board. The locum tenens company provided copies of two state medical licenses. On June 21, 1993, the IHS credentialing official called the licensing board in one of these two states and learned that the physician’s license was in good standing. Upon further review of the physician’s curriculum vitae, the credentialing official noticed that the physician had practiced for 15 years in one of the three states where he was licensed. 
However, neither the company nor the physician had provided IHS with a copy of this license. The credentialing official contacted that state’s medical licensing board on July 14, 1993, and learned that the physician had had two actions taken against this license in April 1992. According to the state licensing board’s report, the physician was fined $3,000 and ordered to attend 50 hours of continuing medical education for failing to keep written medical records justifying the course of treatment of a patient, altering medical records, and failing to practice medicine with an acceptable level of care, skill, and treatment in properly diagnosing a patient’s heart condition. The physician left the IHS facility after his contractual obligation ended on July 15, 1993.

The Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) requires each entity that seeks accreditation to perform a credentialing review of every physician it employs. This requirement is designed to protect patients from being treated by an unqualified or incompetent physician. IHS follows JCAHO’s accrediting requirements and requires each of its facilities to conduct a credentials review that consists of (1) verifying with a state medical licensing board that a physician has an active, unrestricted medical license; (2) verifying training with the medical school, internship, or residency program, and professional affiliations, such as board certification; (3) obtaining information to evaluate the physician’s suitability for appointment to the medical staff, such as explanations of past performance problems, disciplinary actions taken against a physician’s license, or malpractice suits that involved the physician; (4) checking with references to verify clinical competence, judgment, character, and ability to get along with people; and (5) obtaining information on physical and mental health status.
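The five-part credentials review just described can be sketched as a simple checklist. The sketch below is purely illustrative; the check names and helper functions are our own invention, not part of any IHS system:

```python
# Illustrative sketch of the five-part credentials review described above.
# All names here are hypothetical; this models the checklist, not an IHS system.

REQUIRED_CHECKS = [
    "license_verified_with_state_board",  # active, unrestricted medical license
    "training_verified",                  # medical school, internship, residency
    "suitability_information_obtained",   # past problems, discipline, malpractice
    "references_checked",                 # competence, judgment, character
    "health_status_obtained",             # physical and mental health
]

def review_complete(completed: set[str]) -> bool:
    """A physician is cleared only when every required check is done."""
    return all(check in completed for check in REQUIRED_CHECKS)

def missing_checks(completed: set[str]) -> list[str]:
    """List the checks still outstanding, in review order."""
    return [c for c in REQUIRED_CHECKS if c not in completed]
```

Modeled this way, the report's central finding is that physicians were sometimes granted privileges while `missing_checks` was still nonempty.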
IHS procedures also require that a physician’s credentials be verified before the physician is allowed to provide medical services to a patient. However, if time does not permit a full credentialing review before a physician reports for duty, an IHS facility director can grant temporary privileges to practice medicine. The decision to grant temporary privileges to a physician is based on the clinical director’s review and approval of the physician’s application for appointment to the medical staff and his or her request for clinical privileges. But the credentialing official is still expected to perform a full review of a physician’s credentials. IHS credentialing officials told us that sometimes they cannot conduct a full credentials review before a temporary physician treats patients because of the short period between the time when an IHS facility contracts for physician services and the time when a physician reports to the facility. As a result, a temporary physician can treat patients and leave a facility before a complete credentials review has been performed. An incomplete credentialing process can result in a health care facility unknowingly allowing an incompetent physician to provide medical care to patients, thereby placing the facility and patients at risk. The short time frames were evident in the 43 contracts we reviewed; 37 were awarded less than 2 weeks before the facility acquired physician services, not enough time for a facility to confirm credentials information before a temporary physician begins work. Furthermore, temporary physicians often perform work and are gone before the credentials check is completed. The credentials check can take 30 days to complete. The average time from when a contract was awarded to the date services began was 7 days. The length of time IHS facilities needed the services of temporary physicians varied, with many periods of service ranging from 21 to 32 days. 
In addition, locum tenens companies often use more than one physician to fulfill a contract. For example, a company sent 10 different temporary physicians to staff 1 position for 1 month. When multiple physicians are used to fulfill a contract, credentialing becomes an even more time-consuming and complicated process. An official from one locum tenens company told us that temporary physicians tend to be transient and fall into one of three categories: (1) new physicians who do not know where they want to practice medicine and want to explore different settings before starting a practice, (2) physicians over 40 years old who no longer want to maintain a private practice and want to travel to different locations, and (3) physicians with performance or disciplinary problems who move from place to place to escape detection. Physicians in the latter category are identified primarily through state medical licensing boards, although not all performance problems are reported to the boards.

At present, IHS facilities do not have a formal mechanism to share information on the performance of temporary physicians who have worked in the IHS system. As a result, a poorly performing physician can move from one IHS location to another with little chance of being detected. The importance of sharing information among IHS facilities is highlighted by the following example. A temporary physician examined a patient in the emergency room. The patient was complaining of chest and abdominal pain that the physician diagnosed as constipation. He prescribed a laxative for the patient and sent him home. The patient returned to the emergency room about an hour later saying his pain had worsened. The temporary physician reexamined him, reaffirmed the diagnosis of constipation, and told him to go home again. However, the emergency room nursing supervisor noticed from the patient’s medical chart that he had a history of heart disease and that his condition had deteriorated since his first visit.
Therefore, she ordered that an electrocardiogram be performed on the patient and notified the full-time IHS internal medicine physician of the patient’s condition. The IHS staff physician ordered that the patient be admitted to the intensive care unit to determine whether the patient was having a heart attack. The nurse admitted the patient immediately, but he died of a cardiac arrest 15 minutes after being admitted. The IHS facility’s chief of the emergency department deemed the care this physician provided unacceptable and informed the locum tenens company of his performance problems. The company removed the physician from its active list of applicants. However, the IHS facility did not inform other facilities of this individual’s performance. As a result, the physician could find work at another IHS facility under contract with a different locum tenens company.

American Indian and Alaskan Native patients should have reasonable assurance that every physician who treats them in an IHS facility is qualified to do so. Thus, except for emergencies, we do not believe that IHS should allow physicians to work within the IHS system until a complete examination of all medical licenses has been performed and IHS service unit officials are informed of the results. Furthermore, locum tenens companies under contract with IHS need to be required to provide all information available to them on a temporary physician that could potentially adversely affect the care provided to patients. Current IHS policy does not explicitly require that all medical licenses be verified. However, a review of all medical licenses can reduce the risk of patients receiving substandard care from temporary physicians who may have had prior performance problems. IHS facilities can benefit from sharing information about the performance of temporary physicians.
Better communication among facilities is needed to identify and track temporary physicians’ performance, both good and bad, while working with IHS. Such an information-sharing network would be of substantial benefit to IHS personnel responsible for conducting credentialing checks and could reduce duplicative credentialing checks.

We recommend that the Assistant Secretary for Health, Public Health Service, ensure that the Director of IHS take the following actions:

- Revise IHS’ credentials and privileges policy to explicitly state that the status of all state medical licenses, both active and inactive, be verified.
- Develop standard provisions to include in contracts with locum tenens companies that require a company to verify and inform IHS of the status of all state medical licenses, both active and inactive.
- Establish a system that will facilitate the dissemination of information among IHS facilities on the performance of temporary physicians who provide services in IHS.

In commenting on a draft of this report, the U.S. Public Health Service agreed with our findings and recommendations. Its response is reprinted in appendix VI. The Public Health Service stated that IHS plans to revise its policy on personal services contracts to make it consistent with its policy guidance on the credentials and privileges review process of medical staff. This revision will require the verification of all medical licenses, both active and inactive, for all physicians, including temporary physicians, whether hired directly by IHS or through locum tenens companies. The policy guidance on personal services contracts will also be revised to require locum tenens companies to verify and inform IHS of adverse actions taken on all medical licenses. In addition, IHS is developing an electronic bulletin board to share personnel information among area offices and service units.
The bulletin board will include a component on credentialing activities, such as performance information on temporary physicians. The Public Health Service also pointed out that in verifying state medical licenses, many states will not release information on matters under investigation. While this may be true in general, many state medical licensing boards will disclose whether an investigation is being conducted on a particular physician. If state boards are queried, the clinical director of the IHS facility can be alerted that a problem may exist and that follow-up with the physician in question may be warranted.

IHS facilities cannot meet all of the health care needs of American Indians and Alaskan Natives. Recognizing this, the Congress annually appropriates funds for care to be administered by non-IHS providers under contract with IHS. But the funds cover only 75 percent of the need for these services. Because of the limited funds, IHS prioritizes the care that it will pay for. The result is reduced access to contract medical services for American Indians and Alaskan Natives. In fiscal year 1993, IHS denied or deferred 82,675 requests for contract medical services.

IHS is now implementing staff reductions as required by the Federal Workforce Restructuring Act of 1994. An official in IHS’ Office of Administration and Management does not believe that these reductions will significantly affect either the delivery of medical services or planned expansion programs in fiscal years 1995 and 1996 if IHS’ appropriation for fiscal year 1996 is not reduced and medical services can be purchased through contracts with health care providers. However, he is concerned about how scheduled staff reductions after fiscal year 1996 may affect IHS’ delivery of medical services and its expansion program.

Few IHS service units are able to provide a full range of medical services to American Indians and Alaskan Natives.
Thus, IHS utilizes non-IHS providers to deliver services that cannot be provided in-house. This is done with contract health services funds. For example, only 4 of IHS’ 144 service units have hospitals that are equipped and staffed to provide comprehensive medical services such as intensive care, inpatient surgery, high-risk obstetrics, and specialty medical services such as ophthalmology (see apps. III and IV for sizes of hospitals). Forty-five service units have inpatient hospitals that do not provide a full range of medical services, such as inpatient surgical services and obstetrical deliveries. Eighty-four service units have no inpatient IHS hospital and provide services at outpatient facilities. And 11 service units have no IHS medical facilities at all. IHS distributes contract health services funds among its 12 area offices based primarily on the level of funding that the area received in previous years. This system of allocating funds does not take into account current data on the number of American Indians and Alaskan Natives in each area who rely on IHS for health care services, the health care needs of the population, or the health care services available within each area. In fiscal years 1991 and 1992, appropriations for contract health services increased about 6 percent each year. However, an IHS official told us that the cost for contract health services rose over 11 percent from 1991 to 1992. Furthermore, according to IHS, the total funds available for contract health services covered only 75 percent of the need for this type of service. Table 3.1 shows the amount of contract health services funds available to area offices and the eligible population of each area for fiscal year 1993. IHS has developed medical priorities guidelines that are used by all facilities to determine what care will receive the highest priority for available contract health services funds (see app. V). 
Emergent and urgent care—such as emergency room care, life-threatening injuries, obstetrical deliveries, and neonatal care—is given the highest priority for funding and is generally funded. However, other care is given a lesser priority and is not always funded. Preventive care, such as screening mammograms, is next on the priority list. Third on the priority list are primary and secondary care services, such as specialty consultations in pediatrics and orthopedics. The lowest priority is for chronic tertiary and extended care services, such as skilled nursing facility and rehabilitation care. Using medical priority guidelines, IHS service units denied or deferred 82,675 requests for contract health services in fiscal year 1993. This represents a 76 percent increase over denials and deferrals reported in fiscal year 1990. A request for funding is denied when the patient’s care does not fall within the medical priorities for which funds are available and the patient informs the contract health services staff that he or she intends to obtain medical care regardless of whether IHS will pay for it. If the medical care does not fall within medical priorities and a patient is willing to wait until funding may become available, the care is deferred. Of the 70,540 requests that were deferred, 43 percent were for preventive care, such as eye examinations. The remaining deferrals were for acute and chronic primary, secondary, and tertiary care, such as coronary bypass surgery and hip replacement surgery. Some of the patients whose initial requests were deferred may have ultimately received care from IHS or others, but IHS does not have data readily available on the extent to which this has occurred. The following is an example of a case where the patient requested funding for medical care from the contract health services program, but had her request deferred because her condition was not of a sufficiently high priority to receive immediate funding. 
As a result, care was delayed for 6 months until the patient’s condition deteriorated to the point where the problem was critical and immediate care was required. The 73-year-old woman was diagnosed with severe circulatory problems in her left leg in January 1993 at an IHS hospital. The physician assistant who saw the patient thought she should be referred to a vascular surgeon in the community for surgical treatment. The physician assistant did not believe that the patient was in immediate danger, that is, was not in danger of losing her leg within 48 hours. However, he did believe that care was needed to prevent further deterioration. This case was presented to the hospital’s contract health services committee on January 25, 1993, to determine whether her care was a high enough priority to be funded. Contract health services staff deferred her care because the funds available only allowed the service unit to treat the more seriously ill patients with more urgent medical conditions than hers. Although the woman was covered by Medicare, she could not afford to pay the $338 that Medicare would not cover. Had she been able to pay the $338, she could have received immediate care from a non-IHS provider. Once a month for the next 6 months, the patient returned to the IHS hospital clinic for care. After each visit, her case was referred to the contract health services committee and her care was deferred each time because it did not fall within medical priorities. In July 1993, the patient’s referral was approved by contract health services because her condition had deteriorated to such an extent that she was in immediate danger of losing her left leg. IHS contract health services funds then paid the costs not covered by Medicare that the patient could not afford to pay. Table 3.2 shows the number of cases that were denied and deferred in fiscal year 1993 by area office. IHS officials stated that the number of deferrals and denials only document part of the unmet need. 
Deferrals and denials only track those who have requested services. There is no way to track the number of American Indians and Alaskan Natives who do not use the IHS system because they know that their care will be deferred. The Navajo and Oklahoma areas accounted for 69 percent of the total denials and deferrals in fiscal year 1993. These areas have 13 IHS hospitals ranging in size from 11 to 107 beds and 49 outpatient facilities that provide medical services to approximately 488,395 American Indians. This represents about 41 percent of all American Indians and Alaskan Natives who have used IHS services within the last 3 years. The hospitals and outpatient facilities in these areas do not have the staff or equipment to provide all of the health care services needed. As a result, contract health services funds are being relied upon to provide care that IHS does not have the capacity to provide. But if the care needed is not a high priority, it does not get funded. For example, in fiscal year 1993 in the Navajo area, 16,503 requests for eye examinations or eyeglasses were not funded because of insufficient contract health services funds. Officials in both area offices told the Public Health Service that they need more funds to meet the needs of their populations. IHS has requested increased funding for the contract health services program, but HHS has not approved the level of increases that IHS has requested. Furthermore, the dollars available for health services to all areas are limited and any increase in funds to one IHS area would likely result in a decrease in funds to another IHS area. The Federal Workforce Restructuring Act of 1994 requires executive agencies to reduce staff. In a September 1994 meeting with the Office of Management and Budget (OMB), the Secretary of HHS requested a waiver of this requirement for IHS. 
The Secretary stated that IHS needed time to plan and implement a restructuring program that would consolidate some of IHS’ area offices to reduce IHS’ workforce without drastically affecting delivery of health services. OMB did not approve the waiver but did agree to give IHS time to implement staff reductions in a way to minimize the impact on IHS’ delivery of medical services and its planned expansion program. IHS has 15,425 staff for fiscal year 1995. Beginning in fiscal year 1996, this number will decrease annually until a staffing level of 14,083 is reached in fiscal year 1999. An official in IHS’ Office of Administration and Management told us that when supplemented by contract physicians, IHS’ staffing levels in fiscal years 1995 and 1996 will be adequate to meet the staffing requirements of both its current health facilities and those that are scheduled to open in these years. However, in his opinion, if IHS’ fiscal year 1996 appropriation is reduced, the agency will not be able to adequately staff its present facilities and the new facilities scheduled to open in fiscal years 1995 and 1996. If the agency is not able to adequately staff its new facilities, it will be unable to provide services such as physical therapy, respiratory therapy, radiology, optometry, and community health services, according to IHS officials. These services will have to be sought from non-IHS providers in the community with contract health services funds. As a result, more medical services could be denied and deferred. This official also told us that he is concerned that the staffing reductions in fiscal year 1997 and beyond could affect IHS’ delivery of medical services and its planned expansion program. In fiscal year 1997, IHS plans to open and staff a large medical center in Anchorage, Alaska, to replace its old hospital. Additionally, IHS must staff new or expanded services in eight other facilities. 
Pursuant to a congressional request, GAO provided information on American Indians’ access to quality health care services, focusing on the: (1) Indian Health Service’s (IHS) efforts to ensure that temporary physicians working in IHS facilities are qualified and competent to perform assigned duties; and (2) extent that medical services are delayed under the IHS Contract Health Services Program. GAO found that: (1) IHS patients may be receiving substandard care because IHS is not always aware of temporary physicians who have had performance or disciplinary problems; (2) although IHS requires that temporary physicians possess a current medical license without restrictions, it fails to verify all of the physicians’ current or prior licenses; (3) most IHS facilities have contracts with companies that are not required to inform IHS of the status of their physicians’ licenses; (4) IHS facilities do not possess a network to share information on the performance of their temporary physicians; (5) although IHS can purchase specialized medical services from non-IHS providers under the Contract Health Services program, preventive care is not always funded; and (6) IHS is implementing legislatively required staff reductions; however, officials are unsure of how these reductions will impact future medical services or expansion programs if appropriations are reduced as well.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business and is especially important for government agencies, where maintaining the public’s trust is essential. While the dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have enabled agencies such as SEC to better accomplish their missions and provide information to the public, these changes also expose federal networks and systems to various threats. For example, the Federal Bureau of Investigation has identified multiple sources of cyber threats, including foreign nation states engaged in information warfare, domestic criminals, hackers and virus writers, and disgruntled employees working within an organization. Concerns about these threats are well founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. For example, the number of incidents reported by federal agencies to the United States Computer Emergency Readiness Team (US-CERT) increased dramatically over the past 3 years, from 3,634 incidents reported in fiscal year 2005 to 13,029 incidents in fiscal year 2007 (a 259 percent increase). Without proper safeguards, systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain or manipulate sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Our previous reports and reports by federal inspectors general describe persistent information security weaknesses that place federal agencies at risk of disruption, fraud, or inappropriate disclosure of sensitive information.
Accordingly, we have designated information security as a governmentwide high-risk area since 1997, a designation that remains in force today. Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program to provide information security for the information and systems that support the operations and assets of the agency, using a risk- based approach to information security management. Following the stock market crash of 1929, Congress passed the Securities Exchange Act of 1934, establishing SEC to enforce securities laws, regulate the securities markets, and protect investors. To carry out its responsibilities and help ensure that securities markets are fair and honest, SEC issues rules and regulations that promote adequate and effective disclosure of information to the investing public. The commission also oversees the registration of other key participants in the securities industry, including stock exchanges, broker-dealers, clearing agencies, depositories, transfer agents, investment companies, and public utility holding companies. SEC is an independent, quasi-judicial agency that operates at the direction of five commissioners appointed by the President and confirmed by the Senate. In fiscal year 2008, SEC received a budget authority of $906 million and had a staff of 3,511 employees. In addition, the commission collected $569,000 in filing fees and about $434 million in penalties and disgorgements. To support its financial operations and store the sensitive information it collects, SEC relies extensively on computerized systems interconnected by local and wide-area networks. 
For example, to process and track financial transactions, such as filing fees paid by corporations, disgorgements and penalties from enforcement activities, and procurement activities, SEC relies on several enterprise database applications—Momentum; Phoenix; Electronic Data Gathering, Analysis, and Retrieval (EDGAR); and Fee Momentum—and a general support system network that allows users to communicate with the database applications. The database applications provide SEC with the following capabilities:

- Momentum is used to record the commission’s accounting transactions, to maintain its general ledger, and to maintain some of the information SEC uses to produce financial reports.
- Phoenix contains and processes sensitive data relating to penalties, disgorgements, and restitution on proven and alleged violations of securities and futures laws.
- EDGAR performs automated collection, validation, indexing, acceptance, and forwarding of submissions by companies and others that are required to file certain information with SEC. Its primary purpose is to increase the efficiency and fairness of the securities market for the benefit of investors, corporations, and the economy by accelerating the receipt, acceptance, dissemination, and analysis of time-sensitive corporate information filed with the agency.

The general support system is an integrated client-server system composed of local- and wide-area networks and is organized into distinct subsystems based along SEC’s organizational and functional lines. The general support system provides services to internal and external customers who use them for their business applications. It also provides the necessary security services to support these applications.
Under FISMA, the Chairman of SEC has responsibility for, among other things, (1) providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information systems and information; (2) ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and (3) delegating to the agency chief information officer (CIO) the authority to ensure compliance with the requirements imposed on the agency. FISMA requires the CIO to designate a senior agency information security officer who shall carry out the CIO’s information security responsibilities.

SEC has corrected or mitigated 18 of the 34 security control weaknesses that we had reported as unresolved at the time of our prior audit report in 2008. For example, it has adequately validated electronic certificates for incoming connections, physically secured the perimeter of the operations center, monitored unusual and suspicious activities at its operations center, and removed network system accounts and data center access rights from separating employees. In addition, SEC has made progress in improving its information security program. For example, the commission has developed, documented, and implemented a policy on remedial action plans to help ensure that deficiencies are mitigated in an effective and timely manner, and provided individuals with training for incident handling. These efforts constitute an important step toward strengthening the agencywide information security program mandated by FISMA. While SEC has made important progress in strengthening its information security controls, it has not completed actions to correct or mitigate 16 of the previously reported weaknesses.
For example, SEC has not adequately documented access privileges for the EDGAR application, always implemented patches on vulnerable workstations and enterprise database servers, or always sufficiently protected passwords. Failure to resolve these issues could leave sensitive data vulnerable to unauthorized disclosure, modification, or destruction. In addition to the 16 previously reported weaknesses that remain uncorrected, we identified 23 new weaknesses in controls intended to restrict access to data and systems, as well as weaknesses in other information security controls, that continue to jeopardize the confidentiality, integrity, and availability of SEC’s financial and sensitive information and information systems. Previously reported and newly identified weaknesses hinder the commission’s ability to perform vital functions and increase the risk of unauthorized disclosure, modification, or destruction of financial information. A key reason for these weaknesses was that SEC did not fully implement key activities of its information security program.

A basic management objective for any organization is to protect the resources that support its critical operations and assets from unauthorized access. Organizations accomplish this by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computer resources (e.g., data, programs, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and loss. Specific access controls include identification and authentication, authorization, cryptography, audit and monitoring, and physical security. Without adequate access controls, unauthorized individuals, including outside intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or personal gain.
In addition, authorized users can intentionally or unintentionally modify or delete data or execute changes that are outside of their span of authority.

A computer system must be able to identify and authenticate the identities of users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system must also establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. Furthermore, SEC policy requires the implementation of automated identification and authentication mechanisms that enable the unique identification of individual users and systems.

SEC did not consistently enforce identification and authentication controls for its users and systems. For example, it did not always securely configure the SNMP community string (similar to a password) used to monitor and manage network devices; remove the default vendor account for a remote network service, which could allow access to the network service without the need to provide a password; restrict multiple database administrators from sharing the same log-on application ID to a powerful database account; and uniquely identify individual accounts on network switches for HTTPS login. As a result, increased risk exists that users will not be uniquely identified before they access the SEC network, and SEC will not be able to hold them accountable in the event of a security incident.

Authorization is the process of granting or denying access rights and privileges to a protected resource, such as a network, system, application, function, or file.
A key component of granting or denying access rights is the concept of “least privilege.” Least privilege is a basic principle for securing computer resources and data that means that users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users’ access to only those programs and files that they need in order to do their work, organizations establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that are associated with a particular file or directory, regulating which users can access it—and the extent of that access. To avoid unintentionally giving users unnecessary access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. In addition, SEC policy requires that each user or process be assigned only those privileges or functions needed to perform authorized tasks and that approval of such privileges be documented. Furthermore, SEC policy states that only services that are absolutely necessary are allowed to have a remote connection. SEC did not always sufficiently restrict system access and privileges to only those users that needed access to perform their assigned duties. For example, SEC did not always remove excessive user privileges on its financial systems, properly document or maintain approval of user access privileges, restrict unnecessary remote access to database servers, and limit users’ privileges so that users do not monopolize database system resources during critical times of the day. As a result, increased risk exists that users could gain inappropriate access to computer resources, circumvent security controls, and deliberately or inadvertently read, modify, or delete critical financial information. In addition, SEC’s financial information may not be available when it is needed.
Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plaintext into ciphertext using a special value known as a key and a mathematical process known as an algorithm. The National Security Agency recommends encrypting network services. If encryption is not used, user ID and password combinations are susceptible to electronic eavesdropping by devices on the network when they are transmitted. Although SEC has implemented a network topology that employs extensive switching and limits eavesdropping to only the network segment accessible by the potential eavesdropper, it did not always ensure that information transmitted over the network was adequately encrypted. While the eavesdropping risk on the SEC network is reduced by its topology, nonetheless, increased risk exists that individuals could capture user IDs and passwords and use them to gain unauthorized access to network devices. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail for determining the source of a transaction or attempted transaction and monitoring users’ activities. To be effective, organizations should (1) configure the software to collect and maintain a sufficient audit trail for security-relevant events; (2) generate reports that selectively identify unauthorized, unusual, and sensitive access activity; and (3) regularly monitor and take action on these reports. 
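The plaintext-key-algorithm relationship described in the cryptography discussion above can be shown with a deliberately tiny sketch. This is a toy construction built from a hash function purely to illustrate the concept; it is not secure, and real deployments would use a vetted algorithm such as AES, typically via TLS for data in transit.

```python
import hashlib
from itertools import count

def _keystream(key: bytes):
    """Expand the key into a byte stream (toy construction, NOT secure)."""
    for block in count():
        yield from hashlib.sha256(key + block.to_bytes(8, "big")).digest()

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Ciphertext = plaintext XOR keystream(key)."""
    return bytes(p ^ k for p, k in zip(plaintext, _keystream(key)))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse
```

Without the key, an eavesdropper on the network segment sees only ciphertext; this is why the report flags unencrypted transmission of user IDs and passwords as a risk.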
SEC also requires the enforcement of auditing and accountability by configuring information systems to produce, store, and retain audit records of system, application, network, and user activity. SEC did not adequately configure several database systems to enable auditing and monitoring of security-relevant events. For example, it did not configure one database to record successful log-ons or security violations for unauthorized modification of data, and three databases to safeguard log data against loss. As a result, there is increased likelihood that unauthorized activities or policy violations would not be detected. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed, and periodically reviewing access rights granted to ensure that access continues to be appropriate based on criteria established for granting it. At SEC, physical access control measures (such as guards, badges, and locks, used either alone or in combination) are vital to protecting its computing resources and the sensitive data it processes from external and internal threats. Although SEC has strengthened its physical security controls, certain weaknesses reduced its effectiveness in protecting and controlling physical access to sensitive work areas. For example, on multiple occasions SEC employees entered electronically secured interior spaces by following another employee through an open door instead of using their badges to obtain access. In addition, physical security standards have been drafted but have not been approved by management. As a result, increased risk exists that unauthorized individuals could gain access to sensitive computing resources and data and inadvertently or deliberately misuse or destroy them. 
In addition to having access controls, an organization should have policies, procedures, and control techniques in place to appropriately segregate computer-related duties. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often segregation of incompatible duties is achieved by dividing responsibilities among two or more organizational groups. Dividing duties among two or more individuals or groups diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of another. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. In addition, SEC policy requires that each user or process be assigned only those privileges or functions needed to perform authorized tasks. SEC did not adequately segregate incompatible computer-related duties and functions. For example, a financial services branch chief could perform multiple incompatible duties such as creating, modifying, and deleting security organizations, roles, and security categories. At the same time, he could perform financial operations such as creating, approving, and changing invoices. These conditions existed, in part, because SEC lacked implementation guidelines for assigning incompatible duties among personnel administering its computer applications environment. In addition, although SEC has logically separated many of its networked devices, it did not always adequately separate network management traffic from general network traffic. 
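An automated check for the incompatible-duty problem described above might look like the following sketch. The duty names and incompatible pairs are hypothetical examples, not SEC's actual role definitions.

```python
# Pairs of duties that no single user should hold together, so that one
# individual cannot control all key aspects of a process. Hypothetical names.
INCOMPATIBLE_PAIRS = [
    ("create_invoice", "approve_invoice"),
    ("manage_security_roles", "change_invoice"),
]

def conflicts(user_duties: set[str]) -> list[tuple[str, str]]:
    """Return every incompatible pair fully contained in one user's duties."""
    return [pair for pair in INCOMPATIBLE_PAIRS
            if user_duties.issuperset(pair)]
```

Run against the branch chief described in the report, whose duties spanned both security administration and financial operations, a check like this would have flagged the combination for review.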
As a result, general users could gain inappropriate access and intentionally or inadvertently disrupt network operations. As a consequence, increased risk exists that users could perform unauthorized system activities without detection. Configuration management is another important control that involves the identification and management of security features for all hardware and software components of an information system at a given point and systematically controls changes to that configuration during the system’s life cycle. An effective configuration management process includes procedures for (1) identifying, documenting, and assigning unique identifiers (for example, serial number and name) to a system’s hardware and software parts and subparts, generally referred to as configuration items; (2) evaluating and deciding whether to approve changes to a system’s baseline configuration; (3) documenting and reporting on the status of configuration items as a system evolves; (4) determining alignment between the actual system and the documentation describing it; and (5) developing and implementing a configuration management plan for each system. In addition, establishing controls over the modification of information system components and related documentation helps to prevent unauthorized changes and ensure that only authorized systems and related program modifications are implemented. This is accomplished by instituting policies, procedures, and techniques that help make sure all hardware, software, and firmware programs and program modifications are properly authorized, tested, and approved. SEC has implemented several elements of a configuration management process. Specifically, it has documented policies and procedures for assigning unique identifiers and naming configuration items so that they can be distinguished from one another and for requesting changes to configuration items. 
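The first three configuration-management procedures listed above (unique identifiers for configuration items, approval of changes to a baseline, and status accounting as the system evolves) can be sketched as a small registry. All class, item, and status names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    item_id: str
    new_version: str
    status: str = "proposed"  # proposed -> approved -> implemented

class ConfigRegistry:
    """Tracks uniquely identified configuration items against a baseline."""
    def __init__(self) -> None:
        self.baseline: dict[str, str] = {}      # item_id -> approved version
        self.history: list[ChangeRequest] = []  # status accounting

    def add_item(self, item_id: str, version: str) -> None:
        if item_id in self.baseline:
            raise ValueError("configuration identifiers must be unique")
        self.baseline[item_id] = version

    def request_change(self, item_id: str, new_version: str) -> ChangeRequest:
        cr = ChangeRequest(item_id, new_version)
        self.history.append(cr)  # every request is documented, whatever its fate
        return cr

    def approve_and_apply(self, cr: ChangeRequest) -> None:
        """Only evaluated, approved changes alter the baseline."""
        cr.status = "approved"
        self.baseline[cr.item_id] = cr.new_version
        cr.status = "implemented"
```

The point of the baseline is exactly the configuration-audit step the report describes: comparing the actual system against the recorded baseline reveals unauthorized changes.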
SEC has also developed a change request process and an enterprise-level change control board to review changes. However, SEC has not adequately implemented key configuration management controls over the information system components associated with the upgrade to Momentum. Specifically, it did not always document, evaluate, or approve changes to a system’s baseline. For example, it did not consistently document test plans; adequately document or approve changes to the requirements, design, and scripts; establish or maintain configuration baselines; or apply up-to-date patches on its database servers that support processing of financial data. In addition, SEC did not document and report on the status of configuration items as Momentum evolved, nor did it conduct configuration audits to determine the alignment between the actual system and the documentation describing it. Furthermore, SEC did not (1) develop a configuration management plan for Momentum, (2) assign a manager or team to conduct these activities, and (3) use adequate tools to implement the process. As a result, increased risk exists that authorized changes will not be made and unauthorized changes will be made to the Momentum system. SEC has made important progress in implementing its information security program. For example, SEC has provided individuals with training for incident handling and developed, documented, and implemented a policy on remedial action plans to ensure that deficiencies are mitigated in an effective and timely manner. However, a key reason for the information security weaknesses is that it has not effectively or fully implemented key program activities. 
Until all key elements of its information security program are fully and consistently implemented, SEC will not have sufficient assurance that new weaknesses will not emerge and that financial information and financial assets are adequately safeguarded from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction. FISMA requires the CIO to designate a senior agency information security officer who shall have information security duties as that official’s primary duty and head an office with the mission and resources to assist in ensuring agency compliance with the provisions of the act. This officer will be responsible for carrying out the CIO’s information security responsibilities, including developing and maintaining a departmentwide information security program, developing and maintaining information security policies and procedures, and providing training and oversight to security personnel. However, although SEC appointed an acting senior agency information security officer from April to July 2008, the position has been vacant for the past 8 months. According to an SEC official, a vacancy announcement has not yet been posted for this position. Without a senior security officer to provide direction for an agencywide security focus, SEC is at increased risk that its security program will not be adequate to ensure the security of its highly interconnected computer environment. FISMA and its implementing policies require agencies to develop, document, and implement periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems. 
The National Institute of Standards and Technology (NIST) also states that a risk assessment report should be presented as a systematic and analytical approach to assessing risk so that senior management will understand the risks and allocate resources to reduce and correct potential losses. SEC policy states that security risk assessment involves the identification and evaluation of IT security risks. This process identifies IT security-related risks to information and information systems, considers the probability of occurrence, and measures their potential impact. The SEC Office of IT Security Group is responsible for periodically reviewing the risk assessments to ensure that all aspects of risk and applicable IT security requirements have been adequately addressed. SEC did not provide full information for management oversight of risks associated with the Momentum application. For example, the SEC security testing and evaluation for Momentum identified numerous configuration management vulnerabilities that affect other areas such as access controls, separation of duties, and inappropriate administrative roles assigned to individuals. Several of these vulnerabilities in the security testing and evaluation were not reported in the risk assessment summary for the Momentum application for management attention. As a result, SEC management may not be fully aware of all risks or the magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support their operations and assets. 
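The risk-assessment process described above, weighing probability of occurrence against potential impact and making sure every significant finding reaches the management summary, can be sketched as follows. The 0-to-1 scales, threshold, and risk names are assumptions for illustration.

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Probability of occurrence times potential impact (0-1 scales assumed)."""
    return likelihood * impact

def unreported_high_risks(assessed: dict[str, tuple[float, float]],
                          summary: set[str],
                          threshold: float = 0.5) -> set[str]:
    """Risks scoring above the threshold that never reached the summary."""
    high = {name for name, (lik, imp) in assessed.items()
            if risk_score(lik, imp) > threshold}
    return high - summary
```

A check of this shape targets the gap the report identifies: vulnerabilities found during security testing and evaluation that were omitted from the risk assessment summary given to management.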
FISMA and its implementing policies require periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices performed with a frequency depending on risk, but no less than annually; this should include testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems. This type of oversight is a fundamental element of a security program because it demonstrates management’s commitment to the program, reminds employees of their roles and responsibilities, and identifies areas of noncompliance and ineffectiveness. Analyzing the results of security reviews provides security specialists and business managers with a means of identifying new problem areas, reassessing the appropriateness of existing controls, and identifying the need for new controls. However, SEC did not sufficiently conduct periodic testing and evaluation of controls. For example, SEC did not test and evaluate the effectiveness of security controls for the general support system supporting Momentum and EDGAR in fiscal year 2008. In addition, the scope and depth of security testing and evaluation that were performed were not comprehensive and often did not identify control weaknesses. To illustrate, SEC did not test or assess the effectiveness of a key subsystem used to develop financial statements, and an independent contractor tested only 4 of 65 security roles in Momentum, severely limiting the scope of the testing. In addition, control tests conducted by SEC on Momentum did not identify vulnerabilities in the following controls: (1) configuration management, (2) separation of duties, (3) audit and monitoring, and (4) access controls; in contrast, our tests identified vulnerabilities in these controls.
As a result, there is heightened risk that SEC cannot be assured that Momentum and EDGAR meet requirements and perform as intended. According to NIST, security certification and accreditation of information systems and subsystems are important activities that support a risk management process and are an integral part of an agency’s information security program. Security certification consists of conducting a security control assessment and developing the security documents. Security accreditation is the official management decision given by a senior agency official to authorize the operation of an information system and to explicitly accept the risk it may present to agency operations, agency assets, or individuals based on the implementation of an agreed-upon set of security controls. Required by Office of Management and Budget (OMB) Circular A-130, appendix III, security accreditation provides a form of quality control and challenges managers and technical staffs at all levels to implement the most effective security controls possible on an information system, given mission requirements and technical, operational, and cost/schedule constraints. After certification, a security accreditation package with security documents is provided to the authorizing official with the essential information for the official to make a credible, risk-based decision on whether to authorize operation of the information system. The security accreditation package includes the security plan, risk assessment, contingency plan, security assessment report, and plan of action and milestones. SEC did not certify and accredit a key intermediary subsystem that supports the production of its financial statements. In preparing its financial statements, SEC regularly used this intermediary subsystem to process transactions before loading the financial data into the Momentum application.
The subsystem encompassed (1) an application tool to handle transactions of disgorgement data between the Phoenix and Momentum applications; (2) spreadsheets to record, calculate, maintain, and report financial transactions from various accounts; and (3) a third-party tool used for manipulating, sorting, and merging financial data. SEC did not certify or accredit the subsystem or include it as part of the security certification and accreditation process for Phoenix and Momentum. For example, the subsystem was not described in a security plan, risk assessment, contingency plan, security assessment report, or plan of action and milestones. Without certification and accreditation of the intermediary subsystem, possible security weaknesses may go undetected and management may not be alerted to potential vulnerabilities. SEC has made progress in correcting or mitigating previously reported weaknesses. However, information security weaknesses—both old and new—continue to impair the agency’s ability to ensure the confidentiality, integrity, and availability of financial and sensitive information. These weaknesses represent a significant deficiency in internal controls over the information systems and data used for financial reporting. A key reason for these weaknesses is that the agency has not yet fully implemented critical elements of its agencywide information security program. Until SEC (1) mitigates known information security weaknesses in access controls and other information system controls and (2) fully implements a comprehensive agencywide information security program that includes filling the security officer position, adequately reporting risks, conducting effective system security tests, and certifying and accrediting an intermediary subsystem, its financial information will remain at increased risk of unauthorized disclosure, modification, or destruction, and its management decisions may be based on unreliable or inaccurate information.
To assist the commission in improving the implementation of its agencywide information security program, we recommend that the SEC Chairman direct the CIO to take the following four actions: designate a senior agency information security officer who will be responsible for managing SEC’s information security program; provide full information for management oversight of information security risks; conduct comprehensive periodic testing and evaluation of the effectiveness of security controls for the general support system and key financial applications; and certify and accredit subsystems that support the production of SEC’s financial statements. In a separate report with limited distribution, we are also making 32 recommendations to enhance SEC’s access controls and configuration management practices. In providing written comments on a draft of this report, the SEC Chairman agreed with our recommendations and reported that the agency is on track to address our new findings and to complete remediation of prior year findings. She stated that strong internal controls are one of SEC’s highest priorities and that it is committed to proper stewardship of the information entrusted to it by the public. The Chairman’s written comments are reprinted in appendix II. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Banking, Housing, and Urban Affairs; the Senate Committee on Homeland Security and Governmental Affairs; the House Committee on Financial Services; and the House Committee on Oversight and Government Reform. We are also sending copies to the Secretary of the Treasury, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499.
We can also be reached by e-mail at [email protected] or [email protected]. Contacts for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made key contributions to this report are listed in appendix III. The objectives of our review were (1) to determine the status of the Securities and Exchange Commission’s (SEC) actions to correct or mitigate previously reported information security weaknesses and (2) to determine whether controls over key financial systems were effective in ensuring the confidentiality, integrity, and availability of financial and sensitive information. This review was performed for the purpose of supporting the opinion developed during our audit of SEC’s internal controls over the preparation of its 2008 financial statements. To determine the status of SEC’s actions to correct or mitigate previously reported information security weaknesses, we identified and reviewed its information security policies, procedures, practices, and guidance. We reviewed prior GAO reports to identify previously reported weaknesses and examined the commission’s corrective action plans to determine which weaknesses it had reported were corrected. For those instances where SEC reported that it had completed corrective actions, we assessed the effectiveness of those actions by reviewing the appropriate documents and interviewing the appropriate officials. To determine whether controls over key financial systems were effective, we tested the effectiveness of selected information security controls. We concentrated our evaluation primarily on the controls for financial applications, enterprise database applications, and network infrastructure—Momentum; Phoenix; Electronic Data Gathering, Analysis, and Retrieval (EDGAR); Fee Momentum; and the general support system—that directly or indirectly support the processing of material transactions reflected in the agency’s financial statements. 
Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. Using National Institute of Standards and Technology (NIST) standards and guidance and SEC’s policies, procedures, practices, and standards, we evaluated controls by testing the complexity and expiration of password settings on selected servers to determine if strong password management was enforced; analyzing users’ system authorizations to determine whether users had more permissions than necessary to perform their assigned functions; observing methods for providing secure data transmissions across the network to determine whether sensitive data were being encrypted; observing whether system security software was logging successful and unsuccessful access attempts; testing and observing physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft; inspecting key servers and workstations to determine whether critical patches had been installed or were up to date; examining access privileges to determine whether incompatible functions were segregated among different individuals; and observing end user activity pertaining to the process of preparing SEC financial statements.
Using the requirements identified by the Federal Information Security Management Act (FISMA), the Office of Management and Budget (OMB), and NIST, we evaluated SEC’s implementation of its security program by reviewing SEC’s risk assessment process and risk assessments for three key systems that support the preparation of financial statements to determine whether risks and threats were documented consistent with federal guidance; analyzing SEC’s policies, procedures, practices, and standards to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; analyzing security plans to determine if management, operational, and technical controls were in place or planned and that security plans were updated; examining training records for personnel with significant security responsibilities to determine if they received training commensurate with those responsibilities; analyzing security testing and evaluation results for three key systems to determine whether management, operational, and technical controls were tested at least annually and based on risk; examining remedial action plans to determine whether they addressed vulnerabilities identified in security testing and evaluations; and examining contingency plans for three key systems to determine whether those plans had been tested or updated. We also discussed, with key security representatives and management officials, whether information security controls were in place, adequately designed, and operating effectively. We conducted this audit from July 2008 to March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, David B. Hayes and William F. Wadsworth (Assistant Directors), Angela M. Bell, Mark J. Canter, Kirk J. Daubenspeck, Patrick R. Dugan, Mickie E. Gray, Sharon S. Kitrell, Lee A. McCracken, Stephanie Santoso, Duc M. Ngo, Tammi L. Nguyen, Henry I. Sutanto, Edward R. Tekeley, and Jayne L. Wilson made key contributions to this report.

In carrying out its mission to ensure that securities markets are fair, orderly, and efficiently maintained, the Securities and Exchange Commission (SEC) relies extensively on computerized systems. Effective information security controls are essential to ensure that SEC's financial and sensitive information is protected from inadvertent or deliberate misuse, disclosure, or destruction. As part of its audit of SEC's financial statements, GAO assessed (1) the status of SEC's actions to correct previously reported information security weaknesses and (2) the effectiveness of SEC's controls for ensuring the confidentiality, integrity, and availability of its information systems and information. To do this, GAO examined security policies and artifacts, interviewed pertinent officials, and conducted tests and observations of controls in operation. SEC has made important progress toward correcting previously reported information security control weaknesses. Specifically, it has corrected or mitigated 18 of 34 weaknesses previously reported as unresolved at the time of our prior audit. For example, SEC has adequately validated electronic certificates from connections to its network, physically secured the perimeter of its operations center and put in place a process to monitor unusual and suspicious activities, and removed network system accounts and data center access rights from separating employees.
In addition, the commission has made progress in improving its information security program. To illustrate, it has developed, documented, and implemented a policy on remedial action plans to ensure that deficiencies are mitigated in an effective and timely manner, and provided individuals with training for incident handling. Nevertheless, SEC has not completed actions to correct 16 previously reported weaknesses. For example, it did not adequately document access privileges granted to users of a key financial application, and did not always implement patches on vulnerable workstations and enterprise database servers. In addition to the 16 previously reported weaknesses that remain uncorrected, GAO identified 23 new weaknesses in controls intended to restrict access to data and systems, as well as weaknesses in other information security controls, that continue to jeopardize the confidentiality, integrity, and availability of SEC's financial and sensitive information and information systems. The commission has not fully implemented effective controls to prevent, limit, or detect unauthorized access to computing resources. For example, it did not always (1) consistently enforce strong controls for identifying and authenticating users, (2) sufficiently restrict user access to systems, (3) encrypt network services, (4) audit and monitor security-relevant events for its databases, and (5) physically protect its computer resources. SEC also did not consistently ensure appropriate segregation of incompatible duties or adequately manage the configuration of its financial information systems. A key reason for these weaknesses is that the commission has not yet fully implemented its information security program to ensure that controls are appropriately designed and operating as intended. Specifically, SEC has not effectively or fully implemented key program activities.
For example, it has not (1) filled the vacancy for a senior agency information security officer, (2) fully reported or assessed risks, (3) sufficiently tested and evaluated the effectiveness of its information system controls, and (4) certified and accredited a key intermediary subsystem. Although progress has been made, significant and preventable information security control deficiencies create continuing risks of the misuse of federal assets, unauthorized modification or destruction of financial information, inappropriate disclosure of other sensitive information, and disruption of critical operations.
Assistants-at-surgery, who serve as members of surgical teams, perform tasks under the direction of surgeons and aid them in conducting operations. These tasks may include making initial incisions (“opening”), exposing the surgical site (“retracting”), stemming blood flow (“hemostasis”), surgically removing veins and arteries to be used as bypass grafts (“harvesting”), reconnecting tissue (“suturing”), and completing the operation and reconnecting external tissue (“closing”). Some of these tasks, like retraction, are relatively simple, while others, such as harvesting, are more complex. An assistant-at-surgery may perform one or more simple or complex tasks during an operation. Tasks performed by others on the surgical team differ from those performed by assistants-at-surgery. Scrub staff work within the sterile field—the area within the operating room that is kept free from harmful microorganisms—passing instruments, sponges, and other items directly to the surgeon and assistant-at-surgery, who also work within the sterile field. Circulators work outside the sterile field, responding to the needs of team members within the sterile field. Anesthesiologists, or anesthetists, who administer and monitor anesthesia, painkillers, and other drugs, are also present during an operation. Decisions by a hospital or surgeon to use an assistant-at-surgery depend on the complexity of the operation and the medical condition of the patient. Physician associations, such as the ACS and the American Society of General Surgeons, maintain that the surgeon should be responsible for determining if an assistant-at-surgery is needed, although some hospitals require the use of an assistant for certain surgical procedures. Hospitals that employ assistants-at-surgery may assign them to a procedure without consulting the surgeon performing the procedure.
Since 1994, the ACS, with other surgical specialty organizations, has conducted studies to determine which surgical procedures require physicians as assistants-at-surgery. These studies classify surgical procedures as “almost always,” “sometimes,” or “almost never” requiring an assistant-at-surgery. The 2002 study classifies approximately 5,000 surgical procedures, about 1,750 of which are designated as “almost always” requiring a physician to serve as an assistant-at-surgery. A small number of surgical procedures have accounted for the majority of the assistant-at-surgery services paid for under the Medicare physician fee schedule: In 2002, 100 procedures accounted for almost 75 percent of the assistant-at-surgery services that Medicare paid under the physician fee schedule. ACS designated 81 of these procedures as “almost always” requiring a physician as an assistant-at-surgery, and the remaining 19 procedures were designated as “sometimes” requiring a physician as an assistant. Medicare pays for medically necessary services, including those performed by assistants-at-surgery, provided to eligible elderly and disabled patients by health professionals and institutions meeting certain requirements. Part A, or Hospital Insurance, pays for inpatient hospital care, care provided by certain other health care facilities, and some home health care. Part B, or Supplementary Medical Insurance, includes payment for the services and items provided by physicians, certain other nonphysician health professionals, suppliers, outpatient hospital departments, and home health care agencies. Under part A, Medicare pays hospitals for assistant-at-surgery services through the hospital inpatient PPS. A fixed payment is made for all the inpatient hospital services, including assistant-at-surgery services, that a hospital provides to a beneficiary with a given diagnosis or receiving a particular type of surgery.
Payments under the hospital inpatient PPS reflect the average bundle of services that beneficiaries with a particular diagnosis receive as inpatients in similar hospitals. The hospital’s payment for a bundle of services is the same regardless of whether an assistant-at-surgery is used or who provides the assistant-at-surgery services. Prospective payment systems, such as the hospital inpatient PPS, are designed to promote efficiency: because the payment for a particular bundle of services is almost always the same, regardless of the services a particular patient receives, hospitals are discouraged from providing unnecessary services. Providing additional services would not increase their payments. Consequently, PPS payments to the hospital are sometimes less and sometimes more than the cost of providing care. Payments are also made under the hospital inpatient PPS to teaching hospitals for providing GME to the residents employed by the hospital. In 2001, about 20 percent of the approximately 5,800 U.S. hospitals were considered teaching hospitals. In 2003, surgical residents comprised about 20 percent of all residents at these hospitals. There were about 7,500 residents in general surgery and about 13,000 more surgical residents training for specialties, such as orthopedics, all of whom were required to serve as assistants-at-surgery as part of their training. In addition to these surgical residents, some nonsurgical residents have surgical rotations during which they serve as assistants-at-surgery. Medicare makes part B payments to assistants-at-surgery under the physician fee schedule when assistant services are performed by a physician or by a nonphysician health professional authorized to receive such payment. In 2002, these payments totaled about $158 million, less than 2 percent of the $10.5 billion Medicare paid to surgeons for surgical procedures that year.
Medicare also makes global payments to surgeons under the physician fee schedule that cover the surgery and some pre- and postoperative services that the surgeons and their employees perform. Assistant-at-surgery services are not included in this bundle of services. Generally, the amount Medicare pays under the physician fee schedule is based on the resources needed to perform a service: the physician’s time and skill, practice expenses that include the costs of staff, equipment, and supplies, and the cost of liability insurance. While a surgeon’s global fee for a surgical procedure is set to reflect the resources required to perform the service, payments under the physician fee schedule for assistant-at-surgery services are not; they are calculated as a fixed percentage of the surgeon’s global fee. The percentage varies depending on the profession of the assistant-at-surgery. The Medicare physician fee schedule pays physicians more than nonphysician health professionals for assistant-at-surgery services (see table 1). Medicare sets requirements that various health care institutions, suppliers, and professionals must meet to be paid by the program. Institutions, such as hospitals, must meet conditions of participation (CoP)—health and safety rules used to ensure quality of care. Until 1986, HCFA specified some requirements for assistant-at-surgery services in its hospital CoP. Hospitals were required to have physicians serve as assistants-at-surgery for procedures “with unusual hazard to life,” while “nurses, aides, or technicians having sufficient training to properly and adequately assist” could assist at “lesser operations.” In a broad revision of the hospital CoP in 1986, the agency eliminated these requirements: it said the purpose of the revisions to the surgical services section, which had included the assistant-at-surgery requirements, was to “delete the overly prescriptive details” about the operation of surgical services.
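The rule described above, under which the assistant's payment is a fixed fraction of the surgeon's global fee rather than a resource-based amount, can be sketched in a few lines. Because the report's table 1 is not reproduced in this excerpt, the percentages below are illustrative assumptions, not figures taken from the report.

```python
# Sketch of the assistant-at-surgery payment rule: the assistant's
# fee-schedule payment is a fixed percentage of the surgeon's global fee,
# with the percentage depending on the assistant's profession.
# Both rates below are assumed for illustration only.

ASSISTANT_RATE = {
    "physician": 0.16,            # assumed: 16% of the surgeon's global fee
    "nonphysician": 0.16 * 0.85,  # assumed: 85% of the physician rate
}

def assistant_payment(surgeon_global_fee: float, profession: str) -> float:
    """Return the physician fee schedule payment for an assistant-at-surgery."""
    return round(surgeon_global_fee * ASSISTANT_RATE[profession], 2)

print(assistant_payment(1000.00, "physician"))     # 160.0
print(assistant_payment(1000.00, "nonphysician"))  # 136.0
```

Note that under this scheme the assistant's payment moves with the surgeon's fee, not with the resources the assistant actually uses, which is the mismatch the report highlights.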
CMS retains requirements for other surgical team members, including scrub and circulating staff. CMS also establishes regulatory requirements for the health professions eligible to receive payment under the Medicare physician fee schedule. Members of those professions can be paid for providing covered services, including assistant-at-surgery services. Although CMS’s rules include the minimum requirements that these professionals must meet to receive payment for services, there are no specific requirements to receive assistant-at-surgery payments in Medicare regulations. General requirements include education, licensure, and certification; no surgical education or experience is mandated. For example, physician assistants must graduate from an accredited physician assistant education program, pass the National Commission on Certification of Physician Assistants certification examination, and be licensed to practice as a physician assistant, but do not have to have experience as an assistant-at-surgery. Members of a wide range of health professions serve as assistants-at-surgery. Hospitals employ residents, international medical graduates, and all the types of nonphysician health professionals who perform the role. Hospital employees likely serve as assistants-at-surgery for a majority of the procedures for which the ACS says an assistant is “almost always” necessary. The number of assistant-at-surgery services performed by physicians and paid for under the physician fee schedule has declined, while the number of such services performed by nonphysician health professionals eligible to receive payment under the physician fee schedule has increased. Physicians, residents in training for licensure or board certification in a physician specialty, several different kinds of nurses, and members of several other health professions serve as assistants-at-surgery (see table 2).
Surgical associations state that surgeons or residents are preferred as assistants-at-surgery, but surgeons are often not available to assist at surgery. Hospitals employ the gamut of health professionals who serve as assistants-at-surgery. Some hospitals tend to hire assistants-at-surgery from a particular health profession, sometimes offering training courses in assistant services for that profession, to ensure that the hospital has a sufficient number of assistants. To encourage surgeons to use their operating rooms, hospitals may (1) employ assistants-at-surgery, eliminating the need for the surgeons to hire their own assistants, or (2) arrange for health professionals in independent practice to serve as assistants. While teaching hospitals use residents as assistants-at-surgery, these hospitals may also hire nonphysician health professionals to perform the role. In a recent survey of neurosurgery residency program directors, nearly all cited the need to hire nonphysician health professional staff, such as physician assistants, in response to the weekly 80-hour work limit for residents. Teaching hospitals with other surgical specialty programs may also need to hire nonphysician health professionals as assistants-at-surgery because of the limit on resident hours. Because hospitals are not required to keep records on the use of assistants-at-surgery to receive Medicare payment under the inpatient PPS, the number and cost of such services provided by all hospital employees are unknown. Still, hospital employees likely serve as assistants-at-surgery for the majority of the surgeries performed on Medicare patients. In 2002, Medicare made payments under the physician fee schedule to assistants-at-surgery about 36 percent of the time that the program made payments to surgeons for the surgical procedures that ACS designated in its most recent study as “almost always” requiring an assistant-at-surgery.
Since the remaining 64 percent of those surgical procedures were likely to have had assistants-at-surgery, hospital employees would likely have performed this role. In its final regulation revising the physician fee schedule for 2000, HCFA relied upon the results of the American Hospital Association’s (AHA) National Hospital Panel Survey that found that only 11 percent of responding hospitals said it was a regular practice for physicians to bring their own staff to the hospital to serve as assistants-at-surgery or to perform other functions. A representative of the AHA told us that most assistants-at-surgery, including residents and nonphysician staff, are hospital employees. The percentage of assistant-at-surgery services paid to physicians under the physician fee schedule has declined, and the percentage of these services paid to nonphysician health professionals has increased, particularly since enactment of the Balanced Budget Act of 1997 (BBA). The act raised the amount paid for assistant-at-surgery services to these nonphysician health professionals under the physician fee schedule, extended billing by clinical nurse specialists and nurse practitioners to urban areas (such billing had been limited to rural areas), and allowed physician assistants to contract with surgeons to be an assistant without having to be employees of the surgeon. The number of assistant-at-surgery services paid for under the physician fee schedule and provided by nonphysician health professionals increased more than 200 percent from 1997 through 2002, while the number of services provided by physicians serving as assistants declined about 23 percent. During this period, the percentage of Medicare-paid assistant-at-surgery services performed by nonphysician health professionals increased by 25 percentage points (see fig. 1). The amount paid to nonphysicians for these services has also increased. Prior to 1987, nonphysicians could not be paid as assistants-at-surgery.
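To see how a roughly 200 percent rise in nonphysician services and a roughly 23 percent decline in physician services can translate into a shift of about 25 percentage points in the nonphysician share, a short back-of-the-envelope calculation helps. The 1997 service counts below are invented solely for illustration; only the growth rates come from the text.

```python
# Hypothetical 1997 service counts (invented for illustration).
nonphysician_1997, physician_1997 = 150_000, 850_000

# Apply the growth rates cited in the report.
nonphysician_2002 = nonphysician_1997 * 3.0    # +200 percent
physician_2002 = physician_1997 * (1 - 0.23)   # -23 percent

share_1997 = nonphysician_1997 / (nonphysician_1997 + physician_1997)
share_2002 = nonphysician_2002 / (nonphysician_2002 + physician_2002)

print(f"nonphysician share, 1997: {share_1997:.1%}")  # 15.0%
print(f"nonphysician share, 2002: {share_2002:.1%}")  # 40.7%
print(f"shift: {(share_2002 - share_1997) * 100:.1f} percentage points")
```

With these assumed starting counts the shift works out to roughly 26 percentage points, in line with the 25-point change the report attributes to figure 1.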
In 1997, nonphysicians were paid only $16 million for assistant-at-surgery services; in 2002, they were paid about $54 million. In comparison, physicians were paid $295 million for assistant-at-surgery services in 1986; $166 million in 1997; and $104 million in 2002. There is no widely accepted set of standards for the education and experience required to serve as an assistant-at-surgery. The health care professions whose members provide assistant-at-surgery services have varying educational requirements. No state licenses all the types of health professionals who serve as assistants-at-surgery. And the licenses they issue typically attest to the completion of broad-based health care education, making them of limited value in determining which health professionals have the education and experience to serve as an assistant-at-surgery. Furthermore, the certification programs developed by the various nonphysician health professional groups whose members assist at surgery differ. We found that there was insufficient information about the quality of care provided by assistants-at-surgery—either generally or by members of specific health professions—to assess the adequacy of the requirements for a particular profession. The health professions whose members serve as assistants-at-surgery have varying educational requirements (see table 3). For example, a licensed practical nurse typically completes a 1-year educational program, while a clinical nurse specialist must have a master of science degree in nursing. In some cases, experience can substitute for education: orthopedic physician assistants may have associate degrees or certificates from military or nondegree programs or 5 years of experience working for an orthopedic surgeon.
While state licenses for health professionals, including those eligible for payment as assistants-at-surgery under the physician fee schedule, typically have “scopes of practice” that include assistant-at-surgery services, education and experience as an assistant are not necessarily required to obtain a license: the licenses for these health professions attest to the completion of broad-based health care education, which may not include courses in surgery. No state licenses all the health professions whose members assist at surgery in its jurisdiction. For example, orthopedic physician assistants and surgical assistants are licensed in only a few states. Only one state, Texas, has a specific assistant-at-surgery license. Members of different health professions may qualify for this license, which requires surgical education and experience. Nevertheless, a license is not required to serve as an assistant-at-surgery in Texas. Certification programs for assistants-at-surgery generally require completion of a certain level of education or experience and passage of an examination. Each certification program created by a group of nonphysician health professionals for its members who serve as assistants-at-surgery has different requirements (see table 4). Certification programs for some nonphysician health professions not eligible for payment under the physician fee schedule are for a wide range of surgical services; others are specific to a particular type of surgery. For example, a CRNFA, in addition to being licensed as a registered nurse and earning a bachelor’s degree in nursing, must obtain certification as an operating room nurse, complete an approved program, have 2,000 hours of experience as an assistant-at-surgery, and pass an examination. 
For a surgical technologist to receive certification as an assistant-at-surgery, he/she must have a surgical technologist certification, complete an approved program or have 2 years of experience as an assistant, and pass the examination. Certifications for those who are eligible for payment under the physician fee schedule as an assistant-at-surgery are typically for a broad range of services and are not specifically surgery-related. For example, the American Nurses Credentialing Center awards certifications to nurse practitioners for acute, adult, family, gerontological, pediatric, adult psychiatric and mental health, and family psychiatric and mental health care. While some national physician and accreditation organizations say assistants-at-surgery should have to meet some requirements, there is no consensus about what those requirements should be. For example, ACS has stated that when surgeons or residents are unavailable to serve as assistants-at-surgery, nonphysician health professionals should be allowed to perform the role if they meet the “national standards” for their health profession or have “additional specialized training.” Similarly, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), a private organization that accredits health care organizations, including hospitals, requires hospitals to credential their staff (i.e., establish requirements, such as licensure, certification, and experience for physicians and certain nonphysician health professionals) and ensure that those requirements are used when personnel decisions are made. But JCAHO does not suggest the type or length of education or experience to be used in credentialing hospital staff who serve as assistants-at-surgery. We found little evidence about the quality of care provided by assistants-at-surgery.
Our February 2003 search of relevant literature maintained by the National Library of Medicine found only six articles dealing with the quality of care provided by assistants-at-surgery. None of the articles compares the quality of assistant-at-surgery services provided by one nonphysician health profession with that provided by another nonphysician health profession or physicians, and only one deals specifically with the influence of assistants on surgical outcomes. There are three flaws in Medicare’s policies for paying assistants-at-surgery that prevent the payment system from meeting the program’s goals of making appropriate payment for medically necessary services by qualified providers. First, because Medicare pays for assistant-at-surgery services under both the hospital inpatient PPS and the physician fee schedule, and hospital payments for surgical care are not adjusted when an assistant receives payment under the physician fee schedule, Medicare may be paying too much for some hospital surgical care. Second, paying a health professional under the Medicare physician fee schedule to be an assistant-at-surgery, instead of including this payment in an all-inclusive payment, gives neither the hospital nor the surgeon an incentive to use an assistant only when one is medically necessary. Third, the distinctions between those health professionals eligible for payment as an assistant-at-surgery under the physician fee schedule and those who are not eligible are not based on surgical education or experience as an assistant. Criteria for determining who should be paid as assistants-at-surgery under the physician fee schedule do not exist. However, hospitals are responsible under health and safety rules to provide quality care for their patients.
Medicare’s policy of paying hospitals for the services associated with inpatient surgical care that may include assistant-at-surgery services and also paying physicians and certain nonphysician health professionals for those services is flawed. When Medicare pays under the hospital inpatient PPS and under the physician fee schedule for assistant-at-surgery services delivered to a particular patient, Medicare may pay too much for the assistant services because the hospital is not paid less when the assistant receives payment under the physician fee schedule. In addition, a hospital that uses an assistant-at-surgery who is eligible for payment under the physician fee schedule has a financial advantage in the form of lower labor costs over a hospital that uses assistants who cannot be paid under the physician fee schedule. Given the discretion that hospitals and surgeons have in determining when and how an assistant-at-surgery is used, it is especially important that Medicare’s payment policy create incentives to help ensure that assistant services are provided for Medicare patients only when medically necessary. Allowing physician fee schedule payments to certain assistants-at-surgery, however, creates an incentive for hospitals to use them, rather than those who cannot be paid under the fee schedule. Because neither the hospital nor the surgeon incurs a cost when an assistant-at-surgery is paid under the physician fee schedule, neither has a financial incentive to use an assistant only when one is necessary. The lack of this incentive is of concern because assistant-at-surgery services receive little review to determine the medical necessity of the services. 
A 2001 report by the Department of Health and Human Services Office of Inspector General found that most contractors used by Medicare to pay for part B services do not have any mechanism to ensure that assistant-at-surgery requests for payment for nonphysician health professionals are reviewed for medical necessity before they are paid. Medicare routinely requires submission of documentation of medical necessity for medical review for only 1 percent of assistant-at-surgery services paid under the physician fee schedule. Because the requirements for those authorized to be paid as assistants-at-surgery under the Medicare physician fee schedule do not include assistant-at-surgery education or experience, payments can be made to assistants with no such education or experience. For example, about 23 percent of physician assistants work in surgical specialties. Other physician assistants working in nonsurgical specialties, however, may be paid as assistants-at-surgery under the Medicare physician fee schedule, and their only surgical experience may be a 6-week surgical rotation. On the other hand, nonphysician health professionals, such as surgical technologists, CRNFAs, and orthopedic physician assistants, all of whom have certification programs requiring education and experience as an assistant-at-surgery, cannot be paid by Medicare for their services under the physician fee schedule. One way to address a concern associated with the physician fee schedule payments for assistants-at-surgery is to expand the number of nonphysician health professions eligible for payment. But this would not ensure that only those with the appropriate education and experience serve as assistants-at-surgery unless CMS also sets standards for all those who serve as assistants. There is no consensus, however, on what such standards should include.
Bundling all payments for assistants-at-surgery into either the inpatient hospital PPS or the surgeon’s global fee would address the flaws of the current payment system. The possibility of paying too much for assistant-at-surgery services would be eliminated because Medicare would make only one payment—to either the hospital or the surgeon—for the service. The hospital or surgeon would have a financial incentive to use the most appropriate assistant-at-surgery—and to use one only when necessary—because the payment would be the same regardless of whether an assistant was used. The lack of a relationship between the nonphysician health professionals eligible for assistant-at-surgery payments under the physician fee schedule and their education and experience would be moot because payments would no longer be made to individuals performing the role; payments would be made, as part of a larger payment for a bundle of services, to hospitals or surgeons, who would have the responsibility to determine the education and experience that an assistant-at-surgery needs and when an assistant is needed. Folding payments for assistant-at-surgery services into inpatient PPS payments has some advantages that would not accrue if payments were folded into the surgeon’s global fee. Hospitals would continue to have incentives to use assistants-at-surgery when they are necessary, and to use the most appropriate assistant. Hospitals are already responsible—under the hospital CoP—for ensuring the health and safety of their patients and that necessary services are provided, including assistant-at-surgery services. Most hospitals already have credentialing processes for their employees. Also, since hospitals likely employ most assistants-at-surgery, limiting payments for assistant services to those made under the inpatient PPS would disrupt the employment relationships for far fewer assistants than would be the case if payment was made to surgeons.
There is precedent for Congress approving legislation that no longer allows a service to be paid for separately under part B, but instead requires that the service be included in a bundle of services under part A. In 1997, Congress passed legislation requiring that virtually all kinds of services or items furnished to beneficiaries residing in skilled nursing facilities (SNF), which had been paid for separately under part B, instead be included in a bundle of services paid for under part A. Prior to implementation of the provision, SNFs could permit a nonphysician health professional or supplier to seek payment under part B for ancillary services or items furnished directly to SNF residents, as long as the SNF did not include the service or item in its part A bill. The legislation, however, prevents this “unbundling” by including in Medicare SNF PPS payments ancillary services or items a SNF resident may require that previously had been paid under part B. Bundling assistant-at-surgery services into the package of services covered by the surgeon’s global payment based on the Medicare physician fee schedule has significant drawbacks. First, because the amount paid under the inpatient hospital PPS for assistants-at-surgery is unknown, the total amount to be added to the physician fee schedule for providing assistants is unknown. Second, a payment amount for assistant-at-surgery services would have to be determined for each surgical procedure. Since data are not collected on how often each surgeon uses assistants-at-surgery for each surgical procedure, the bundled payment would presumably include an allotment for the expected average cost of assistants for all surgeons performing the procedure. Using this approach, surgeons with an unusually high number of procedures requiring assistants would be paid too little, while those with an unusually low number of procedures requiring assistants would be paid too much.
In addition, a surgeon would have a financial incentive to use an assistant-at-surgery less frequently for surgical procedures for which ACS says that an assistant may be needed, even when the condition of the beneficiary indicates that an assistant would be desirable. Because there is a difference in costs to a surgeon depending on whether an assistant-at-surgery is used, a surgeon’s bundled payment amount could be adjusted when an assistant is used. Doing so, however, would provide no financial incentive for surgeons to use an assistant-at-surgery only when one is medically necessary. Decisions to use an assistant-at-surgery should not be influenced by payment; they should be based on medical necessity. The majority of assistants-at-surgery are likely employed by hospitals, where the inpatient hospital PPS pays for their services. If Congress were to consolidate Medicare physician fee schedule payments for assistant-at-surgery services into the inpatient hospital PPS, this would give hospitals an incentive to use assistants only when they are necessary. Meanwhile, the hospital CoP would continue to give hospitals an incentive to assure that the most appropriate assistants-at-surgery are used as part of their responsibility to provide quality care for their patients. Paying for assistants under the physician fee schedule provides no such incentive. We suggest that Congress may wish to consider consolidating all Medicare payments for assistant-at-surgery services under the hospital inpatient prospective payment system. We received comments on a draft of this report from CMS, which agreed that payment policy for assistants-at-surgery could be improved. CMS noted that it would be helpful to describe the ongoing review process that CMS uses to assign relative values to physician fee schedule services. 
However, as we state in this report, assistants-at-surgery are not paid on the basis of the resources they use to perform their work, but are instead paid a percentage of the amount paid to the surgeon. CMS also discussed several details related to implementing payment changes for assistants-at-surgery. Addressing these points was beyond the scope of this report. CMS’s comments appear in appendix II. In addition, we obtained oral comments on a draft of this report from representatives of the American Medical Association, the American College of Surgeons, the American Society of General Surgeons, the American Association of Orthopaedic Surgeons, the Society of Thoracic Surgeons, the American Academy of Nurse Practitioners, the American Academy of Physician Assistants, the Association of periOperative Registered Nurses, and the American Hospital Association. We have modified the report, as appropriate, in response to their comments. We are sending copies of this report to the Acting Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7101. Lisanne Bradley and Michael Rose were major contributors to this report. | Medicare pays for assistant-at-surgery services under both the hospital inpatient prospective payment system and the physician fee schedule. Payments under the physician fee schedule are limited to a few health professions. In 2001, Congress directed GAO to report on the potential impact on the Medicare program of allowing physician fee schedule payments to Certified Registered Nurse First Assistants for assistant-at-surgery services.
This report examines: (1) who serves as an assistant-at-surgery, (2) whether health professionals who perform the role must meet a uniform set of professional requirements, and (3) whether Medicare's payment policies for assistants-at-surgery are consistent with the goals of the program and, if not, whether there are alternatives that would help attain those goals. GAO analyzed information provided by physician and other health professional associations and Medicare payment data. Members of a wide range of health professions serve as assistants-at-surgery, including physicians, residents in training for licensure or board certification in a physician specialty, several different kinds of nurses, and members of several other health professions. Hospitals employ all the types of nonphysician health professionals who perform the role. Hospital employees likely serve as assistants-at-surgery for a majority of the procedures for which the American College of Surgeons says an assistant is "almost always" necessary. The number of assistant-at-surgery services performed by physicians and paid under the Medicare physician fee schedule has declined, while the number of such services performed by nonphysician health professionals eligible to receive payment under the physician fee schedule has increased. There is no widely accepted set of uniform requirements for experience and education that health professionals who serve as assistants-at-surgery must meet. The health professions whose members provide assistant-at-surgery services have varying educational requirements. No state licenses all the health professionals who serve as assistants-at-surgery. Furthermore, the certification programs developed by the various nonphysician health professional groups whose members assist at surgery differ.
GAO found that there was insufficient information about the quality of care provided by assistants-at-surgery generally, or by a specific type of health professional, to assess the adequacy of the requirements for members of a particular profession to perform the role. There are three flaws in Medicare's policies for paying assistants-at-surgery that prevent the payment system from meeting the program's goals of making appropriate payment for medically necessary services by qualified providers. First, because Medicare pays for assistant-at-surgery services under both the hospital inpatient prospective payment system and the physician fee schedule, and hospital payments for surgical care are not adjusted when an assistant receives payment under the physician fee schedule, Medicare may be paying too much for some hospital surgical care. Second, paying a health professional under the physician fee schedule to be an assistant-at-surgery, instead of including this payment in an all-inclusive payment, gives neither the hospital nor the surgeon an incentive to use an assistant only when one is medically necessary. Third, the distinctions between those health professionals eligible for payment as an assistant-at-surgery under the physician fee schedule and those who are not eligible are not based on surgical education or experience as an assistant. Criteria for determining who should be paid as assistants-at-surgery under the physician fee schedule do not exist. However, hospitals are responsible under health and safety rules for providing quality care to their patients.
The tax gap is an estimate of the difference between the taxes—including individual income, corporate income, employment, estate, and excise taxes—that should have been timely and accurately paid and what was actually paid for a specific year. The estimate is an aggregate of estimates for the three primary types of noncompliance: (1) underreporting of tax liabilities on tax returns; (2) underpayment of taxes due from filed returns; and (3) nonfiling, which refers to the failure to file a required tax return on time or at all. Estimates for each type of noncompliance include estimates for some or all of the five types of taxes that IRS administers. IRS develops its tax gap estimates by measuring the rate of taxpayer compliance—the degree to which taxpayers fully and timely complied with their tax obligations. That rate is then used, along with other data and assumptions, to estimate the dollar amount of taxes not timely and accurately paid. For instance, IRS recently estimated that for tax year 2001, from 83.4 percent to 85 percent of owed taxes were paid voluntarily and timely, which translated into an estimated gross tax gap of $312 billion to $353 billion in taxes that should have been paid but were not. IRS also estimates the amount of the unpaid taxes that it will recover through enforcement and other actions and subtracts that to estimate the net annual tax gap. For tax year 2001, IRS estimated that it would eventually recover about $55 billion, for a net tax gap of $257 billion to $298 billion. IRS has estimated the tax gap on multiple occasions, beginning in 1979. IRS’s earlier tax gap estimates relied on the Taxpayer Compliance Measurement Program (TCMP), through which IRS periodically performed line-by-line examinations of randomly selected tax returns. TCMP started with tax year 1963 and examined individual returns most frequently (generally every 3 years) through tax year 1988. IRS contacted all taxpayers selected for TCMP studies.
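The gross-to-net relationship described above is simple arithmetic: subtract the taxes IRS expects to eventually recover from the gross gap. A minimal sketch of that calculation, using the tax year 2001 figures cited here (amounts in billions of dollars; the function and variable names are illustrative, not IRS terminology):

```python
# Illustrative arithmetic for IRS's tax year 2001 tax gap estimates
# (dollar amounts in billions, taken from the figures cited above).

def net_tax_gap(gross_gap, expected_recoveries):
    """Net tax gap = gross tax gap minus the taxes IRS expects to
    recover through enforcement and other late payments."""
    return gross_gap - expected_recoveries

RECOVERIES = 55  # IRS's estimate of eventual recoveries for 2001

low = net_tax_gap(312, RECOVERIES)   # lower bound of the gross gap
high = net_tax_gap(353, RECOVERIES)  # upper bound of the gross gap

print(f"Net tax gap: ${low} billion to ${high} billion")
# Matches the $257 billion to $298 billion range cited above.
```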
IRS did not implement any TCMP studies after 1988 because of concerns about costs and burdens on taxpayers. Recognizing the need for current compliance data, in 2002 IRS implemented a new compliance study called the National Research Program (NRP) to produce such data while minimizing taxpayer burden. Under NRP, a program that we have encouraged, IRS recently completed its initial review of about 46,000 randomly selected individual tax returns from tax year 2001. Unlike with TCMP studies, IRS did not need to contact taxpayers for every tax return selected under NRP; handled some taxpayer contacts through correspondence, as opposed to face-to-face examinations; and during face-to-face examinations, generally only asked taxpayers to explain information that it was otherwise unable to verify through IRS and third-party databases. IRS has a strategic planning process through which it supports decisions about strategic goals, program development, and resource allocation. Under the Government Performance and Results Act of 1993 (GPRA), agencies are to develop strategic plans as the foundation for results- oriented management. GPRA requires that agency strategic plans identify long-term goals, outline strategies to achieve the goals, and describe how program evaluations were used to establish or revise the goals. GPRA requires federal agencies to establish measures to determine the results of their activities. The nation is facing a range of important new forces that are already working to reshape American society, our place in the world, and the role of the federal government. Our capacity to address these and other emerging needs and challenges will be predicated on when and how we deal with our fiscal challenges—the long-term fiscal pressures we face are daunting and unprecedented in the nation’s history. 
As this committee is well aware, the size and trend of our projected longer-term deficits mean that the nation cannot ignore the resulting fiscal pressures—it is not a matter of whether the nation deals with the fiscal gap, but when and how. Unless we take effective and timely action, our near-term and longer-term deficits present the prospect of chronic and seemingly perpetual budget shortfalls and constraints becoming a fact of life for years to come. Not only would continuing deficits eat away at the government's capacity in everything it does, but they would also erode our ability to address the wide range of emerging needs and demands competing for a share of a shrinking budget pie. Our long-term simulations illustrate the magnitude of the fiscal challenges we will face in the future. Figures 1 and 2 present these simulations under two different sets of assumptions. In the first, we begin with CBO's January 2005 baseline, constructed according to the statutory requirements for that baseline. Consistent with these requirements, discretionary spending is assumed to grow with inflation for the first 10 years and tax cuts scheduled to expire are assumed to expire. After 2015, discretionary spending is assumed to grow with the economy, and revenue is held constant as a share of gross domestic product (GDP) at the 2015 level. In the second figure, two assumptions are changed: (1) discretionary spending is assumed to grow with the economy after 2005 rather than merely with inflation and (2) all temporary tax cuts are extended. For both simulations, Social Security and Medicare spending is based on the 2005 Trustees' intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted. As both these simulations illustrate, absent policy changes on the spending or revenue side of the budget, the growth in spending on federal retirement and health entitlements will encumber an escalating share of the government's resources.
Indeed, when we assume that recent tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenues may be adequate to pay little more than interest on the federal debt. Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. Although revenues will be part of the debate about our fiscal future, closing the long-term fiscal gap through revenues alone (assuming no further borrowing and no changes to Social Security, Medicare, Medicaid, and the other drivers of that gap) would require at least a doubling of taxes, which seems highly implausible. Such significant tax increases would also likely have an adverse effect on economic growth and disposable income available to Americans. Accordingly, substantive reform of Social Security and our major health programs remains critical to recapturing our future fiscal flexibility. Although considerable uncertainty surrounds long-term budget projections, we know two things for certain: the population is aging and the baby boom generation is approaching retirement age. The aging population and rising health care spending will have significant implications not only for the budget but also for the economy as a whole. Figure 3 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2005 Trustees' intermediate estimates and CBO's long-term Medicaid estimates, spending for these entitlement programs combined will grow from today's 8.5 percent of GDP to 15.2 percent in 2030. It is clear that, taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on future generations. Early action to change these programs would yield the highest fiscal dividends for the federal budget and would provide a longer period for prospective beneficiaries to make adjustments in their own planning.
Waiting to build economic resources and reform future claims entails risks. First, we lose an important window during which today's relatively large workforce can increase saving and enhance productivity, two elements critical to growing the future economy. We also lose the opportunity to reduce the burden of interest in the federal budget, thereby creating a legacy of higher debt as well as elderly entitlement spending for the relatively smaller workforce of the future. Most critically, we risk losing the opportunity to phase in changes gradually so that all can make the adjustments needed in private and public plans to accommodate this historic shift. Unfortunately, the long-range challenge has become more difficult, and the window of opportunity to address the entitlement challenge is narrowing. Confronting the nation’s fiscal challenge will require nothing less than a fundamental review, reexamination, and reprioritization of all major spending and tax policies and programs that may take a generation or more to resolve. Traditional incremental approaches to budgeting will need to give way to more fundamental and periodic reexaminations of the base of government. Many, if not most, current federal programs and policies were designed decades ago to respond to trends and challenges that existed at the time of their creation. If government is to respond effectively to 21st century trends, it cannot accept what it does, how it does it, who does it, and how it gets financed as “given.” Not only do outmoded commitments, operations, choices of tools, management structures, and tax programs and policies constitute a burden on future generations, but they also erode the government's capacity to align itself with the needs and demands of the 21st century. Reexamining the base of government will be a challenging task, and we at GAO believe we have an obligation to assist and support Congress in this endeavor. 
To that end, we recently issued a report that provides examples of the kinds of difficult choices the nation faces with regard to discretionary spending; mandatory spending, including entitlements; and tax policies and compliance activities. Regarding tax policy, a debate is under way about the future of our tax system that is partly about whether the goals for the nation's tax system can be best achieved using the current structure or a fundamentally reformed tax structure. The debate is also motivated by increasing globalization, the growing complexity of our tax system, and the growing use of tax preferences whose aggregate revenue loss has exceeded all discretionary spending in 5 of the past 10 years. Although outside the scope of this hearing, today's pressing tax challenges raise important questions. For example: Given our current tax system, what tax rate structure is more likely to raise sufficient revenue to fund government and satisfy the public's perception of fairness? Which tax preferences need to be reconsidered because they fail to achieve the objectives intended by the Congress, their costs outweigh their benefits, they duplicate other programs, or other more cost-effective means exist for achieving their objectives? Should the basis of the existing system be changed from an income to a consumption base? Would such a change help respond to challenges posed by demographic, economic, and technological changes? How would such a change affect savings and work incentives? How would reforms address such issues as the impact on state and local tax systems and the distribution of burden across the nation's taxpayers? Regarding compliance with our tax laws, the success of our tax system hinges greatly on the public's perception of its fairness and understandability. Compliance is influenced not only by the effectiveness of IRS's enforcement efforts but also by Americans' attitudes about the tax system and their government.
A recent survey indicated that about 12 percent of respondents said it is acceptable to cheat on their taxes. Furthermore, the complexity of, and frequent revisions to, the tax system make it more difficult and costly for taxpayers who want to comply to do so and for IRS to explain and enforce tax laws. Complexity also creates a fertile ground for those intentionally seeking to evade taxes and often trips others into unintentional noncompliance. The lack of transparency also fuels disrespect for the tax system and the government. Thus, a crucial challenge for reexamination will be to determine how we can best strengthen enforcement of existing laws to give taxpayers confidence that their friends, neighbors, and business competitors are paying their fair share. We have long been concerned about tax noncompliance and IRS efforts to address it. Collection of unpaid taxes was included in our first high-risk series report in 1990, with a focus on the backlog of uncollected debts owed by taxpayers. In 1995, we added Filing Fraud as a separate high-risk area, narrowing the focus of that high-risk area in 2001 to Earned Income Credit Noncompliance because of the particularly high incidence of fraud and other forms of noncompliance in that program. We expanded our concern about the Collection of Unpaid Taxes in our 2001 high-risk report to include not only unpaid taxes (including tax evasion and unintentional noncompliance) known to IRS, but also the broader enforcement issue of unpaid taxes that IRS has not detected. In our high-risk update that we issued in January, we consolidated these areas into a single high-risk area—Enforcement of the Tax Laws—because we believe the focus of concern on the enforcement of tax laws is not confined to any one segment of the taxpaying population or any single tax provision. Tax law enforcement is a high-risk area in part because past declines in IRS’s enforcement activities threatened to erode taxpayer compliance.
In recent years, the resources IRS has been able to dedicate to enforcing the tax laws have declined. For example, the number of revenue agents (those who examine complex returns), revenue officers (those who perform field collection work), and special agents (those who perform criminal investigations) decreased by over 21 percent from 1998 through 2003. However, IRS achieved some staffing gains in 2004 and expects modest gains in 2005. IRS’s proposal for fiscal year 2006, if funded and implemented as planned, would return enforcement staffing in these occupations to their highest levels since 1999. Concurrently, IRS's enforcement workload—measured by the number of taxpayer returns filed—has continually increased. For example, from 1997 through 2003, the number of individual income tax returns filed increased by about 8 percent. Over the same period, returns for high-income individuals grew by about 81 percent. IRS believes that, because of their income levels, these individuals present a particular compliance risk. In light of declines in enforcement staffing and the increasing number of returns filed, nearly every indicator of IRS's coverage of its enforcement workload has declined in recent years. Although in some cases workload coverage has begun to increase, overall IRS's coverage of known workload is considerably lower than it was just a few years ago. Figure 4 shows the trend in examination rates—the proportion of tax returns that IRS examines each year—for field, correspondence, and total examinations since 1995. Field examinations are conducted face to face; correspondence examinations, which involve communication through written notices, are typically less comprehensive and complex. IRS experienced steep declines in examination rates from 1995 to 1999, but the examination rate has slowly increased since 2000.
However, as the figure shows, the increase in total examination rates of individual filers has been driven mostly by correspondence examinations, while more complex field examinations continue to decline. On the collection front, IRS’s use of enforcement sanctions, such as liens, levies, and seizures, dropped precipitously during the mid- and late 1990s. In fiscal year 2000, IRS’s use of these three sanctions was at 38 percent, 7 percent, and 1 percent, respectively, of fiscal year 1996 levels. However, beginning in fiscal year 2001, IRS’s use of liens and levies began to increase. By fiscal year 2004, IRS’s use of liens, levies, and seizures reached 71 percent, 65 percent, and 4 percent of 1996 levels, respectively. Further, IRS's workload has grown ever more complex as the tax code has. Administering and explaining each new provision absorbs resources that otherwise might be used to enforce the tax laws. Concurrently, other areas of particularly serious noncompliance have gained the attention of IRS and Congress, such as abusive tax shelters and schemes employed by businesses and wealthy individuals that often involve complex transactions that may span national boundaries. Given the broad declines in IRS's enforcement workforce, IRS's decreased ability to follow up on suspected noncompliance, and the emergence of sophisticated evasion concerns, IRS is challenged in attempting to ensure that taxpayers fulfill their obligations. IRS is working to further improve its enforcement efforts. In addition to recent favorable trends in enforcement staffing, correspondence examinations, and the use of some enforcement sanctions, IRS has recently made progress with respect to abusive tax shelters through a number of initiatives and recent settlement offers that have resulted in billions of dollars in collected taxes, interest, and penalties.
In addition, IRS is developing a centralized cost accounting system, in part to obtain better cost and benefit information on compliance activities, and is modernizing the technology that underpins many core business processes. It has also redesigned some compliance and collections processes and plans additional redesigns as technology improves. Finally, the recently completed NRP study of individual taxpayers not only gives us a benchmark of the status of taxpayers’ compliance but also gives IRS a better basis to target its enforcement efforts. However, IRS’s preliminary compliance estimate based on NRP indicates that compliance has not improved and may be worse than IRS originally estimated. As such, sustained progress toward improving compliance is needed. Reducing the tax gap would be a step toward improving our fiscal sustainability while simultaneously enhancing fairness for those citizens who meet their tax obligations. That said, reducing the tax gap is a challenging task, and closing the entire tax gap is not practical. Reducing the tax gap will not likely be achieved through a single solution, but will likely involve multiple strategies that include reducing tax code complexity, providing quality services to taxpayers, and enhancing enforcement of the tax laws through the use of tools such as tax withholding and information reporting that increase the transparency of income and deductions to both IRS and taxpayers. Also, as IRS moves forward in continuing to address the tax gap, building and maintaining a base of information on the extent of, and reasons for, noncompliance as well as defining desired changes in the tax gap and measuring results of efforts to address it will be critical. Given its size, even small or moderate reductions in the net tax gap could yield substantial returns. For example, based on IRS’s most recent estimate, each 1 percent reduction in the net tax gap would likely yield more than $2.5 billion annually. 
Thus, a 10 percent to 20 percent reduction of the net tax gap would translate into $25 billion to $50 billion or more in additional revenue annually. Although reducing the tax gap may be an attractive means to improve the nation’s fiscal position, achieving this end will be a challenging task given persistent levels of noncompliance. IRS has made efforts to reduce the tax gap since the early 1980s; yet the tax gap is still large—although without these efforts it could be even larger. Also, IRS is challenged in reducing the tax gap because the tax gap is spread across the five different types of taxes that IRS administers, and a substantial portion of the tax gap is attributed to taxpayers who are not subject to withholding or information reporting requirements. Moreover, as we have reported in the past, closing the entire tax gap may be neither feasible nor desirable, as it could entail more intrusive recordkeeping or reporting than the public is willing to accept or more resources than IRS is able to commit. Although much of the tax gap that IRS currently recovers is through enforcement actions, a sole focus on enforcement will not likely be sufficient to further reduce the net tax gap. Rather, the tax gap must be attacked on multiple fronts and with multiple strategies on a sustained basis. For example, efforts to simplify the tax code and otherwise alter current tax policies may help reduce the tax gap by making it easier for individuals and businesses to understand and voluntarily comply with their tax obligations. For instance, reducing the multiple tax preferences for retirement savings or education assistance might ease taxpayers’ burden in understanding and complying with the rules associated with these options. Also, simplification may reduce opportunities for tax evasion through vehicles such as abusive tax shelters.
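The reduction-yield figures above follow directly from the net tax gap estimate: 1 percent of a $257 billion to $298 billion net gap is roughly $2.6 billion to $3.0 billion, consistent with "more than $2.5 billion." A back-of-the-envelope sketch of that arithmetic (the function name is illustrative only):

```python
# Back-of-the-envelope check of the reduction-yield figures cited above.
# Net tax gap range for tax year 2001, in billions of dollars.
NET_GAP_LOW, NET_GAP_HIGH = 257, 298

def annual_yield(net_gap, reduction_pct):
    """Annual revenue gained by reducing the net tax gap by reduction_pct percent."""
    return net_gap * reduction_pct / 100

# A 1 percent reduction at the low end of the range already exceeds $2.5 billion.
one_pct = annual_yield(NET_GAP_LOW, 1)       # 2.57

# A 10 to 20 percent reduction spans roughly $26 billion to $60 billion.
ten_pct_low = annual_yield(NET_GAP_LOW, 10)  # 25.7
twenty_pct_high = annual_yield(NET_GAP_HIGH, 20)  # 59.6
```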
However, for any given set of tax policies, IRS’s efforts to reduce the tax gap and ensure appropriate levels of compliance will need to be based on a balanced approach of providing service to taxpayers and enforcing the tax laws. Furthermore, providing quality services to taxpayers is an important part of any overall strategy to improve compliance and thereby reduce the tax gap. As we have reported in the past, one method of improving compliance through service is to educate taxpayers about confusing or commonly misunderstood tax requirements. For example, if the forms and instructions taxpayers use to prepare their taxes are not clear, taxpayers may be confused and make unintentional errors. One method to ensure that forms and instructions are sufficiently clear is to test them before use. However, we reported in 2003 that IRS had tested revisions to only five individual forms and instructions from July 1997 through June 2002, although hundreds of forms and instructions had been revised in 2001 alone. Finally, in terms of enforcement, IRS will need to use multiple strategies and techniques to find noncompliant taxpayers and bring them into compliance. However, two tools have been shown to lead to high levels of compliance: withholding tax from payments to taxpayers and having third parties report to both IRS and the taxpayer on income paid. For example, banks and other financial institutions provide information returns (Forms 1099) to account holders and IRS showing the taxpayers’ annual income from some types of investments. Similarly, most wages, salaries, and tip compensation are reported by employers to employees and IRS through Form W-2. Preliminary findings from NRP indicate that more than 98.5 percent of these types of income are accurately reported on individual returns.
In the past, we have identified a few potential areas where additional withholding or information reporting requirements could serve to improve compliance:

Requiring tax withholding and more or better information return reporting on payments made to independent contractors. Past IRS data have shown that independent contractors report 97 percent of the income that appears on information returns, while contractors that do not receive these returns report only 83 percent of their income. We have also identified other options for improving information reporting for independent contractors, including increasing penalties for failing to file required information returns, lowering the $600 threshold for requiring such returns, and requiring businesses to separately report on their tax returns the total amount of payments to independent contractors.

Requiring information return reporting on payments made to corporations. Unlike payments made to sole proprietors, payments made to corporations for services are generally not required to be reported on information returns. IRS and GAO have contended that the lack of such a requirement leads to lower levels of compliance for small corporations. Although Congress has required federal agencies to provide information returns on payments made to contractors since 1997, payments made by others to corporations are generally not covered by information returns.

Requiring more data on information returns dealing with capital gain income. Past IRS studies have indicated that much of the noncompliance associated with capital gains is a result of taxpayers overstating an asset’s “basis,” the amount of money originally paid for the asset. Currently, financial institutions are required to report the sales prices, but not the purchase prices, of stocks and bonds on information returns.
Without information on purchase prices, IRS cannot use efficient and effective computer-matching programs to check for compliance and must use much more costly means to examine taxpayer returns in order to verify capital gain income. Although withholding and information returns are highly effective in encouraging compliance, such additional requirements generally impose costs and burdens on the businesses that must implement them. However, continued reexamination of opportunities to expand information reporting and tax withholding could increase the transparency of the tax system. Such reexamination could be especially relevant to improving compliance in areas that are particularly complex or challenging to administer, such as noncash charitable contributions or net income and losses passed through from “flow-through” entities such as S corporations and partnerships to their shareholders and partners. Finally, making progress on closing the tax gap requires that the tools and techniques being used to promote compliance be evaluated to ensure that they are actually effective. IRS evaluates some of its efforts to assess how well they work, perhaps most notably its current effort to test new procedures designed to reduce noncompliance with the Earned Income Tax Credit, but misses other opportunities. For example, the lack of testing for forms and instructions mentioned earlier is one instance where improved evaluation would be worthwhile. We also reported in 2002 that the effectiveness of the Federal Tax Deposit Alert program—a program that since 1972 has been intended to reduce delinquencies in paying employment taxes—could not be evaluated because IRS had no system to track the contacts IRS made with delinquent employers. The availability of current compliance information should enhance IRS’s ability to evaluate the success of its efforts to promote compliance.
Regularly measuring compliance can offer many benefits, including helping IRS identify new or major types of noncompliance, identify changes in tax laws and regulations that may improve compliance, more effectively target examinations of tax returns or other enforcement programs, understand the effectiveness of its programs to promote and enforce compliance, and determine its resource needs and allocations. For example, by analyzing 1979 and 1982 TCMP data, IRS identified significant noncompliance with the number of dependents claimed on tax returns and justified a legislative change to address the noncompliance. As a result, for tax year 1987, taxpayers claimed about 5 million fewer dependents on their returns than would have been expected without the change in law. In addition, tax compliance data are useful outside of IRS for tax policy analysis, revenue estimating, and research. A significant portion of IRS’s new tax gap estimate is based on recent compliance data. IRS used data from NRP to update individual income tax underreporting and the portion of individual employment tax underreporting from self-employed individuals. Completion of NRP is a substantial achievement—as table 1 indicates, underreporting of individual income taxes represented about half of the tax gap for 2001 (the estimate ranges from $150 billion to $187 billion out of a gross tax gap estimate that ranges from $312 billion to $353 billion). Also, $51 billion to $56 billion of the $66 billion to $71 billion in estimated underreported employment tax was due to self-employment tax underreporting. IRS used current, actual data from its Master Files to calculate the underpayment segment of the tax gap. IRS has concerns with the certainty of the overall tax gap estimate in part because some areas of the estimate rely on old data and IRS has no estimates for other areas of the tax gap.
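The "about half" characterization can be checked directly from the ranges cited: at both ends of the estimate, individual income tax underreporting works out to roughly 48 to 53 percent of the gross tax gap. A quick illustrative check (amounts in billions; variable names are ours, not table 1's):

```python
# Checking the "about half" share of individual income tax underreporting
# in the tax year 2001 gross tax gap, using the ranges cited above
# (amounts in billions of dollars).

underreporting_range = (150, 187)  # individual income tax underreporting
gross_gap_range = (312, 353)       # total gross tax gap

# Pair the low ends together and the high ends together.
shares = [u / g for u, g in zip(underreporting_range, gross_gap_range)]
# shares[0] is roughly 0.48 and shares[1] roughly 0.53, i.e., about half
# of the gross tax gap at either end of the estimate.
```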
IRS does not have estimates for corporate income, employment, and excise tax nonfiling or for excise tax underreporting. For these types of noncompliance, IRS maintains that the data are difficult to collect, imprecise, or unavailable. IRS has not recently collected compliance data for the remaining segments of the tax gap. For example, IRS used data from the 1970s and 1980s to estimate underreporting of corporate income taxes and employer-withheld employment taxes. IRS is taking several steps that could improve the tax gap estimate for tax year 2001. IRS plans to further analyze the preliminary results from NRP and expects to publish a revised estimate by the end of 2005. The revised estimate will incorporate new methodologies, including those for estimating overall individual income tax underreporting as well as for the portion attributable to self-employed individuals who operate businesses informally, and for estimating individual income tax nonfiling. In addition, IRS research officials have proposed a compliance measurement study that will allow IRS to update underreporting estimates involving flow-through entities. This study, which IRS intends to begin in fiscal year 2006, would take 2 to 3 years to complete. Because either individual taxpayers or corporations may be recipients of income (or losses) from flow-through entities, this study could affect IRS’s estimates for the underreporting gap for individual and corporate income tax. While these data and methodology updates could improve the tax gap estimates, IRS has no documented plans to periodically collect more or better compliance data over the long term. Other than the proposed study of flow-through entities, IRS does not have plans to collect compliance data for other segments of the tax gap.
Also, IRS has indicated that given its current research priorities, it would not begin another NRP study of individual income tax returns before 2008, if at all, and would not complete such a study until at least 2010. When IRS initially proposed the NRP study, it had planned to study individual income tax underreporting on a 3-year cycle. According to IRS officials, IRS has not committed to regularly collecting compliance data because of the associated costs and burdens. Taxpayers whose returns are examined through compliance studies such as NRP bear costs in terms of time and money. IRS also incurs costs, both direct costs and opportunity costs—revenue that IRS potentially forgoes by using its resources to examine randomly selected returns, which may include returns from compliant taxpayers. Traditional examinations, by contrast, focus on returns that likely contain noncompliance and may more consistently produce additional tax assessments. Although the costs and burdens of compliance measurement are legitimate concerns, as we have reported in the past, we believe compliance studies to be good investments. Without current compliance data, IRS is less able to determine key areas of noncompliance to address and actions to take to maximize the use of its limited resources. The lack of firm plans to continually obtain fresh compliance data is troubling because the frequency of data collection can have a large impact on the quality and utility of compliance data. As we have reported in the past, the longer the time between compliance measurement surveys, the less useful they become given changes in the economy and tax law. In designing the NRP study, IRS balanced the costs, burdens, and compliance risk of studying that area of the tax gap.
Any plans for obtaining and maintaining reasonably current information on compliance levels for all portions of the tax gap would similarly need to take into account costs, burdens, and compliance risks in determining which areas of compliance to measure and the scope and frequency of such measurement. Data on whether taxpayers are unintentionally or intentionally noncompliant with specific tax provisions are critical to IRS for deciding whether its efforts to address specific areas of noncompliance should focus on nonenforcement activities, such as improved forms or publications, or enforcement activities to pursue intentional noncompliance. Recognizing such benefits, the National Taxpayer Advocate has urged IRS to consider performing additional research into causes of noncompliance. We have also reported in the past that the value of rigorous research into the causes of noncompliance seems intuitive. IRS collects data on the reasons for noncompliance for specific tax issues during its examinations of tax returns, including those reviewed for NRP. However, IRS has a number of concerns with the data:

- The database is incomplete, as not all examiners have been sending information on the results, including reasons, of closed examinations to be entered into the database.
- IRS has not tested the adequacy of the controls for data entry or the reliability of the data being collected.
- IRS has found instances where examiners close examinations without assigning a reason for noncompliance or by assigning the same reason to all instances of noncompliance, regardless of the situation.
- IRS has not trained all examiners to deal with the subjectivity of determining reasons to ensure consistent understanding of the reason categories.
- The data are not representative of the population of noncompliant taxpayers because the examined tax returns were not selected randomly.
As IRS continues to collect data on the reasons for noncompliance in the future, it will be important to take these concerns into account. Additionally, as with its efforts to measure compliance, it will be important for IRS to consider the costs and burden of obtaining data on the reasons for noncompliance. Focusing on outcome-oriented goals and establishing measures to assess the actual results, effects, or impact of a program or activity compared to its intended purpose can help agencies improve performance and stakeholders determine whether programs have produced desired results. As such, establishing long-term, quantitative compliance goals offers several benefits for IRS. Perhaps most important, compliance goals coupled with periodic measurements of compliance levels would provide IRS with a better basis for determining to what extent its various service and enforcement efforts contribute to compliance. Additionally, long-term, quantitative goals may help IRS consider new strategies to improve compliance, especially since these strategies could take several years to implement. For example, IRS’s progress toward the goal of having 80 percent of all individual tax returns electronically filed by 2007 has required enhancement of its technology, development of software to support electronic filing, education of taxpayers and practitioners, and other steps that could not be completed in a short time frame. Focusing on intended results can also promote strategic and disciplined management decisions that are more likely to be effective because managers who use fact-based performance analysis are better able to target areas most in need of improvement and select appropriate interventions. Likewise, agency accountability can be enhanced when both agency management and external stakeholders such as Congress can readily measure an agency’s progress toward meeting its goals. 
Finally, setting long-term, quantitative goals would be consistent with results-oriented management principles that are associated with high-performing organizations and incorporated into the statutory management framework Congress has adopted through GPRA. IRS’s strategies for improving compliance generally lack a clear focus on long-term, quantitative goals and results measurement. Although IRS has established broad qualitative goals and strategies for improving taxpayer service and enhancing enforcement of the tax laws, it has not specified by how much it hopes these strategies will improve compliance. IRS has also identified measures, such as compliance rates for tax reporting, filing, and payment as well as the percentage of Americans who think it is acceptable to cheat on their taxes, which are intended to gauge the progress of its strategies toward its broad goals. However, IRS does not always collect recent data to update these measures and has not established quantitative goals against which to compare the measures. In response to a President's Management Agenda initiative to better integrate budget and performance information, IRS officials said that they are considering various long-term goals for the agency. These goals are to be released by May 2005. The officials have not indicated how many goals will be related to improving taxpayer compliance or whether they will be quantitative and results-oriented. Not unlike other agencies, IRS faces challenges in implementing a results-oriented management approach, such as identifying and collecting the necessary data to make informed judgments about what goals to set and to subsequently measure its progress in reaching such goals. However, having completed the NRP review of income underreporting by individuals, IRS now has an improved foundation for setting a goal or goals for improving taxpayers’ compliance. Nevertheless, measuring progress toward any goals that may be set could be challenging.
For example, IRS researchers have found it difficult to determine the extent to which its enforcement actions deter noncompliance or its services improve compliance among taxpayers who want to comply. Measuring these effects is complicated in part because many factors outside of IRS’s actions can affect compliance. However, as the National Taxpayer Advocate’s 2004 annual report to Congress pointed out, current and existing data on noncompliance may help IRS better understand and address this challenge. Furthermore, even if IRS is unable to show that its actions directly affected compliance rates, periodic measurements of compliance levels can indicate the extent to which compliance is improving or declining and provide a basis for reexamining existing programs and triggering corrective actions if necessary. The nation is currently on an imprudent and unsustainable fiscal path that threatens our future. If we act now to address the looming fiscal challenges facing the nation, the lives of our children and grandchildren will be measurably better than if we wait. Nevertheless, the decisions we must make will not be easy. They involve difficult choices about the role of government in our lives and our economy. Acting now will impose sacrifices, but today we have more options with less severe consequences than if we wait. Reducing the tax gap is one option that would help. While our long-term fiscal imbalance is too large to be eliminated by one strategy, reducing the tax gap can ease the difficult decisions that are needed. But, regardless of the contribution that a reduced tax gap can make to easing our long-term challenges, we need to make concerted efforts to address the tax gap because it is fundamentally unfair and threatens Americans’ trust in their government.
The tax gap is both a measure of the burden and frustration of taxpayers who want to comply but are tripped up by tax code complexity and of willful tax cheating by a minority who want the benefits of government services without paying their fair share. Chairman Grassley, Senator Baucus, and Members of the Committee, this concludes my testimony. At the request of the committee, in the near future, we will issue a report that addresses the tax gap in greater detail and, as appropriate, may make recommendations related to the topics covered in my statement. We look forward to continuing to support the committee’s oversight of the tax gap and related issues. I would be happy to answer any questions you may have at this time. For further information on this testimony, please contact Michael Brostek on (202) 512-9110 or [email protected]. Individuals making key contributions to this testimony include Jeff Arkin, Elizabeth Fan, Shannon Groff, George Guttman, Michael Rose, and Tom Short. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Internal Revenue Service's (IRS) recent estimate of the difference between what taxpayers timely and accurately paid in taxes and what they owed ranged from $312 billion to $353 billion for tax year 2001. IRS estimates it will eventually recover some of this tax gap, resulting in a net tax gap from $257 billion to $298 billion. The tax gap arises when taxpayers fail to comply with the tax laws by underreporting tax liabilities on tax returns; underpaying taxes due from filed returns; or "nonfiling," which refers to the failure to file a required tax return altogether or in a timely manner.
The Chairman and Ranking Minority Member of the Senate Committee on Finance asked GAO to review a number of issues related to the tax gap. This testimony will address GAO's longstanding concerns regarding tax compliance; IRS's efforts to ensure compliance; and the significance of reducing the tax gap, including some steps that may assist with this challenging task. For context, this testimony will also address GAO's most recent simulations of the long-term fiscal outlook and the need for a fundamental reexamination of major spending and tax policies and priorities. Our nation's fiscal policy is on an unsustainable course. As long-term budget simulations by GAO, the Congressional Budget Office, and others show, over the long term we face a large and growing structural deficit due primarily to known demographic trends and rising health care costs. All simulations indicate that the long-term fiscal challenge is too big to be solved by economic growth alone or by making modest changes to existing spending and tax policies. Rather, a fundamental reexamination of major policies and priorities will be important to recapture our fiscal flexibility. Especially relevant to this committee will be deciding whether and how to change current tax policies and how to ensure that tax compliance is as high as practically possible. Tax law enforcement is one factor affecting compliance that has caused concern in the past, due in part to declines in IRS enforcement occupations, examinations, and other enforcement results. The recent turnaround in staffing and some enforcement results is good news, but IRS's recent compliance estimate indicates that compliance levels have not improved and may be worse than it originally estimated. Thus, sustained progress in improving compliance is needed. Reducing the tax gap would help improve fiscal sustainability, but will be challenging given persistent noncompliance. This task will not likely be achieved through a single solution. 
Rather, the tax gap must be attacked on multiple fronts and with multiple strategies over a sustained period of time, including reducing tax code complexity, providing quality services to taxpayers, enhancing enforcement of tax laws, and evaluating the success of IRS's efforts to promote compliance. Also important is obtaining current information on the extent of, and reasons for, noncompliance. IRS's 2001 tax gap estimate is based in part on recently collected compliance data for individual income tax underreporting. However, IRS does not have firm plans to obtain compliance data for other areas of the tax gap or again collect data on individual income tax underreporting. Finally, IRS lacks quantitative, long-term goals for improving taxpayer compliance, which would be consistent with results-oriented management.
In November 2013, FAA released the Roadmap that describes its three-phased approach—Accommodation, Integration, and Evolution—to facilitate incremental steps toward its goal of seamlessly integrating UAS flight in the national airspace. Under this approach, FAA’s initial focus will be on safely allowing for the expanded operation of UASs by selectively accommodating some UAS use. In the integration phase, FAA plans to shift its emphasis toward integrating more UAS use once technology can support safe operations. Finally, in the evolution phase, FAA plans to focus on revising its regulations, policy, and standards based on the evolving needs of the airspace. Currently, FAA authorizes all UAS operations in the NAS—military, public (academic institutions and federal, state, and local governments, including law enforcement organizations), and civil (commercial). Federal, state, and local government agencies must apply for Certificates of Waiver or Authorization (COA), while civil operators must apply for special airworthiness certificates in the experimental category. Civil operators may also apply for an exemption under section 333 of the 2012 Act, Special Rules for Certain Unmanned Aircraft Systems. This section requires the Secretary of Transportation to determine whether certain UAS may operate safely in the NAS prior to the completion of UAS rulemakings. It also gives the Secretary the authority to determine whether to allow certain UAS aircraft to operate in the NAS without an airworthiness certification. As we previously reported, research and development continue in areas related to a UAS’s ability to detect and avoid other aircraft, as well as in command and control technologies and related performance and safety standards that would support greater UAS use in the national airspace. Some of this research is being conducted by DOD and NASA. Until this research matures, most UAS operations will remain within visual line of sight of the UAS operator.
Foreign countries are experiencing an increase in UAS use, and some have begun to allow commercial entities to fly UASs under limited circumstances. According to industry stakeholders, easier access to these countries’ airspace has drawn the attention of some U.S. companies that wish to test their UASs without needing to adhere to FAA’s administrative requirements for flying UASs at one of the domestically located test sites, or obtaining an FAA COA. As we most recently reported in February 2014, the 2012 Act contained provisions designed to accelerate the integration of UAS into the NAS. These provisions outlined 17 date-specific requirements and set deadlines for FAA to achieve safe UAS integration by September 2015 (see app. 1). While FAA has completed several of these requirements, some key ones, including the publication of the final small UAS rule, remain incomplete. As of December 2014, FAA had completed nine of the requirements, was in the process of addressing four, and had not yet made progress on four others. Some stakeholders told us in interviews that FAA’s accomplishments to date are significant and were needed, but these stakeholders noted that the most important provisions of the 2012 Act have been significantly delayed or are unlikely to be achieved by the mandated dates. Both FAA and UAS industry stakeholders have emphasized the importance of finalizing UAS regulations as unauthorized UAS operations in the national airspace continue to increase and present a safety risk to commercial and general aviation activities. Before publication of a final rule governing small UAS, FAA must first issue a Notice of Proposed Rulemaking (NPRM). As we previously reported, the small UAS rule is expected to establish operating and performance standards for UASs weighing less than 55 pounds, operating under 400 feet, and flying within line of sight. FAA officials told us in November 2014 that FAA is hoping to issue the NPRM by the end of 2014 or early 2015.
According to FAA, its goal is to issue the final rule 16 months after the NPRM. If this goal is met, the final rule would be issued in late 2016 or early 2017, about two years beyond the requirement of the congressional mandate. However, during the course of our ongoing work, FAA told us that it is expecting to receive tens of thousands of comments on the NPRM. The time needed to respond to such a large number of comments could further extend the time to issue a final rule. FAA officials told us that the agency has taken a number of steps to develop a framework to efficiently process the comments it expects to receive. Specifically, they said that FAA has a team of employees assigned to lead the effort, with contractor support to track and categorize the comments as soon as they are received. According to FAA officials, the challenge of addressing comments could be somewhat mitigated if industry groups consolidated comments, thus reducing the total number of comments that FAA must address while preserving content. During our ongoing work, one industry stakeholder has expressed concern that the small UAS rule may not resolve issues that are important for some commercial operations. This stakeholder expects the proposed rule to authorize operations of small UASs only within visual line of sight of the remote operator and to require the remote operator to have continuous command and control throughout the flight. According to this stakeholder, requiring UAS operators to fly only within their view would prohibit many commercial operations, including large-scale crop monitoring and delivery applications. Furthermore, the stakeholder formally requested that FAA establish a new small UAS Aviation Rulemaking Committee (ARC) with the primary objective of proposing safety regulations and standards for autonomous UAS operations and operations beyond visual line of sight.
According to FAA, the existing UAS ARC recently formed a workgroup to study operations beyond visual line of sight in the national airspace and to specifically look at the near- and long-term issues for this technology. In November 2013, FAA completed the required 5-year Roadmap, as well as the Comprehensive Plan for the introduction of civil UAS into the NAS. The Roadmap was to be updated annually, and the second edition of the Roadmap was scheduled to be published in November 2014. Although FAA has met the congressional mandate in the 2012 Act to issue a Comprehensive Plan and Roadmap to safely accelerate integration of civil UAS into the NAS, that plan does not contain details on how it is to be implemented, and it is therefore uncertain how UASs will be safely integrated and what resources this integration will require. The UAS ARC emphasized the need for FAA to develop an implementation plan that would identify the means, necessary resources, and schedule to safely and expeditiously integrate civil UAS into the NAS. According to the UAS ARC, the activities needed to safely integrate UAS include:

- identifying gaps in current UAS technologies, regulations, standards, policies, or procedures;
- developing new technologies, regulations, standards, policies, and procedures;
- identifying early enabling activities to advance routine UAS operations in the NAS; and
- developing guidance material, training, and certification of aircraft, enabling technologies, and airmen (pilots).

FAA has met two requirements in the 2012 Act related to the test sites by setting them up and making a project operational at one location. In our 2014 testimony, we reported that in December 2013, 16 months past the deadline, FAA selected six UAS test ranges. Each of these test sites became operational, during our ongoing work, between April and August 2014, operating under an Other Transaction Agreement (OTA) with FAA.
These test sites are affiliated with public entities, such as a university, and were chosen, according to FAA during our ongoing work, based on a number of factors including geography, climate, airspace use, and a proposed research portfolio that was part of the application. Each test site operator manages the test site in a way that will give access to other parties interested in using the site. According to FAA, its role is to ensure each operator sets up a safe testing environment and to provide oversight that guarantees each site operates under strict safety standards. FAA views the test sites as a location for industry to safely access the airspace. FAA told us during our ongoing work that it expects data obtained from the users of the test ranges to contribute to the continued development of standards for the safe and routine integration of UAS and to the research and development supporting integration. (To fly under a COA, a commercial entity leases its UAS to the public entity for operation.) According to FAA, it cannot direct the test sites to address specific research and development issues, nor specify what data to provide FAA, other than data required by the COA. FAA officials told us that some laws may prevent the agency from directing specific test site activities without providing compensation. As a result, according to some of the test site operators we spoke to as part of our ongoing work, there is uncertainty about what research and development should be conducted to support the integration process. However, FAA states it does provide support through weekly conference calls and direct access for test sites to FAA’s UAS office. This level of support requires time and resources from FAA, but the staff believes test sites are a benefit to the integration process and worth this investment.
In order to maximize the value of the six test ranges, FAA is working with MITRE Corporation (MITRE), DOD, and the test sites to define what safety, reliability, and performance data are needed and to develop a framework, including procedures, for obtaining and analyzing the data. However, FAA has not yet established a time frame for developing this framework. During our ongoing work, test site operators have told us that there need to be incentives to encourage greater UAS operations at the test sites. FAA is, however, working on providing additional flexibility to the test sites to encourage greater use by industry. Specifically, FAA is willing to train designated airworthiness representatives for each test site. These individuals could then approve UASs for a special airworthiness certificate in the experimental category for operation at the specific test site. Test site operators told us that industry has been reluctant to operate at the test sites because under the current COA process, a UAS operator has to lease its UAS to the test site, thus potentially exposing proprietary technology. With a special airworthiness certificate in the experimental category, the UAS operator would not have to lease its UAS to the test site, thereby protecting any proprietary technology. According to FAA and some test site operators, another flexibility they are working on is a broad area COA that would allow easier access to the test site’s airspace for research and development. Such a COA would allow the test sites to conduct the airworthiness certification, typically performed by FAA, and then allow access to the test site’s airspace. FAA has started to use the authority granted under section 333 of the 2012 Act to allow small UASs access to the national airspace for commercial purposes, after exempting them from obtaining an airworthiness certification.
While FAA continues to develop a regulatory framework for integrating small UASs into the NAS, these exemptions can help bridge the gap between the current state and full integration. According to FAA, this framework could provide UAS operators that wish to pursue safe and legal entry into the NAS a competitive advantage in the UAS marketplace, thus discouraging illegal operations and improving safety. During our ongoing work, FAA has granted seven section 333 exemptions for the filmmaking industry as of December 4, 2014. FAA officials told us that there were more than 140 applications waiting to be reviewed for other industries, for uses such as precision agriculture and electric power line monitoring, and more continue to arrive. (See figure 1 for examples of commercial UAS operations.) While these exemptions do allow access to the NAS, FAA must review and approve each application, and this process takes time, which can affect how quickly the NAS is accessible to any given commercial applicant. According to FAA, the section 333 review process is labor intensive for its headquarters staff because most certifications typically occur in FAA field offices; however, since exemptions under section 333 are exceptions to existing regulations, this type of review typically occurs at headquarters. FAA officials stated that to help mitigate these issues, the agency is grouping and reviewing similar types of applications together and working to streamline the review process. While FAA is making efforts to improve and accelerate progress toward UAS integration, additional challenges remain, including in the areas of authority, resources, and potential leadership changes. As we reported in February 2014, the establishment of the UAS Integration Office was a positive development because FAA assigned an Executive Manager and combined UAS-related personnel and activities from the agency’s Aviation Safety Organization and Air Traffic Organization.
However, some industry stakeholders we have interviewed for our ongoing work have expressed concerns about the adequacy of authority and resources that are available to the office. A UAS rulemaking working group, comprised of both government and industry officials, recently recommended that the UAS Integration Office be placed at a higher level within FAA in order to have the necessary authority and access to other FAA lines of business and offices. In addition, according to FAA officials, the Executive Manager’s position may soon be vacant. Our previous work has found that complex organizational transformations involving technology, systems, and retraining key personnel—such as NextGen, another major FAA initiative—require substantial leadership commitment over a sustained period. We also found that leaders must be empowered to make critical decisions and held accountable for results. Several federal agencies and private sector stakeholders have research and development efforts under way to develop technologies that are designed to allow safe and routine UAS operations. As we have previously reported, agency officials and industry experts told us that these research and development efforts cannot be completed and validated without safety, reliability, and performance standards, which have not yet been developed because of data limitations. On the federal side, the primary agencies involved with UAS integration are those also working on research and development, namely, FAA, NASA, and DOD. FAA uses multiple mechanisms—such as cooperative research and development agreements (CRDA), federally funded research and development centers (FFRDC), and OTAs (discussed earlier in this statement)—to support its research and development efforts. In support of UAS integration, FAA has signed a number of CRDAs with academic and corporate partners.
For example, FAA has CRDAs with CNN and BNSF Railway to test industry-specific applications for news coverage and railroad inspection and maintenance, respectively. Other CRDAs have been signed with groups to provide operational and technical assessments, modeling, demonstrations, and simulations. Another mechanism used by FAA to generate research and development for UAS integration is FFRDCs. For example, MITRE Corporation’s Center for Advanced Aviation System Development is an FFRDC supporting FAA and the UAS integration process. Specifically, MITRE has ongoing research and development supporting air traffic management for UAS detection and avoidance systems, as well as other technologies. FAA has cited many accomplishments in research and development in the past fiscal year, as we were conducting our ongoing work. According to FAA, it has made progress in areas related to detect and avoid technologies, supporting ongoing work by RTCA Special Committee 228. Other areas of focus and progress by FAA include command and control, as well as operations and approval. According to FAA, progress for command and control was marked by identifying challenges for UAS operations using ground-to-ground communications. FAA also indicated, during our ongoing work, that it conducted simulations of the effects of UAS operations on air traffic management. Furthermore, in support of research and development efforts in the future, FAA solicited bids for the development of a Center of Excellence. The Center of Excellence is expected to support academic UAS research and development in many areas, including detect and avoid, and command and control technologies. FAA expects to announce the winner during fiscal year 2015. We have previously reported that NASA and DOD have extensive research and development efforts supporting integration into the NAS. NASA has a $150-million project focused on UAS integration into the NAS.
NASA officials stated that the current goal of this program is to conduct research that reduces technical barriers associated with UAS integration into the NAS, including conducting simulations and flight testing to test communications requirements and aircraft separation, among other issues. DOD has research and development efforts primarily focused on airspace operations related to detect and avoid systems. However, DOD also contributes to research and development focused on certification, training, and operation of UAS. We reported in 2012 that outside the federal government, several academic institutions and private sector companies are conducting research in support of advancing UAS integration. Research by both groups focuses on various areas such as detect and avoid technologies, sensors, and UAS materials. For example, several private sector companies have developed technologies for visual sensing and radar sensing. Academic institutions have conducted extensive research into the use of various technologies to help the maneuverability of UASs. A number of countries allow commercial UAS operations under some restrictions. A 2014 study, conducted by MITRE for FAA, revealed that Japan, Australia, the United Kingdom, and Canada have progressed further than the United States with regulations supporting integration. In fact, Japan, the United Kingdom, and Canada have regulations in place allowing some small UAS operations for commercial purposes. According to this study, these countries’ progress in allowing commercial access to the airspace may be attributed to differences in the complexity of their aviation environments. Our preliminary observations indicate that Japan, Australia, the United Kingdom, and Canada also allow more commercial UAS operations than the United States. According to the MITRE study, the types of commercial operations allowed vary by country.
For example, as of December 2014, Australia had issued over 180 UAS operating certificates to businesses engaged in aerial surveying, photography, and other lines of business. Furthermore, the agriculture industry in Japan has used UAS to apply fertilizer and pesticide for over 10 years. Several European countries have granted operating licenses to more than 1,000 operators to use UASs for safety inspections of infrastructure, such as rail tracks, or to support the agriculture industry. While UAS commercial operations can occur in other countries, there are restrictions controlling their use. For example, the MITRE study showed that several of the countries it examined require some type of certification and approval to occur before operations. Also, restrictions may require operations to remain within line of sight and below a certain altitude. In Australia, according to the MITRE study, commercial operations can occur only with UASs weighing less than 4.4 pounds. However, the rules governing UASs are not consistent worldwide, and while some countries, such as Canada, are easing restrictions on UAS operations, other countries, such as India, are increasing UAS restrictions. For our ongoing work, we spoke with representatives of the aviation authority in Canada (Transport Canada) to better understand UAS use and recently issued exemptions. In Canada, regulations governing the use of UAS have been in place since 1996. These regulations require that UAS operators apply for and receive a Special Flight Operations Certificate (SFOC). The SFOC process allows Canadian officials to review and approve UAS operations on a case-by-case basis if the risks are managed to an acceptable level. This is similar to the COA process used in the United States. As of September 2014, over 1,000 SFOCs had been approved for UAS operations in 2014 alone. Canada issued new rules for UAS operations on November 27, 2014. 
Specifically, the new rules create exemptions for commercial use of small UASs weighing 2 kilograms (4.4 pounds) or less or between 2.1 and 25 kilograms (4.6 to 55 pounds). UASs in these categories can commercially operate without an SFOC but must still follow operational restrictions, such as a height restriction and a requirement to operate within line of sight. Transport Canada officials told us this arrangement allows them to use scarce resources to regulate situations of relatively high risk. For example, if a small UAS is being used for photography in a rural area, this use may fall under the new criteria of not needing an SFOC, thus providing relatively easy access for commercial UAS operations. Finally, our ongoing work has found that FAA interacts with a number of international bodies in an effort to harmonize UAS integration across countries. According to FAA officials, the agency’s most significant contact in Europe has been with the Joint Authorities for Rulemaking for Unmanned Systems (JARUS). JARUS is a group of experts from the National Aviation Authorities (NAAs) and the European Aviation Safety Agency. A key aim of JARUS is to develop recommended certification specifications and operational provisions, which countries can use during the approval process of a UAS. In addition, FAA participated in ICAO’s UAS Study Group, an effort to harmonize standards for UAS. ICAO is the international body that, among other things, promotes harmonization in international standards. ICAO plans to release its UAS manual in March 2015, which will contain guidance about UAS integration for member states. Additional international groups that FAA interacts with in support of UAS integration include the Civil Air Navigation Services Organization, European Organization for Civil Aviation Equipment, and North Atlantic Treaty Organization. Chairman LoBiondo, Ranking Member Larsen, and Members of the Subcommittee, this completes my prepared statement. 
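The Canadian weight-based exemption described earlier is essentially a threshold rule. The sketch below illustrates it using only the figures given in this statement; the function name and parameters are invented for illustration, the two exempt weight bands (2 kilograms or less, and 2.1 to 25 kilograms) are collapsed into a single check, and Transport Canada's actual rules contain many more conditions.

```python
def sfoc_required(weight_kg: float, within_line_of_sight: bool) -> bool:
    """Simplified sketch: does a commercial UAS flight need a Special
    Flight Operations Certificate (SFOC) under the Nov. 2014 exemptions?

    Illustrative only; the real exemptions attach different operational
    conditions to each weight band."""
    exempt_by_weight = weight_kg <= 25  # 25 kg (55 lb) upper bound from the text
    # Exemptions apply only if operational restrictions are met, such as
    # keeping the aircraft within visual line of sight.
    if exempt_by_weight and within_line_of_sight:
        return False  # exempt: no SFOC, but restrictions still apply
    return True  # SFOC needed: case-by-case review by Transport Canada
```

Under this sketch, a 10-kilogram photography UAS flown within line of sight would fall under the exemption, while a 30-kilogram UAS would still require an SFOC.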
I would be pleased to respond to any questions that you may have at this time. Appendix I: Selected Requirements and Status for UAS Integration under the FAA Modernization and Reform Act of 2012, as of December 2014 FAA Modernization and Reform Act of 2012 requirement Status of action Enter into agreements with appropriate government agencies to simplify the process for issuing COA or waivers for public UAS. In process – MOA with DOD signed Sept. 2013; MOA with DOJ signed Mar. 2013; MOA with NASA signed Mar. 2013; MOA with DOI signed Jan. 2014; MOA with DOD’s Director of Test & Evaluation signed Mar. 2014; MOA with NOAA still in draft. Expedite the issuance of COA for public safety entities. Establish a program to integrate UAS into the national airspace at six test ranges. This program is to terminate 5 years after date of enactment. Develop an Arctic UAS operation plan and initiate a process to work with relevant federal agencies and national and international communities to designate permanent areas in the Arctic where small unmanned aircraft may operate 24 hours per day for research and commercial purposes. Determine whether certain UAS can fly safely in the national airspace before the completion of the Act’s requirements for a comprehensive plan and rulemaking to safely accelerate the integration of civil UASs into the national airspace or the Act’s requirement for issuance of guidance regarding the operation of public UASs including operating a UAS with a COA or waiver. Develop a comprehensive plan to safely accelerate integration of civil UASs into national airspace. 
Issue guidance regarding operation of civil UAS to expedite COA process; provide a collaborative process with public agencies to allow an incremental expansion of access into the national airspace as technology matures and the necessary safety analysis and data become available and until standards are completed and technology issues are resolved; facilitate capability of public entities to develop and use test ranges; provide guidance on public entities’ responsibility for operation. Make operational at least one project at a test range. Approve and make publicly available a 5-year roadmap for the introduction of civil UAS into national airspace, to be updated annually. Submit to Congress a copy of the comprehensive plan. Publish in the Federal Register the Final Rule on small UAS. In process Publish in the Federal Register a Notice of Proposed Rulemaking to implement recommendations of the comprehensive plan. Publish in the Federal Register an update to the Administration’s policy statement on UAS in Docket No. FAA-2006-25714. Achieve safe integration of civil UAS into the national airspace. In process Publish in the Federal Register a Final Rule to implement the recommendations of the comprehensive plan. Develop and implement operational and certification requirements for public UAS in national airspace. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202)512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Brandon Haller, Assistant Director; Melissa Bodeau, Daniel Hoy, Eric Hudson, and Bonnie Pignatiello Leer. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | UASs are aircraft that do not carry a pilot aboard, but instead operate on pre-programmed routes or are manually controlled by following commands from pilot-operated ground control stations. The FAA Modernization and Reform Act of 2012 put greater emphasis on the need to integrate UASs into the national airspace by requiring that FAA establish requirements governing them. FAA has developed a three-phased approach in its 5-year Roadmap to facilitate incremental steps toward seamless integration. However, in the absence of regulations, unauthorized UAS operations have, in some instances, compromised safety. This testimony discusses 1) progress toward meeting UAS requirements from the 2012 Act, 2) key efforts underway on research and development, and 3) how other countries have progressed in developing UAS use for commercial purposes. This testimony is based on GAO's prior work and an ongoing study examining issues related to UAS integration into the national airspace system for civil and public UAS operations. The Federal Aviation Administration (FAA) has made progress toward implementing the requirements defined in the FAA Modernization and Reform Act of 2012 (the 2012 Act). As of December 2014, FAA had completed 9 of the 17 requirements in the 2012 Act. However, key requirements, such as the final rule for small unmanned aerial systems (UAS) operations, remain incomplete. FAA officials have indicated that they are hoping to issue a Notice of Proposed Rulemaking soon, with a timeline for issuing the final rule in late 2016 or early 2017. FAA has established the test sites as required in the Act, sites that will provide data on safety and operations to support UAS integration. 
However, some test site operators are uncertain about what research should be done at the site, and believe incentives are needed for industry to use the test sites. As of December 4, 2014, FAA granted seven commercial exemptions to the filmmaking industry allowing small UAS operations in the airspace. However, over 140 applications for exemptions were waiting to be reviewed for other commercial operations such as electric power line monitoring and precision agriculture. Previously, GAO reported that several federal agencies and private sector stakeholders have research and development efforts under way focusing on technologies to allow safe and routine UAS operations. During GAO's ongoing work, FAA has cited many accomplishments in research and development in the past fiscal year in areas such as detect and avoid, and command and control. Other federal agencies also have extensive research and development efforts supporting safe UAS integration, such as a National Aeronautics and Space Administration (NASA) project to provide research that will reduce technical barriers associated with UAS integration. Academic and private sector companies have researched multiple areas related to UAS integration. GAO's ongoing work found that other countries have progressed with UAS integration and allow limited commercial use. A 2014 MITRE study found that Japan, Australia, the United Kingdom, and Canada have progressed further than the United States with regulations that support commercial UAS operations. For example, as of December 2014, Australia had issued 180 UAS operating certificates to businesses in industries including aerial surveying and photography. In addition, Canada recently issued new regulations exempting commercial operations of small UASs weighing 25 kilograms (55 lbs.) or less from receiving special approval. |
MDA is a unique agency with extraordinary acquisition flexibility and a challenging mission; however, while that flexibility has helped it to rapidly field systems, it has also hampered oversight and accountability. Over the years, Congress has created a framework of laws that makes major defense acquisition programs accountable for their planned outcomes and cost, gives decision makers a means to conduct oversight, and ensures some level of independent program review. Application of many of these laws is triggered by the phases of the Department of Defense’s acquisition cycle, such as entry into engineering and manufacturing development. Specifically, major defense acquisition programs are generally required by law and policy to do the following: Document program parameters in an acquisition program baseline that, as implemented by DOD, has been approved by the Milestone Decision Authority, a higher-level DOD official, prior to the program’s entry into the engineering and manufacturing development phase. The baseline provides decision makers with the program’s best estimate of total cost for an increment of work, average unit costs for assets to be delivered, the date that an operational capability will be fielded, and the weapon’s intended performance parameters. Once approved, measure the program against the baseline, which is the program’s initial business case, or obtain the approval of a higher-level acquisition executive before making changes. Obtain an independent life-cycle cost estimate prior to beginning engineering and manufacturing development, and/or production and deployment. Independent life-cycle cost estimates provide confidence that a program is executable within estimated cost. Regularly provide detailed program status information to Congress, including information on cost, in Selected Acquisition Reports. Report certain increases in unit cost measured from the original or current program baseline. 
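The last requirement above, reporting certain increases in unit cost against the original or current program baseline, is at bottom a percentage-growth check. The sketch below shows that arithmetic; the 15/30 and 25/50 percent thresholds are illustrative assumptions patterned on Nunn-McCurdy-style breach levels, not figures stated in this testimony.

```python
def growth_pct(current: float, baseline: float) -> float:
    """Percentage growth of a program's unit cost over a baseline estimate."""
    return (current - baseline) / baseline * 100.0

def breach_level(current: float, original_baseline: float,
                 current_baseline: float) -> str:
    """Classify unit cost growth against both the original and current
    program baselines. Threshold values are illustrative assumptions."""
    if (growth_pct(current, current_baseline) >= 25
            or growth_pct(current, original_baseline) >= 50):
        return "critical"
    if (growth_pct(current, current_baseline) >= 15
            or growth_pct(current, original_baseline) >= 30):
        return "significant"
    return "none"
```

The point of measuring against both baselines is that a program whose current baseline has already been rebaselined upward can show modest growth against it while still having grown substantially against the original commitment.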
Covered major defense acquisition programs and subprograms are required to complete initial operation test and evaluation before proceeding beyond low-rate initial production. After testing is completed, the Director for Operational Test and Evaluation assesses whether the results of the test confirm that the system or components are effective and suitable for combat. When MDA was established in 2002, it was granted exceptional flexibility in setting requirements and managing the acquisition, in order that its BMDS be developed as a single program, using a capabilities-based, spiral upgrade approach to quickly deliver a set of integrated defensive capabilities. This decision deferred application of DOD acquisition policy to BMDS until a mature capability is ready to be handed over to a military service for production and operation. Because the BMDS program has not formally entered the DOD acquisition cycle, application of laws that are designed to facilitate oversight and accountability of DOD acquisition programs and that are triggered by phases of this cycle, such as the engineering and manufacturing development phase, has also effectively been deferred. This gives MDA unique latitude to manage the BMDS and it enabled MDA to begin delivering an initial defensive capability in 2004. However, the flexibility also came at the expense of transparency and accountability. Specifically, a BMDS cost, schedule, and performance baseline does not have to be established or approved by anyone outside MDA. Recent laws have created some baseline-related requirements for parts of the BMDS. In addition, while most major defense acquisition programs are required by statute to obtain an independent verification of cost estimates, MDA has only recently developed cost estimates for selected assets and plans to work with the DOD Office of the Director for Cost Assessment and Program Evaluation to develop independent cost estimates for more MDA elements. 
Further, assessments of a system’s suitability and effectiveness in combat have only been accomplished, with limitations, for the currently deployed Aegis BMD weapon system. The limited amount of testing completed, which has been primarily developmental in nature, and the lack of verified, validated, and accredited models and simulations prevent the Director of Operational Test and Evaluation from fully assessing the effectiveness, suitability, and survivability of the BMDS in annual assessments. MDA has agreed to conduct an operational flight test in 2012. As we concluded in a prior report, having less transparency and accountability than is normally present in a major weapon program has had consequences. The lack of baselines for the BMDS along with high levels of uncertainty about requirements and program cost estimates effectively set the missile defense program on a path to an undefined destination at an unknown cost. Across the agency, these practices left programs with limited knowledge and few opportunities for crucial management oversight and decision making concerning the agency’s investment and the warfighter’s continuing needs. At the program level, these practices contributed to quality problems affecting targets acquisitions, which in turn, hampered MDA’s ability to conduct tests as planned. MDA has employed at least three strategies to acquire and deploy missile defense systems, which has exacerbated transparency and accountability challenges. From its inception in 2002 through 2007, MDA developed missile defense capability in 2-year increments, known as blocks, each built on preceding blocks intended to enhance the development and capability of the BMDS. However, there was little visibility into baseline costs and schedules associated with the systems that comprised the blocks or how the blocks addressed particular threats. 
In response to our recommendations, in December 2007, MDA announced a new capabilities-based block structure intended to improve the program’s transparency, accountability, and oversight. Instead of being based on 2-year time periods, the new blocks focused on fielding capabilities that addressed particular threats. Because the new block structure was not aligned to regular time periods, multiple blocks were under way concurrently. This approach included several positive changes, including a DOD commitment to establish total acquisition costs and unit costs for selected block assets, including only those elements or components of elements in a block that would be fielded during the block and abandoning deferrals of work from one block to another. MDA was still transitioning to this new capabilities-based block approach when the Director, MDA terminated it in June 2009. According to MDA, this was done in order to address congressional concerns regarding how to structure MDA’s budget justification materials. This termination marked the third acquisition management strategy for the BMDS in the prior 3 years and effectively reduced transparency and accountability for the agency. The agency then began to manage BMDS as a single integrated program but planned to report on cost, schedule, and performance issues by each element within the program. Changing the acquisition strategy is problematic because each time it is changed, the connection is obscured between the old strategies’ scope and resources and the new strategy’s rearranged scope and resources. This makes it difficult for decision makers to hold MDA accountable for expected outcomes and clouds transparency of the agency’s efforts. We also reported in December 2010 that the adoption of the European Phase Adaptive Approach (PAA) for deploying missile defense assets has limitations in transparency and accountability. 
Specifically, we reported that DOD made progress in acquisition planning for technology development and systems engineering and testing and partial progress in defining requirements and identifying stakeholders but had not yet developed a European PAA acquisition decision schedule or an overall European PAA investment cost. We found that the limited visibility into the costs and schedule for the European PAA and the lack of some key acquisition management processes reflect the oversight challenges with the acquisition of missile defense capabilities that we have previously reported. We concluded that for the European PAA, the flexibility desired by DOD is not incompatible with appropriate visibility into key aspects of acquisition management. Moreover, as DOD proceeds with the European PAA acquisition activities, it is important for Congress and the President to have assurance that the European PAA policy is working as intended and that acquisition activities are cost-effective. We also made recommendations in January 2011 regarding the development of life-cycle cost estimates and an integrated schedule for acquisition, infrastructure, and personnel activities to help identify European PAA implementation risks. DOD partially concurred with the first recommendation and fully concurred with the second. Congress has taken action to address concerns regarding the acquisition management strategy, accountability, and oversight of MDA. For example, in the National Defense Authorization Act for Fiscal Year 2008, Congress required MDA to establish acquisition cost, schedule, and performance baselines for each system element that has entered the equivalent of the engineering and manufacturing development phase of acquisition or is being produced or acquired for operational fielding. 
Most recently, the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 requires the Secretary of Defense to ensure that MDA establishes and maintains an acquisition baseline for each program element of the BMDS. Since our first MDA report in 2004, we have made a series of recommendations to improve transparency and accountability, many of which are designed to adapt the key transparency and accountability features already embedded in the DOD acquisition regulation and apply them to MDA. Some of our key recommendations include: Establishing and reporting to Congress costs and unit costs, including development costs in unit costs and sunk costs in cost estimates; reporting top-level test goals; obtaining independent cost estimates and taking steps to ensure the underlying cost estimates are high quality, reliable, and documented; and reporting variances. Improving transparency by requesting and using procurement funds instead of research, development, testing and evaluation funds to acquire fielded assets. Strengthening the test program by establishing baselines for each new class of target in development, including sufficient schedule and resource margin, including spare test assets and targets, and strengthening the role of the Director, Operational Test and Evaluation in assessing missile defense progress. Implementing a knowledge-based acquisition strategy consistent with DOD acquisition regulations, and ensuring that items are not manufactured for fielding before their performance has been validated through testing. DOD has committed to take action on many of these recommendations. While agreeing with our recommendations to enhance baseline reporting, there are differences in MDA’s perspectives on such issues as sunk costs and changes in unit cost. 
In 2010, MDA made significant progress in implementing some of these recommendations by finalizing a new baseline phase review process in which the agency set detailed baselines for several BMDS elements, or portions of elements, for the first time. Specifically, MDA established resource, schedule, test, operational capacity, technical, and contract baselines for several BMDS components. It reported these to Congress in its June 2010 BMDS Accountability Report. MDA also identified three phases of development where baselines are approved—technology development, product development, and initial production phases—and specified the key knowledge that is needed at each phase. MDA officials stated that they expect that aligning the development efforts with the phases will help to ensure that the appropriate level of knowledge is obtained before the acquisitions move from one phase to the next. In another key step, approval of the product development and initial production baselines will be jointly reviewed by the Director of MDA and the respective service acquisition executive, as a number of missile defense systems are expected to eventually transition to the military services for operation. In addition, in regard to these new phases, the agency established a process for approving baselines. As a result of MDA’s new baseline phase review process, its 2010 BMDS Accountability Report is more comprehensive than its 2009 report. We previously reported concerns that MDA needed to stabilize its test program and that its test and targets program needed to be managed in a way that fully supported high-priority near-term programs. We reported last year that MDA extensively revised the test plan to address these concerns. MDA’s new approach now bases test scenarios on modeling and simulation needs and extends the test baseline to cover the Future Years Defense Program, which allows for better estimation of its target needs, range requirements, and test assets. Also, as part of its new test plan, MDA scheduled dedicated periods of developmental and operational testing, during which the system configuration will remain fixed to allow the warfighter to carry out training, tactics, techniques, and procedures for developmental and operational evaluation. Additionally, the new test plan is expected to provide sufficient time after test events to conduct a full post-test analysis. As we reported last year, these improvements are important because BMDS performance cannot be assessed until models and simulations are accredited and validated, and the test program cannot be executed without meeting its target needs. Our assessment of the schedule baselines determined that we could not compare the asset delivery schedule to the prior year’s baseline because MDA has stopped reporting a comprehensive list of planned asset deliveries. Finally, we found the test baseline to be well documented. However, because it is success oriented, any problems encountered in executing the plan can cause ripple effects throughout remaining test events. The frequent changes that continue to occur undermine the value of the test baseline as an oversight tool. Over the past 10 years, we have conducted extensive research on successful programs and have found that successful defense programs ensure that their acquisitions begin with realistic plans and baselines prior to the start of development. We have previously reported that a cause of poor weapon system outcomes, at the program level, is the consistent lack of disciplined analysis that would provide an understanding of what it would take to field a weapon system before system development begins. We have reported that there is a clear set of prerequisites that must be met by each program’s acquisition strategy to realize successful outcomes. These prerequisites include establishing a clear, knowledge-based, executable business case for the product. An executable business case is one that provides demonstrated evidence that (1) the identified needs are real and necessary and can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources—including technologies, funding, time, and management capacity. Knowledge-based acquisition principles and business cases combined are necessary to establish realistic cost, schedule, and performance baselines. Without documented realistic baselines there is no foundation to accurately measure program progress. Our work has shown that when agencies do not follow a knowledge-based approach to acquisition, high levels of uncertainty about requirements, technologies, and design often exist at the start of development programs. As a result, cost estimates and related funding needs are often understated. Three areas illustrate these risks: testing and targets, the Aegis Ashore program, and the Ground-based Midcourse Defense (GMD) program. Testing and Targets: As in previous years, failures and delays in testing have continued to delay the validation of models and simulations used to assess BMDS performance. Target availability was a significant, though not the only, driver of the test plan delays. Since 2006, we have reported that target availability has delayed and prompted modifications to planned test objectives. This trend continued in 2010. We reported this year that five tests scheduled for fiscal year 2010 were canceled because of a moratorium on air launches of targets. The moratorium was imposed following the failure of an air launched target participating in MDA’s December 2009 Theater High Altitude Area Defense (THAAD) flight test. A failure review board investigation identified the rigging of cables to the missile in the aircraft as the immediate cause of the failure and shortcomings in internal processes at the contractor as the underlying cause. 
Additionally, target shortfalls contributed to delays in flight tests, reduced the number of flight tests, and altered flight test objectives. The extended use of undefinitized contract actions has previously been identified by GAO and others as risky to the government. Because contracting officers normally reimburse contractors for all allowable costs they incur before definitization, contractors bear less risk and have less incentive to control costs during this period. The government also risks incurring unnecessary costs as requirements may change before the contract is definitized. Aegis Ashore: Aegis Ashore is MDA’s future land-based variant of the ship-based Aegis BMD. It is expected to track and intercept ballistic missiles in their midcourse phase of flight using Standard Missile-3 (SM-3) interceptor variants as they become available. However, while Aegis BMD has demonstrated performance at sea, these demonstrations used the currently fielded 3.6.1 version of Aegis BMD with the SM-3 IA interceptor, not the newer variants of the Aegis operating system and interceptor that Aegis Ashore will use. Aegis Ashore is dependent on next-generation versions of Aegis systems—Aegis 4.0.1 and Aegis 5.0—as well as the new SM-3 IB interceptor, all of which are currently under development. Moreover, a series of changes are required to further modify these new variants of Aegis BMD for use on land with Aegis Ashore. These modifications include changes to the Vertical Launching System; suppression or disabling of certain features used at sea; design, integration, and fabrication of a new deckhouse enclosure for the radar; and potential changes to the SM-3 IB interceptor. Changes to those existing Aegis BMD components that will be reused for Aegis Ashore may reduce their maturity in the context of the new Aegis Ashore program, and new features will require testing and assessment to demonstrate their performance. 
MDA plans to make production decisions for the first operational Aegis Ashore before conducting both ground and flight tests. We concluded in this year’s report that it is a highly concurrent effort with significant cost, schedule, and performance risk. Ground-based Midcourse Defense: GMD is a ground-based defense system designed to provide combatant commanders the capability to defend the homeland against a limited attack from intermediate- and intercontinental-range ballistic missiles during the midcourse phase of flight. The GMD consists of a ground-based interceptor—a booster with an Exoatmospheric Kill Vehicle on top—and a fire control system that receives target information from sensors in order to formulate a battle plan. GMD continues to deliver assets before testing has fully determined their capabilities and limitations. The Director, MDA testified on March 31, 2011 that he considers the GMD interceptors essentially prototypes. In the urgency to deploy assets to meet the Presidential directive to field an initial capability by 2004, assets were built and deployed before developmental testing was completed. During the ongoing developmental testing, issues were found that led to a need for retrofits. GMD intercept tests conducted to date have already led to major hardware or software changes to the interceptors—not all of which have been verified through flight testing. In addition, manufacturing of a new variant called the Capability Enhancement II is well underway and more than half of those variants have already been delivered although their capability has not been validated through developmental flight tests. To date, the two flight tests utilizing this variant have both failed to intercept the target. According to MDA, as a result of the most recent failure in December 2010, deliveries of this variant have been halted. 
Again, because of the urgency to deploy some capability, limited work was undertaken on long-term sustainment for the system, which is critical to ensure the system remains effective through 2032. In September 2010, MDA finalized the GMD Stockpile Reliability Program Plan, a key step in developing the knowledge needed to determine the sustainment needs of the GMD system. Chairman Nelson, Ranking Member Sessions, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. For questions about this statement, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include David Best, Assistant Director; LaTonya Miller; Steven Stern; Meredith Allen Kimmett; Letisha Antone; Gwyneth Woolwine; Teague Lyons; Kenneth E. Patton; Robert Swierczek; and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In order to meet its mission, the Missile Defense Agency (MDA) is developing a highly complex system of systems—land-, sea-, and space-based sensors, interceptors, and battle management. Since its initiation in 2002, MDA has been given a significant amount of flexibility in executing the development and fielding of the ballistic missile defense system. GAO was asked to testify on its annual review of MDA and on progress made to improve transparency and accountability. This statement is based on our March 2011 report.
When MDA was established in 2002, it was granted exceptional flexibility in setting requirements and managing the acquisition, in order to meet a Presidential directive to deliver an initial defensive capability in 2004. However, the flexibility also came at the expense of transparency and accountability. For example, unlike certain other Department of Defense (DOD) major defense acquisition programs, a cost, schedule, and performance baseline does not have to be established or approved outside MDA. In addition, while most major defense acquisition programs are required by statute to obtain an independent verification of cost estimates, MDA has only recently developed cost estimates for selected assets and plans to work with DOD’s Office of the Director for Cost Assessment and Program Evaluation to develop independent cost estimates for more MDA elements. Further, assessments of a system’s suitability and effectiveness in combat have only been accomplished, with limitations, for the currently deployed Aegis Ballistic Missile Defense weapon system. Since its inception, MDA has employed at least three different strategies to acquire and deploy missile defense systems. Because these changes involved different structures for reporting cost, schedule, and performance data, they have exacerbated transparency and accountability challenges—each time a strategy changes, the connection between the old and new strategies’ planned scope and resources is obscured. In 2010, MDA made significant progress in addressing previously reported concerns about transparency and accountability. Specifically, MDA: (1) Established resource, schedule, test, operational capacity, technical, and contract baselines for several missile defense systems. It reported these to Congress in its June 2010 BMDS Accountability Report.
(2) Identified three phases of development where baselines are approved—technology development, product development, and initial production phases—and specified the key knowledge that is needed at each phase. (3) Established processes for reviewing baselines and approving product development and initial production jointly with the military services that will ultimately be responsible for those assets. GAO also reported last year that MDA extensively revised the test plan to increase its robustness and ability to inform models and simulations for assessing missile defense performance. While it is clear that progress has been made in terms of implementing new acquisition reviews and reporting detailed baselines, there remain critical gaps in the material reported, particularly the quality of the underlying cost estimates needed to establish baselines. Moreover, GAO still has concerns about realism in test planning and acquisition risks associated with the rapid pace of fielding assets. These risks are particularly evident in MDA’s efforts to develop systems to support a new approach for missile defense in Europe as well as the Ground-based Midcourse Defense system. GAO does not make new recommendations in this testimony but emphasizes the importance of implementing past recommendations, including: (1) Establishing and reporting complete, accurate, reliable cost information. (2) Strengthening test planning and resourcing. (3) Following knowledge-based acquisition practices that ensure sufficient knowledge is attained on requirements, technology maturity, design maturity, production maturity, and costs before moving programs into more complex and costly phases of development. DOD has committed to take action on many of our recommendations.
FDA is responsible for ensuring the safety of food and medical products marketed in the United States. FDA has opened overseas offices to assist it in the oversight of products manufactured overseas. While FDA’s overseas offices have only recently opened, other federal agencies have long-standing overseas offices and, in previous work, we have identified strategic and workforce planning as important in managing these offices. FDA is responsible for ensuring that products marketed in the United States meet the same statutory and regulatory requirements, whether they are produced in the United States or a foreign country. FDA also works with representatives of other countries to reduce the burden of regulation, harmonize regulatory requirements, and achieve appropriate reciprocal arrangements. FDA’s responsibilities for overseeing the safety of imported products are divided among its centers and offices. FDA’s six regulatory centers are each responsible for the regulation of specific types of products. In addition, the Office of Regulatory Affairs (ORA) performs fieldwork, such as inspecting foreign establishments and examining products at the U.S. border, on behalf of all the product centers to promote compliance with FDA requirements and the applicable laws. To enhance FDA’s activities in this regard, the centers and ORA also engage with foreign regulators and industry through a variety of activities, such as conducting and attending training workshops. In addition, each center and ORA has staff dedicated to managing these international activities. Responsibility for leading, managing, and coordinating all of FDA’s international activities and its overseas offices lies with the Office of International Programs (OIP), within the Office of the Commissioner. (See fig. 1.) FDA, including OIP, has historically had staff based only in the United States.
OIP engages with international health and regulatory partners on a variety of issues, including holding bilateral meetings, establishing confidentiality agreements with regulatory counterparts for sharing information on regulated products, and holding meetings to harmonize FDA and international regulatory requirements. FDA developed a proposal to establish the overseas offices in May 2008. The stated mission of the offices is to engage with foreign stakeholders to develop information that FDA officials can use to make better decisions about products manufactured in foreign countries for the U.S. market. FDA stated that establishing relationships with foreign stakeholders and gathering information are important responses to globalization, in part because the agency is not able to inspect all of the foreign establishments that manufacture products for the U.S. market. During this planning, FDA identified several broad categories of activities that would serve as the initial focus for the offices, with the expectation that they would evolve as OIP and the offices gained experience. These activities included (1) establishing relationships with U.S. agencies located overseas and foreign stakeholders, including regulatory counterparts and industry; (2) gathering better information locally on product manufacturing and transport to U.S. ports; (3) improving FDA’s capacity to conduct foreign inspections; and (4) providing assistance to build the capacity of counterpart agencies to better assure the safety of the products manufactured and exported from their countries. As of July 2010, most of the offices had opened overseas, and each of these had posts in multiple locations. (See fig. 2.) However, the Middle East Office staff continued to work in the United States while FDA was in the process of finalizing plans to locate the office overseas. Also, as of July 2010, the Europe Office was planning to open its post in Parma, Italy, by late fall 2010. 
The first FDA staff member was deployed overseas in November 2008, to the China Office, and most staff arrived overseas in the middle of 2009. FDA budgeted $29.9 million for the overseas offices in fiscal year 2009. For fiscal year 2010, the agency increased its budget for the overseas offices by $1 million, bringing the year’s total to $30.9 million. All staffing and administrative costs associated with the offices are included in these budgeted amounts. In addition to the regions covered by the overseas offices, OIP has domestic offices that cover other regions of the world, such as its Africa and Asia Office. FDA officials said it was important to staff the overseas offices with experienced personnel who could represent the agency and speak on its behalf in foreign countries. As of July 2010, FDA had a total of 42 staff assigned to the overseas offices. Of these, FDA had posted 24 staff overseas; planned to assign 1 person to Parma, Italy by fall 2010; had 3 staff for the Middle East Office, who were still working in the United States; and had 14 locally employed staff in India, China, and Latin America, some of whom have technical expertise, while others focus on administrative issues. Each office has a director in that country or region to whom all staff members report. The offices also all have technical experts responsible for engaging with foreign stakeholders and gathering information on food or medical products. In China and India, FDA also placed investigators, who conduct inspections. Like other overseas staff, the investigators are part of OIP for administrative purposes, but decisions regarding which establishments to inspect are made by ORA, which receives input on this from the centers. FDA officials stated that they did not assign investigators to Latin America because U.S.-based investigators can gain access to establishments in that region more quickly than in China or India. OIP staff in the United States also assist the overseas offices. 
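As a quick illustrative check, the budget and staffing figures above can be tallied; this is a sketch for arithmetic only, and every number in it is taken directly from this section:

```python
# Budget: FY 2009 amount plus the FY 2010 increase (figures from the report).
fy2009_budget = 29.9  # millions of dollars budgeted in fiscal year 2009
increase = 1.0        # fiscal year 2010 increase, in millions
fy2010_budget = fy2009_budget + increase  # should total 30.9 million

# Staffing: components of the 42 staff assigned as of July 2010.
posted_overseas = 24      # staff already posted overseas
planned_parma = 1         # planned assignment to Parma, Italy
middle_east_us_based = 3  # Middle East Office staff still in the United States
locally_employed = 14     # locally employed staff in India, China, and Latin America
total_staff = (posted_overseas + planned_parma
               + middle_east_us_based + locally_employed)

print(f"FY 2010 budget: ${fy2010_budget:.1f} million; staff assigned: {total_staff}")
```

The components reconcile with the totals the report states ($30.9 million and 42 staff).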
FDA staff agreed to be posted overseas for an initial 2-year rotation. The Department of Health and Human Services (HHS) requires that its overseas staff commit to rotations of no more than 2 years per tour. However, they have the option to renew up to two times, for a total of 6 years in one country. In addition, HHS requires that staff spend a total of no more than 8 years overseas before returning to the United States for at least 1 year. (See app. II for more information on the development and structure of FDA’s overseas offices.) Strategic planning is utilized by agencies to manage their programs more effectively by clearly establishing goals and objectives and describing how program activities can serve those goals. Strategic planning can help agencies develop strategies to address current and future management challenges. In our prior work, we have identified a variety of leading practices for successful strategic planning. One of these practices is the development of a set of results-oriented performance measures. Agencies use performance measures to help evaluate program performance, demonstrate progress in achieving results, balance competing priorities, and inform decision making. Such results-oriented performance measures should, whenever possible, demonstrate a program’s contributions toward the long-term outcomes, or the results the agency expects a program to achieve. Given FDA’s mission to ensure the safety of food and medical products, we have previously noted that the agency’s long-term outcomes should focus on public health. Agencies can show their interim progress and contributions toward long-term outcomes using short-term and intermediate goals and measures. When long-term outcomes may be influenced by multiple agency programs and external factors, short-term and intermediate measures can also demonstrate a program’s specific contribution to a long-term outcome. 
In addition, workforce planning is utilized by agencies to align their workforce with current and future program needs and develop long-term strategies for recruiting, training, and retaining staff. Approaches to such planning can vary with each agency’s particular needs and mission, but should share certain principles, such as the identification of skills and competencies to fill critical workforce gaps and the strategies needed to recruit staff with those skills and competencies. Workforce planning, in essence, helps agencies think strategically about how to put the right people in the right jobs at the right time. In a February 2010 report, we noted that FDA was not fully utilizing practices for effective strategic and workforce planning. We stated that most of FDA’s established performance measures were not results-oriented, as they did not focus on the actual public health outcomes of FDA’s work. We also stated that FDA’s internal coordination between its centers and offices was one of the major management challenges facing the agency. We recommended, and FDA agreed, that the agency issue an up-to-date strategic workforce plan, make its performance measures more results-oriented, and more clearly align center and office program activities to FDA’s strategic goals. We have also previously reported on the importance of strategic and workforce planning for other federal agencies managing offices overseas. For example, in 2002, the U.S. Customs Service posted officials at foreign ports to screen cargo containers. We noted that long-term program success would require strategic plans that clearly establish the program’s goals and objectives, results-oriented performance measures, and a workforce plan. We have also noted challenges faced by other federal agencies with long-standing offices overseas. In our review of workforce planning by CDC, an agency within HHS that has 270 U.S.
and 1,400 locally employed staff overseas, we noted that the agency has faced difficulties hiring and retaining staff posted overseas due, in part, to a lengthy hiring process and limited opportunities for promotion. In addition, we have previously reported on the Department of State’s challenges staffing qualified personnel to hardship locations overseas, which the agency defines as locations where differential pay incentives are provided to compensate staff for the severity or difficulty of the local conditions. FDA’s overseas offices are establishing relationships with foreign stakeholders and U.S. federal agencies located overseas, gathering information to assist regulatory decision making, conducting establishment inspections, and providing capacity building to foreign stakeholders in an effort to help ensure the safety of imported products. Though FDA officials cite specific benefits associated with the overseas offices, overseas officials report facing a variety of challenges that may limit their ability to enhance agency oversight. One of the primary activities for the newly established offices, after their initial set-up, has been to develop relationships with foreign stakeholders and other U.S. federal agencies located overseas. FDA has identified building relationships as a key step towards better understanding foreign regulatory processes, identifying possible collaborative activities, sharing information, and building capacity. FDA officials said that prior to the opening of the overseas offices, the agency had little knowledge of the regulatory structures in some countries with which the overseas offices interact or lacked points of contact with some of their regulatory counterparts. For example, FDA’s overseas officials said that prior to the opening of the overseas offices it took the agency a month to identify their Chinese regulatory counterparts during the melamine crisis. 
Similarly, prior to the opening of the India Office, FDA had a limited understanding of its regulatory counterparts. For example, it was unclear which Indian regulatory agency is responsible for overseeing food products exported to the United States. FDA’s overseas officials in that office are still working to clarify their regulatory counterparts in certain areas. Both foreign stakeholders and other federal agency officials located overseas report that FDA’s presence overseas is beneficial for relationship building. Some of the foreign stakeholders that we spoke with said that they either had not interacted with FDA prior to the opening of the overseas offices or had only limited contact. Officials from both FDA and its foreign regulatory counterparts told us that having a local FDA presence has enabled them to start building a personal connection and trust that would be hard to develop otherwise. FDA officials said, for example, that being located overseas allows them to attend local conferences and better reach out to industry stakeholders. In comparison, officials from FDA’s Middle East Office and Africa and Asia Office—both FDA offices without overseas locations—said that it is challenging to develop relationships with foreign stakeholders from the United States and on-going, real-time communication is difficult. Some of these officials also told us that they have not been able to develop relationships with foreign stakeholders to the same extent as their colleagues in the overseas offices. FDA overseas officials have also begun collaborating with other federal agencies collocated at overseas embassies, through both formal and informal interactions, such as embassy workgroups on health. Federal agency officials we spoke with said that having FDA located overseas will be important for FDA and helpful for the other federal agencies located overseas. 
For example, some of these officials said that FDA’s overseas presence allows their agencies to spend less time on issues related to FDA- regulated products. Although FDA’s relationship with foreign stakeholders has grown, FDA’s overseas officials have identified continued challenges to forming these relationships. Officials in some of FDA’s overseas offices told us that relationships with foreign regulators are taking longer to develop than FDA originally anticipated. For example, FDA officials in India said that it has been both difficult and time consuming to schedule meetings with their counterparts because Indian regulators must obtain permission from senior levels of their government before participating in meetings with FDA. In comparison, FDA officials said that memorandums of agreement with Chinese regulatory agencies have greatly facilitated relationship building. Though some foreign stakeholders suggested that such agreements could benefit FDA’s relationship with Indian regulators, FDA officials said that these agreements are time consuming to create and they did not yet know if this type of agreement was needed in India. Additionally, officials located in overseas offices that focus on a geographic region, such as Latin America or the Middle East, said they are challenged by the number of different regulators with whom they must establish relationships. In contrast to the single-country focus of the China and India Offices, the Latin America and Middle East Offices cover 37 and 21 countries, respectively. Also, the regulatory structure of some countries can make relationship building difficult. In China and India, regulations are developed at the national level, but are generally enforced at the local level to varying degrees, according to officials from foreign regulatory agencies. Because of this, officials from other U.S. 
agencies located in these countries said that FDA will probably have to establish relationships with multiple layers of government officials. FDA’s overseas officials also face pressure to spend time contributing to trade discussions involving U.S. industries and other U.S. federal agencies located abroad. Industry officials said that it would be helpful for the overseas offices to intervene in situations where they believe a misunderstanding of FDA’s regulations by foreign regulators inhibits trade. For example, an industry official cited one instance where a U.S. product was allowed entry into China only after FDA’s China Office provided documentation to the Chinese government showing equivalency between Chinese and U.S. standards. Industry officials with concerns may also contact federal agencies that promote U.S. products, such as the Department of Commerce, which may then solicit FDA to provide technical assistance to their trade discussions. Federal agency officials said that it is beneficial to have FDA’s overseas staff participate in such discussions because FDA is highly regarded by foreign stakeholders due to its scientific and regulatory expertise. FDA’s overseas officials said that they provide technical expertise, rather than advocate for specific companies, during these discussions. Although the overseas officials said they participate as technical advisors in these discussions to a limited extent—as trade promotion is not directly related to FDA’s mission and it may take time away from the offices’ other activities—they generally acknowledge that participating in such activities is a necessary part of collaborating with other federal agencies located in the embassies overseas. FDA’s overseas officials are also gathering firsthand information about regulated products and sharing it with domestic FDA components with the intention that it will help the agency make better decisions about the regulation of imported products. 
FDA officials report that being located overseas provides the agency with better access to firsthand information about regulated products from local media, other federal agencies, and other sources. For example, an official from the Department of State said that the department routinely provides information on food and medical products to other federal agency officials located in the Beijing embassy, including FDA’s China Office. In contrast, officials in the Middle East Office said that their information-gathering efforts suffer because they are not located overseas. They are limited to reading media available in the United States and do not have easy access to industry or other government agencies located overseas. FDA officials report that the information collected by the overseas offices is something the agency would not have had timely access to prior to the opening of the overseas offices. For example, FDA officials said that the use of melamine in products had been widely known in certain sectors of the Chinese dairy industry prior to the melamine crisis. However, FDA did not learn about its use until after it learned of pets sickened by the ingredient. ORA officials speculate that if the agency had staff stationed locally at the time, they would have known about the information in a timelier manner. FDA officials identified specific cases in which the agency took actions based on information gathered by the overseas offices, although some overseas officials also reported a lack of feedback on the usefulness of this information from the centers and ORA. FDA officials said that much of the information collected by the overseas offices does not necessitate action by the agency, although they said that four import bulletins have been issued based on information gathered from the overseas offices. 
Specifically, between October 2009 and May 2010, FDA issued import bulletins on garlic powder suspected of heavy metal contamination from any country, food products from China suspected of toxic pesticide contamination, food products from India suspected of using water contaminated with pesticides, and flour products from China suspected of being bleached with limestone. In these cases, FDA officials in the United States reported that they would not have known about this information in such a timely manner without being informed by the overseas offices. However, overseas officials submit information to OIP on a weekly basis, but said they often have not received feedback on whether center and ORA officials find the information they gather to be useful or generally did not know who this information was shared with within the agency. FDA’s overseas officials have also been collecting information on foreign regulatory agencies. Officials in some of the overseas offices—such as the India, Latin America, and Middle East Offices—have begun to develop summaries of foreign regulatory agencies and other documents that analyze key regulatory issues. For example, the India Office is comparing regulations from the United States and India, and also developing summaries and points of contact for Indian regulatory agencies. Officials in this office said that they want to use this type of information to better understand the responsibilities and limitations of their local counterparts. However, OIP officials acknowledged a general lack of coordination with the centers and ORA regarding the development of these types of documents. The overseas officials said that they are conducting work that they consider to be valuable to the centers and ORA but do not yet know if this is the case. FDA officials said that they plan to obtain feedback on these documents from centers and offices once they are complete. 
FDA’s overseas investigators have conducted inspections of establishments producing products for the U.S. market since arriving in the overseas offices, although most inspections are still conducted by domestic investigators. The overseas investigators we spoke with estimate that they spend between 30 and 80 percent of their time on inspections. From June 16, 2009, the first date on which investigators in the China or India Office conducted an inspection, through June 10, 2010, FDA’s overseas officials—including seven investigators and a technical expert who can perform inspections—conducted a total of 48 inspections in China and India. In comparison, during that same time period, FDA’s domestic investigators conducted a total of 132 inspections in these two countries. There is variation across product areas in the portion of inspections conducted by overseas officials. For example, the overseas investigators conducted all 13 of the inspections of food establishments in these countries, while domestic investigators conducted 120 of the 144 inspections of drug establishments. (See table 1.) FDA officials said the agency does not have a goal for how many inspections it would like the overseas investigators to conduct, but it would like to see overall increases in the number of inspections conducted in both China and India. However, we found that FDA conducted about 8 percent fewer inspections (a decrease from 196 inspections) in China and India during this time period than during the previous 12-month period—June 16, 2008, through June 15, 2009. FDA officials report that having investigators located overseas allows the agency to conduct more timely inspections with greater flexibility. For example, some of these officials indicated that for domestic-based investigators, visa and other delays can result in an inspection being conducted several months after an establishment is notified of FDA’s intent to conduct an inspection. 
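The "about 8 percent fewer" figure above follows directly from the inspection counts; a minimal arithmetic sketch, using only the counts reported in this section:

```python
# Inspections in China and India, June 16, 2009, through June 10, 2010
# (counts from the report).
overseas_inspections = 48    # conducted by overseas officials
domestic_inspections = 132   # conducted by domestic investigators
current_period = overseas_inspections + domestic_inspections  # 180 total

# Inspections during the previous 12-month period, June 2008 - June 2009.
prior_period = 196

decrease = prior_period - current_period      # 16 fewer inspections
pct_decrease = 100 * decrease / prior_period  # roughly 8 percent

print(f"{current_period} vs. {prior_period}: {pct_decrease:.1f}% fewer")
```

The computed decline (about 8.2 percent) matches the "about 8 percent fewer" characterization in the report.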
For the investigators located overseas, however, inspections may be conducted within weeks of notifying an establishment. In one instance, FDA officials said an investigator in India conducted an inspection of a drug establishment on short notice that was needed as part of the drug approval process. FDA officials said that establishing relationships with foreign regulatory authorities is also intended to help the agency to schedule inspections more quickly in times of crisis and to more quickly identify information about problematic products. The overseas investigators noted that being local gives them the ability to extend the length of an inspection or reschedule an inspection, which they say is difficult for ORA investigators traveling from the United States to do. Overseas investigators also told us that greater flexibility in scheduling and conducting foreign inspections may improve the quality of inspections conducted overseas. In addition to conducting inspections, FDA’s overseas investigators and other staff have been involved in preliminary investigations that may precede establishment inspections. For example, officials from one center said that the China, India, and Latin America Offices have been utilized to contact establishments that were selected by that center for inspection in order to verify certain information, such as their location. Center officials report that the overseas offices staff have been able to more readily obtain responses from the foreign establishments than domestic-based staff and that this activity has helped them improve the quality of the information they have prior to conducting inspections. Overseas officials have suggested that they could also assist other centers in a similar manner by conducting more of these investigations. Furthermore, some FDA officials, including staff stationed overseas, stated that the overseas offices could be better utilized in inspection planning. 
Specifically, these officials stated that the overseas offices could contribute to the process of assisting the centers in selecting establishments for inspection and provide assistance to domestic-based investigators traveling abroad. FDA officials said that the agency’s foreign capacity building efforts are in their early stages and the agency is planning to increase these efforts in the future. Overseas officials stated that their local presence makes it easier to arrange training on FDA regulations for foreign stakeholders and respond to their follow-up questions. Other federal agencies, such as the Foreign Agricultural Service, have partnered with the overseas offices to conduct training and invited FDA overseas officials to present at their events. FDA has indicated that many of the regulatory agencies with which the overseas offices interact are in various levels of development. For example, India recently created a new food regulatory agency and is in the process of developing regulations for the oversight of medical devices, according to India Office officials. FDA officials said that being overseas allows the agency to assist the countries in building their regulatory infrastructures. Officials from the India, China, Latin America, and Middle East Offices have engaged in activities related to helping countries develop their regulatory systems. For example, FDA’s overseas officials said they have been able to provide comments to foreign regulatory counterparts on draft regulations and provide information on the U.S. regulatory system that could inform the development of these foreign systems. The overseas offices have also helped to identify and translate FDA’s policies and regulations into foreign languages, such as Spanish and Chinese. Because FDA did not have an inventory of translated materials, the China, Latin America, and Middle East Offices have been working to identify what documents have already been translated and what needs to be completed. 
FDA officials report that translating agency policies and regulations is valuable and integral to its overseas efforts, but also an expensive and time-consuming process. Some stakeholders also reported that they have begun to translate such documents because they could not wait for FDA to do so. Given the resources needed to translate documents and the fact that many people, both internal and external to FDA, are eager to translate the agency's documents, overseas officials suggest that the agency needs to make strategic decisions about which documents it chooses to translate. Foreign stakeholders report that FDA's efforts related to the translation of FDA's policies and regulations have been beneficial, though it would also be useful for the agency to conduct training in conjunction with the translated documents. FDA's overseas officials have been involved in answering queries from foreign stakeholders about FDA's regulations, and some of these officials anticipate future workload challenges as a result. For example, officials from many of the agency's overseas offices said they have fielded queries from foreign regulators and industry regarding FDA's policies and regulations. These queries are then coordinated with the centers to obtain technical expertise, if needed. Foreign stakeholders told us that FDA's overseas staff are more accessible and approachable than staff located in the United States. Officials from the Latin America Office said that, while this workload is currently manageable, they spend a significant portion of their time responding to such queries and believe it is likely to become unmanageable as more local stakeholders learn about the office and the office's duties expand. FDA officials also said that, if pending food safety legislation is enacted, they expect the overseas offices to receive an overwhelming number of requests for information and training on how the food safety law would impact products imported to the United States. 
Additionally, some center staff have expressed concerns regarding the workload generated by the overseas offices as queries received by the overseas offices are often forwarded to experts within the centers. FDA planning for the overseas offices initially focused on guiding early activities and the agency is now developing a 5-year strategic plan, which it expects to complete by October 2010. FDA has not yet developed a long-term workforce plan to help ensure that it is prepared to address potential recruitment and retention challenges. FDA engaged in strategic planning to guide the initial activities and priorities of the overseas offices. Prior to opening the offices, FDA developed the broad categories of activities for the overseas offices that it considered important for furthering FDA's mission to ensure the safety of imported products. In the summer and fall of 2009, after the offices opened and at the direction of OIP, each office utilized those categories of activities to develop a plan for fiscal year 2010 to describe its initial activities and short-term goals. The offices tailored the plans to reflect circumstances of the country or region in which they are located. For example, the China Office's plan included activities related to implementing FDA's memorandums of agreement with Chinese regulatory agencies. These plans remained in draft form and were not finalized. OIP officials said that these draft plans were primarily intended to help guide the initial activities of each overseas office and that they will lay the groundwork for long-term planning. With the overseas offices now open, FDA has begun to develop a 5-year strategic plan to manage the activities of the offices. Officials said that the draft fiscal year 2010 plans and the initial experiences of the offices are helping to guide the development of a 5-year plan. Officials said that it was necessary for FDA to gain experience overseas before OIP could begin long-term planning. 
Rather than have each overseas office continue to complete its own strategic plan, activities and goals specific to each office will be incorporated into a single OIP-wide strategic plan. Officials stated that, as of July 2010, they were in the process of developing the 5-year plan and anticipated completing the plan by October 2010. As part of its strategic planning, FDA is in the process of identifying a set of short-term and intermediate performance goals and measures that demonstrate overseas office contributions to long-term outcomes, though agency officials said that doing so will be a challenge. To identify goals and measures, officials said that the agency first needs to gather performance information on overseas office activities and to develop an understanding of how overseas office activities can contribute to intended agency outcomes. As part of an agencywide FDA initiative, OIP is currently tracking information on selected overseas office activities, such as the number of inspections conducted by overseas staff. It is also tracking each office's progress toward completing a specific project. For example, it is tracking the Middle East Office's progress in planning a conference on food safety in the region. Officials stated that tracking this type of performance information will help the agency identify performance goals and measures that demonstrate how overseas office activities contribute to agency strategic goals. However, officials said that developing performance goals and measures for the overseas offices will be a challenge due to the difficulty in directly attributing contributions to long-term outcomes specifically to the activities of the offices, as they feed into the work of the centers and ORA. In addition, they said that many benefits of the offices, such as improved relationships with regulatory counterparts, will be difficult to quantify. 
Agency officials said that this challenge is not confined to the overseas offices, as FDA as a whole has encountered challenges identifying goals and measures that capture its performance. According to officials, while OIP intends to include a set of short-term and intermediate goals and measures in the 5-year plan, these will be considered developmental and may change. Officials indicated that the establishment of baselines and targets for those measures will take additional time, and the time line for achieving meaningful targets may extend well beyond 5 years. OIP officials have identified the coordination of overseas office activities with the centers and ORA as a management challenge, and OIP is taking steps during strategic planning to align the overseas offices with the rest of the agency. The lack of coordination regarding the overseas offices' development of documents on foreign product regulation and involvement in inspection planning highlights this challenge. To help coordinate overseas office and domestic FDA activities, OIP hosted a retreat in December 2009 with senior staff from the centers, ORA, and the Office of the Commissioner to discuss OIP's planning for the overseas offices. After the retreat, OIP officials held follow-up meetings with center and ORA officials to obtain more information on how the offices can meet the needs of the centers and ORA. OIP officials told us that another retreat to discuss long-term planning is scheduled for October 2010. Furthermore, there are also agencywide international workgroups that provide forums to discuss FDA's international activities. OIP hosts one such group, in which OIP officials and officials who represent internationally focused programs within the centers and ORA meet monthly to discuss FDA's international activities, including the activities of overseas offices. Monthly meetings have also been initiated between certain centers and overseas offices. 
For example, because of the volume of pharmaceuticals manufactured in India, the India Office holds monthly teleconferences with officials from the Center for Drug Evaluation and Research to help coordinate their activities. Given the diversity of agency components involved in overseeing imported products, it is important that the activities of the overseas offices are effectively coordinated with those of the rest of the agency. FDA has not yet developed a long-term workforce plan to ensure that future overseas office staffing needs are met. As of June 2010, FDA workforce planning for the overseas offices had focused on addressing short-term staffing issues to prepare for upcoming vacancies. Given the 2-year rotations required by HHS, the first group of staff who went overseas in 2009 will have the option of returning to the United States in 2011. FDA has established procedures for overseas staff to renew their rotations and a preliminary time line for staffing possible upcoming vacancies. According to FDA’s time line, staff will make their renewal decisions 9 months before the end of their 2-year rotation. The majority of staff arrived overseas in the middle of 2009 and they will therefore make their decisions around fall 2010. While FDA officials told us they expect most staff will renew their rotations, they expect some staff will return home and FDA will need to fill those positions. FDA officials acknowledge the value of a workforce plan, but do not expect to develop one until the offices have been open longer. They said a workforce plan will be more important after the agency assesses turnover from the first 2-year rotation. FDA has already experienced challenges staffing some locations, and recruitment and retention issues associated with FDA’s overseas offices necessitate advance workforce planning. 
Although FDA officials state that the agency was generally able to recruit staff with the desired level of experience for most overseas office locations, the agency has already encountered staffing difficulties. As of July 2010, it had one vacancy in Mexico and four in India. Domestic FDA staff have been sent to Mexico on temporary assignments to address a staffing gap in that office, and FDA plans to fill staffing gaps in the India Office through temporary assignments of less than 60 days. Staffing the FDA offices in China, India, and Mexico may be particularly challenging as the posts are located in cities classified by the Department of State as hardship posts. Furthermore, FDA officials have expressed interest in expanding the number of overseas offices, such as the addition of new locations in Africa, Brazil, and Canada. While FDA has no finalized plans for this expansion, additional locations would necessitate recruiting additional overseas staff. Advance planning is needed as the staffing process for overseas positions is lengthy due to several factors, such as obtaining necessary security and medical clearances to work overseas. FDA officials estimate that the process for recruiting and posting future overseas staff members will take 9 months. FDA staff and staff from other federal agencies with overseas staff have identified potential challenges associated with staffing overseas offices that could impact recruitment and retention. One potential challenge for FDA is to ensure the effective reintegration of returning staff into domestic positions. Although all returning staff are guaranteed a position at FDA, they are not guaranteed their former position. Some overseas FDA staff with whom we spoke questioned whether their posting overseas will serve as a career-enhancing opportunity and expressed uncertainty regarding their ability to obtain a desired position within FDA upon their return. 
Similarly, CDC staff we talked with stated that CDC has also faced these types of reintegration challenges for its overseas staff and told us that uncertainty about career implications can negatively affect recruitment for overseas positions. FDA officials indicated that they are in the process of establishing a mechanism for returning staff to be selected for appropriate positions. Recruiting overseas office staff with language skills, which have been cited as an advantage in forming relationships, may be a challenge in the future. Although not a requirement for FDA's overseas staff, all staff members in the Latin America Office and some members of the China, India, and Middle East Offices have local language skills. In the case of the Latin America Office, OIP identified fluency in Spanish as desirable, deeming language proficiency important for establishing relationships with government and industry officials in the region. Some China Office staff, along with other federal agency officials located in China, similarly stated that the ability to hold basic conversations in Mandarin is important for establishing diplomatic relationships with Chinese government officials. Although officials told us that government and industry officials in India generally speak English, FDA investigators in India stated that being able to speak Hindi, or other local languages, can help in conducting inspections. In addition to its professional advantages, FDA and other officials said that language skills can benefit staff morale by improving their overseas living experience. However, maintaining or expanding the portion of staff with language skills would limit the pool of available candidates for staff positions overseas. FDA also faces challenges that could affect recruitment and retention that result from certain HHS policies. 
For example, FDA and other HHS staff posted overseas do not receive locality pay, though staff at certain locations may receive hardship pay, a cost of living adjustment, and other benefits. Some staff experience an overall decrease in pay when they move overseas. For example, four staff in the Latin America Office experienced an average decrease of about $8,000 due to the loss of locality pay. In addition, FDA staff near retirement age may be especially averse to accepting or renewing overseas positions because the lack of locality pay can affect retirement compensation and other overseas salary adjustments are not included in retirement calculations. This could pose particular recruitment problems given FDA's intent to staff the offices with experienced FDA personnel. In addition, the HHS policy of 2-year staff rotations places a premium on staff retention and on staff being trained and prepared when they arrive overseas. FDA and other officials generally estimated that it takes from 6 months to a year for incoming overseas office staff to adapt to, and become effective in, their new positions. Moreover, staff may need time to establish functional working relationships with regulatory counterparts and industry officials. Other federal agencies, such as the Foreign Agricultural Service, have cited benefits from maintaining minimum overseas posting commitments of 3 and 4 years, such as increased staff effectiveness and reduced relocation costs. FDA and CDC officials have each noted that minimum posting commitments in excess of 2 years could negatively affect recruitment for overseas positions and that a 2-year commitment provides the agency with flexibility if an employee has performance problems. However, the effectiveness of the overseas offices could be adversely affected if too many staff leave after their first 2-year rotation. In addition to U.S. nationals, FDA also has to ensure it is able to recruit and retain locally employed staff in its overseas offices. 
As of July 2010, FDA had two overseas office vacancies for locally employed staff. FDA overseas staff told us that such staff provide valuable contributions toward the activities of the offices. For example, they have helped overseas staff better understand local regulations, connected staff to in-country stakeholders, and, in the China Office, provided translation services. In addition, locally employed staff are not limited in their length of service and can remain in their positions for an extended period of time. Therefore, officials from other federal agencies we talked with cited the important role that such staff play in providing continuity to overseas offices as U.S. national staff return home. However, federal officials also told us that locally employed staff are difficult to retain, often because staff have skills and expertise that are in demand. CDC, which is also an agency within HHS, has long-standing overseas offices and has engaged in workforce planning to address these types of staffing challenges. In 2007, CDC established a strategic workforce plan to help recruit staff for international positions, developed a program that trains staff for international work and temporarily assigns them overseas, and instituted an initiative to help returning staff receive consideration for domestic positions upon their return. CDC reports that the training program has helped with staff interest and preparation for overseas postings. FDA does not have as many staff overseas as agencies such as CDC, nor does it have a history of overseas placements. The small number of staff at each FDA overseas office means that staffing gaps can leave specific mission areas unaddressed. For example, an office may have one staff member dedicated to pharmaceutical products, so a vacancy in that position could create a gap in that specific product area. 
The opening of FDA’s overseas offices represents a significant change for the agency as it attempts to respond to the needs of globalization. Although it is still early and the impact of the overseas offices on the safety of imported products is not yet clear, overseas FDA staff, domestic FDA staff, and foreign stakeholders have pointed to several immediate benefits. The offices have initially focused their efforts on cultivating relationships with foreign stakeholders, and they plan to continue working to strengthen FDA’s efforts in building foreign capacity and gathering information about regulated products. The offices have also been used to inspect facilities that are exporting food and medical products to the United States, although their impact on the agency’s overall number of foreign inspections may be minimal as most inspections are still conducted by domestic staff. As we previously recommended for the agency overall, strategic planning will be important to ensure the overseas offices are able to effectively execute their mission. The agency’s efforts to begin long-term strategic planning and identify initial goals and measures are a positive first step. The variety of potential activities that the overseas offices could perform and an already mounting workload make it necessary that FDA continue to engage in strategic planning to identify those activities most important to ensuring the safety of imported products. While identifying goals and measures that demonstrate overseas office contributions to long-term outcomes will be a challenge, continuing such planning will be critical for FDA to assess the extent to which the overseas offices are helping to ensure the safety of imported products. Given the variety of other FDA centers and offices that have responsibilities related to imported products, it will also be important that planning efforts ensure the activities of the overseas offices are effectively integrated with the centers and offices. 
FDA reported that it has generally been successful in hiring qualified staff for the overseas offices, but without a comprehensive workforce plan, the agency has little assurance that it will be equipped to address future staffing challenges. Overseas assignments are new to the agency, and staffing gaps overseas could leave specific mission areas, such as food or medical devices, unaddressed. As current staff rotate out of their positions and return to the United States, such a plan could help ensure that a well-qualified pool of applicants, who possess diplomatic and, in some cases, language skills, is on hand to replace them. This planning could also help make sure that FDA is able to attract and retain a talented pool of locally employed staff who can provide continuity to the operation of the offices. A strategic approach to workforce planning could also help FDA develop a strategy to reintegrate returning overseas staff into the agency's domestic operations. Such efforts could encourage overseas staff to extend their 2-year commitment and alleviate concerns about what will happen to their careers once they complete their tour of duty at overseas posts. Without a comprehensive, strategic approach to workforce planning, there is little assurance that FDA will be able to place the right people in the right positions at the right time. To help ensure that FDA's overseas offices are able to fully meet their mission of helping to ensure the safety of imported products, we recommend that the Commissioner of FDA take the following two actions: Ensure, as it completes its strategic planning process for the overseas offices, that it develops a set of performance goals and measures that can be used to demonstrate overseas office contributions to long-term outcomes related to the regulation of imported products and that overseas office activities are coordinated with the centers and ORA. 
Develop a strategic workforce plan for the overseas offices to help ensure that the agency is able to recruit and retain staff with the experience and skills necessary for the overseas offices and to reintegrate returning overseas staff into FDA’s domestic operations. We provided a draft of this report to HHS for review, and HHS provided written comments, which are reprinted in appendix III. In its comments, HHS noted that FDA concurred with our recommendations and stated that they would help strengthen the agency’s efforts. HHS said that FDA has already begun a long-term strategic planning process. HHS also indicated that, now that FDA’s overseas offices are staffed and functioning, the agency will begin a workforce planning process. In addition, HHS emphasized that FDA’s collaboration with its foreign regulatory counterparts and other stakeholders has become critical to the agency’s ability to fulfill its mission of overseeing the safety of food and medical products. HHS stressed that strong, well-developed, and well-maintained in-country relationships are required to accomplish this mission and pointed out that the establishment of the overseas offices is one major way in which FDA can strengthen its relationships and better coordinate with foreign stakeholders. HHS’s comments also cited FDA’s accomplishments since the overseas offices have opened and highlighted several challenges that the agency faces as it moves forward, including how to best focus overseas office interactions with regulatory counterparts, share information gathered by the overseas offices with the rest of the agency, and manage its overseas investigators. HHS also provided us with technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees, the Commissioner of the Food and Drug Administration, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Marcia Crosse at (202) 512-7114 or [email protected] or Lisa Shames at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Table 2 lists the federal agencies and other stakeholders we met with during site visits to Beijing, Guangzhou, and Shanghai, China; San Jose, Costa Rica; and New Delhi and Mumbai, India. This appendix describes the purpose, planning and development, and locations and staffing of the Food and Drug Administration’s (FDA) overseas offices. The posting of staff in FDA’s overseas locations is a key part of the agency’s strategy for expanding its oversight of imported food and medical products. Products regulated by FDA are manufactured in countries throughout the world, although there is significant variation in the types of products coming from different regions. From fiscal year 1998 to fiscal year 2008, the volume of FDA-regulated imported products more than tripled from less than 5 million import entry lines to more than 17 million import entry lines. These imported products arrive from about 200 countries. According to the U.S. Department of Agriculture’s (USDA) Economic Research Service, the growing presence of imported foods reflects various trends: seasonal demands for produce from warm-weather regions; rising consumer demand for ethnic food, beverages, and spices; integration of nontraditional regions into global supply chains; and falling agricultural trade barriers. Based on USDA data, imported food comprises 15 percent of the U.S. 
food supply, including 60 percent of fresh fruits and vegetables and 80 percent of seafood. Likewise, the pharmaceutical industry has increasingly relied on global supply chains in which each manufacturing step may be outsourced to foreign establishments. According to FDA, the number of drug products manufactured at foreign establishments has more than doubled since 2002, with China and India accounting for the greatest shares of this growth. FDA has acknowledged that globalization has fundamentally changed the environment for regulating food and medical products and created unique regulatory challenges for the agency. The increasing number of foreign establishments precludes FDA from inspecting them all to ensure the safety of all imported products. We have previously recommended that FDA conduct more inspections of foreign drug establishments, and FDA has agreed that it should do so. However, agency officials have emphasized that a variety of strategies are necessary to ensure the safety of imported products and that conducting inspections is one of multiple approaches it is taking. In establishing the overseas offices, FDA recognized that gathering information to make decisions and building the technical capacity of foreign counterparts, with the goal of improving their regulatory systems, were especially important for ensuring the safety of imported products. Initial planning and development of the overseas offices was led by the Office of the Commissioner, in consultation with internal and external federal stakeholders. An FDA official involved in early planning said that the Office of International Programs (OIP) collaborated with center and ORA officials in various ways, such as through senior leadership meetings and one-on-one meetings. In addition, OIP held regular FDA-wide teleconferences to update staff on the progress and activities of the overseas offices. 
FDA officials said their planning was aided by advice they sought from certain federal departments and agencies with staff located overseas. For example, FDA obtained advice on budgeting from the Centers for Disease Control and Prevention (CDC) and USDA’s Foreign Agricultural Service. In addition, CDC and the Department of State were both helpful in walking FDA through the process of establishing the overseas offices. FDA initially identified several broad categories of activities in which the overseas offices would engage. FDA officials indicated that these activities would serve as the initial focus for the offices and could be refined as the agency gains experience overseas, although they have not yet changed substantially. These activities included (1) establishing relationships with U.S. agencies located overseas and foreign stakeholders, including regulatory counterparts and industry; (2) gathering better information locally on product manufacturing and transport to U.S. ports; (3) improving FDA’s capacity to conduct foreign inspections; and (4) providing assistance to help build the capacity of counterpart agencies to better assure the safety of the products manufactured and exported from their countries. FDA described how these activities are intended to enhance the decisions the agency makes about imported products: Establishing relationships with foreign stakeholders, including regulatory counterparts and industry, is intended to help them better understand U.S. regulatory requirements and help FDA better understand the regulatory and business practices of other countries, with the ultimate goal of improving the quality of imported products manufactured in these countries. Collaborating with other federal agencies, such as CDC, that are located overseas and have complementary missions allows the agencies to coordinate activities and share information on product quality and safety issues. 
Routinely gathering information on a wide variety of potential factors, such as weather events and key changes among foreign regulatory counterpart agencies, is intended to help FDA identify potential problems and respond more quickly to developing problems. Improving its capacity to conduct foreign inspections will enable FDA to more rapidly obtain information regarding whether foreign establishments comply with FDA requirements. Providing capacity building to foreign stakeholders and leveraging the resources of other international organizations is intended to build the technical capacity of foreign counterparts, with the goal of improving the regulatory systems in these countries to ensure the safety of products exported to the United States. FDA's selection of locations for the overseas offices was influenced by characteristics relevant to product regulation. Most overseas offices are in regions or countries that export a significant percentage of the total volume of products to the United States. The Middle East was identified as a region from which the volume of imported products is expected to rise. The goals of the Middle East Office are to increase knowledge about the region by working with FDA's counterpart agencies and to identify opportunities for capacity building. The offices are also generally located in regions that FDA described as having regulatory systems less mature than FDA's. The agency indicated that it intended to work with regulators in these regions to help strengthen their capacity. For example, FDA officials indicated that the Indian government is in the process of making significant changes to its food and drug regulatory systems and has specifically requested FDA's help with the implementation of its new system. FDA also sought to work with local industry in these locations to ensure that products that are manufactured or processed in this region and exported to the United States meet U.S. standards of quality and safety. 
In contrast, FDA selected Europe to provide an opportunity for the agency to further partner with a mature regulatory system. The Europe Office therefore has staff in the U.S. mission to the European Union in Brussels, Belgium, to engage with the European Commission, as well as staff members embedded within the European Medicines Agency and the European Food Safety Authority, so that FDA can further leverage its preexisting relationships with those regulators. Other issues also factored into FDA's selection of certain locations. For example, FDA also selected China and Latin America because they were the source of recent problematic products, such as contaminated heparin and produce. FDA officials stressed the importance of staffing the overseas offices with experienced personnel who would be equipped to represent the agency and speak on its behalf in the foreign country. The number and type of staff assigned to each office varies, depending on the office priorities and the kinds of products, such as food, drugs, or medical devices, most commonly imported from the country or region. Each office has a director, to whom all staff members in that country or region report, and technical experts, who are responsible for engaging with foreign stakeholders and gathering information in the area of food, medical products, or both. FDA placed investigators in Mumbai, India, and in Guangzhou and Shanghai, China, in part to ensure that investigators could reach establishments more quickly when necessary. According to FDA, having investigators in these countries will also allow the agency to more rapidly inspect manufacturing and processing facilities that are producing goods destined for the United States. FDA elected not to position investigators in Latin America because it determined that U.S.-based investigators are able to more easily travel and gain access to establishments in that region. 
Overseas office investigators perform inspections and also engage in information gathering and capacity building. The India, China, and Latin America Offices have hired locally employed staff with technical expertise regarding regulated products as well as locally employed staff to perform administrative functions. Table 3 shows the number of FDA staff currently working in overseas locations as of July 2010. Some of the overseas offices were not fully staffed at the time that they opened. Table 4 shows the dates that the FDA staff arrived at each of the overseas locations. As of July 2010, the Latin America Office and the India Office had two vacancies each for technical experts, and the India Office also had two food investigator vacancies. Two of the offices also had vacancies for locally employed staff. In addition, the Middle East Office had one vacancy for a medical product investigator and three vacancies for locally employed staff that the agency plans to fill once the office is located overseas. In addition to the contact name above, Jose Alfredo Gómez, Assistant Director; Geraldine Redican-Bigott, Assistant Director; Kevin Bray; Michael Erhardt; William Hadley; Cathleen Hamann; Rebecca Hendrickson; Julian Klazkin; Deborah Ortega; and Michael Rose made key contributions to this report. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 31, 2007. Drug Safety: FDA Has Conducted More Foreign Inspections and Begun to Improve its Information on Foreign Establishments, but More Progress is Needed. GAO-10-961. Washington, D.C.: September 30, 2010. Food Safety: FDA Could Strengthen Oversight of Imported Food by Improving Enforcement and Seeking Additional Authorities. GAO-10-699T. Washington, D.C.: May 6, 2010. Food and Drug Administration: Opportunities Exist to Better Address Management Challenges. GAO-10-279. Washington, D.C.: February 19, 2010. 
Food Safety: Agencies Need to Address Gaps in Enforcement and Collaboration to Enhance Safety of Imported Food. GAO-09-873. Washington, D.C.: September 15, 2009. Food and Drug Administration: FDA Faces Challenges Meeting Its Growing Medical Product Responsibilities and Should Develop Complete Estimates of Its Resource Needs. GAO-09-581. Washington, D.C.: June 19, 2009. Seafood Fraud: FDA Program Changes and Better Collaboration among Key Federal Agencies Could Improve Detection and Prevention. GAO-09-258. Washington, D.C.: February 19, 2009. Drug Safety: Better Data Management and More Inspections Are Needed to Strengthen FDA’s Foreign Drug Inspection Program. GAO-08-970. Washington, D.C.: September 22, 2008. Food Safety: Selected Countries’ Systems Can Offer Insights into Ensuring Import Safety and Responding to Foodborne Illness. GAO-08-794. Washington, D.C.: June 10, 2008. Centers for Disease Control and Prevention: Human Capital Planning Has Improved, but Strategic View of Contractor Workforce Is Needed. GAO-08-582. Washington, D.C.: May 28, 2008. Medical Devices: Challenges for FDA in Conducting Manufacturer Inspections. GAO-08-428T. Washington, D.C.: January 29, 2008. Federal Oversight of Food Safety: FDA’s Food Protection Plan Proposes Positive First Steps, but Capacity to Carry Them Out Is Critical. GAO-08-435T. Washington, D.C.: January 29, 2008. Oversight of Food Safety Activities: Federal Agencies Should Pursue Opportunities to Reduce Overlap and Better Leverage Resources. GAO-05-213. Washington, D.C.: March 30, 2005. Food Safety: FDA’s Imported Seafood Safety Program Shows Some Progress, but Further Improvements Are Needed. GAO-04-246. Washington, D.C.: January 30, 2004. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. Intellectual Property: Enhanced Planning by U.S. Personnel Overseas Could Strengthen Efforts. GAO-09-863. 
Washington, D.C.: September 30, 2009. Department of State: Additional Steps Needed to Address Continuing Staffing and Experience Gaps at Hardship Posts. GAO-09-874. Washington, D.C.: September 17, 2009. Human Capital: Sustained Attention to Strategic Human Capital Management Needed. GAO-09-632T. Washington, D.C.: April 22, 2009. Centers for Disease Control and Prevention: Human Capital Planning Has Improved, but Strategic View of Contractor Workforce Is Needed. GAO-08-582. Washington, D.C.: May 28, 2008. Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 2004. Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003. A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002. Agency Performance Plans: Examples of Practices That Can Improve Usefulness to Decisionmakers. GAO/GGD/AIMD-99-69. Washington, D.C.: February 26, 1999. Managing for Results: Critical Issues for Improving Federal Agencies' Strategic Plans. GAO/GGD-97-180. Washington, D.C.: September 16, 1997. Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1, 1996.

An increasing volume of food and medical products marketed in the United States is produced in foreign countries. This globalization has challenged the Food and Drug Administration (FDA), which is responsible for ensuring the safety of these products. In late 2008 and early 2009, FDA established overseas offices, with 42 total staff, covering particular countries or regions--China, Europe, India, Latin America, and the Middle East. The offices are to engage with foreign stakeholders to develop information that FDA officials can use to make better decisions about products manufactured in foreign countries, among other activities. 
GAO examined (1) the steps overseas offices have taken to help ensure the safety of imported products and (2) the extent to which FDA has engaged in long-term strategic and workforce planning for the overseas offices. GAO reviewed documentation of overseas office activities and planning. GAO also visited offices in China, India, and Latin America to interview FDA officials, officials from other U.S. agencies overseas, and foreign regulators and other stakeholders. FDA's overseas offices have engaged in a variety of activities to help ensure the safety of imported products, but officials report challenges that could limit their effectiveness due to an increasing workload and other factors. A primary activity for the offices has been establishing relationships with foreign stakeholders (such as foreign regulators and industry) and U.S. agencies overseas. FDA officials and foreign stakeholders said they had limited contact prior to the opening of the offices, and each noted that the overseas offices are beneficial for relationship building, although relationship building can be time consuming. FDA overseas officials have also gathered information about regulated products and shared it with U.S. officials to assist with decision making. Although FDA has used some of this information to take regulatory actions, some FDA overseas officials told us that they lack feedback regarding the utility of much of the information that they submit to the agency. FDA's offices in China and India include investigators who inspect foreign establishments. In these two countries, as of June 2010, the overseas investigators had conducted 48 inspections since they were posted overseas. The FDA overseas officials have also started to provide training, responses to queries, and other assistance to foreign stakeholders to help them improve their regulatory systems and better understand FDA regulations. 
These officials said, however, that an increasing interest in this type of assistance from foreign stakeholders, while important, could lead to an unmanageable workload. Although FDA staff and others have pointed to several immediate benefits of the offices, it is early, and their impact on the safety of imported products is not yet clear. FDA is in the process of long-term strategic planning for the overseas offices and has not developed a long-term workforce plan. FDA expects to complete a 5-year strategic plan to manage office activities by October 2010. Officials said that they intend to include performance goals and measures for the offices in the strategic plan, but that it will be difficult to quantify office contributions toward long-term outcomes. Also, coordination of the overseas offices with other parts of FDA has been a challenge, and strategic planning efforts can help ensure this coordination. FDA has not yet developed a long-term workforce plan to help ensure that it is prepared to address potential overseas office staffing challenges. Overseas staff agree to 2-year rotations, and workforce planning has focused on preparing to fill any 2011 vacancies. FDA has experienced challenges staffing some office locations, and officials from FDA and other agencies with overseas staff have identified potential recruitment and retention challenges that could affect FDA's mission. They said that recruiting staff with language skills and reintegrating returning staff into domestic operations may be difficult. Certain FDA staff experienced a reduction in their pay when they went overseas. Workforce planning could help FDA prepare for potential staffing challenges. GAO recommends that the Commissioner of FDA take steps to enhance strategic planning to ensure coordination between overseas and domestic activities and develop a workforce plan to help recruit and retain overseas staff. FDA agreed with GAO's recommendations.
The livestock and poultry industry is vital to our nation's economy, supplying meat, milk, eggs, and other animal products; however, the past several decades have seen substantial changes in America's animal production industries. As a result of domestic and export market forces, technological changes, and industry adaptations, food animal production that was integrated with crop production has given way to fewer, larger farms that raise animals in confined situations. These large-scale animal production facilities are generally referred to as animal feeding operations. Concentrated animal feeding operations (CAFOs) are a subset of animal feeding operations and generally operate on a larger scale. While CAFOs may have improved the efficiency of the animal production industry, their increased size and the large amounts of manure they generate have resulted in concerns about the management of animal waste and the potential impacts this waste can have on environmental quality and public health. Animal manure can be, and frequently is, used beneficially on farms to fertilize crops and to restore nutrients to soil. However, if improperly managed, manure and wastewater from animal feeding operations can adversely impact water quality through surface runoff and erosion, direct discharges to surface water, spills and other dry-weather discharges, and leaching into the soil and groundwater. Excess nutrients in water can result in or contribute to low levels of oxygen in the water and toxic algae blooms, which can be harmful to aquatic life. Improperly managed manure can also result in emissions to the air of particles and gases, such as ammonia, hydrogen sulfide, and volatile organic compounds, which may also result in a number of potentially harmful environmental and human health effects. 
Most agricultural activities are considered to be nonpoint sources of pollution because the pollution from these activities occurs in conjunction with soil erosion caused by water and with surface runoff of rainfall or snowmelt from diffuse areas such as farms and rangeland. However, section 502(14) of the Clean Water Act specifically defines point sources of pollution to include CAFOs, which means that under the act, CAFOs that discharge into federally regulated waters are required to obtain a federal permit called a National Pollutant Discharge Elimination System (NPDES) permit. These permits generally allow a point source to discharge specified pollutants into federally regulated waters under specific limits and conditions. These permits are issued by EPA or a state agency authorized by EPA to implement the NPDES program for that state. Currently, 45 states are authorized to administer the NPDES permit program, and their programs must be at least as stringent as the federal program. In 1976, in accordance with the Clean Water Act's designation of CAFOs as point sources, EPA defined which poultry and livestock facilities constituted a CAFO and established permitting regulations for CAFOs. According to EPA regulations issued in 1976, to be considered a CAFO a facility must first be considered an animal feeding operation. Animal feeding operations are agricultural operations where the following conditions are met: animals are fed or maintained in a confined situation for a total of 45 days or more in any 12-month period, and crops, vegetation, forage growth, or post-harvest residues are not sustained during normal growing seasons over any portion of the lot. If an animal feeding operation met EPA's criteria and either met or exceeded minimum size thresholds based on the type of animals being raised, EPA considered the operation to be a CAFO. 
For example, an animal feeding operation would be considered a CAFO if it raised 1,000 or more beef cattle, 2,500 pigs weighing more than 55 pounds, or 125,000 chickens. In addition, EPA could designate an animal feeding operation of any size as a CAFO under certain circumstances. For example, if an animal feeding operation was a significant contributor of pollutants to federally regulated water, EPA could designate the operation as a CAFO. Appendix II lists the full text of EPA's current CAFO definition, including the size thresholds established for small, medium, and large CAFOs. Under EPA's 1976 CAFO regulations, certain animal feeding operations did not require permits. These included (1) animal feeding operations that only discharged during a 25-year, 24-hour storm event—the amount of rainfall during a 24-hour period that occurs on average once every 25 years or more—and (2) chicken operations that use dry manure-handling systems—systems that do not use water to handle their waste. In addition, EPA generally did not regulate animal waste that was applied to cropland or pastureland. In January 2003, we reported that although EPA believed that many animal feeding operations degrade water quality, it had placed little emphasis on its permit program and that exemptions in its regulations allowed as many as 60 percent of the largest operations to avoid obtaining permits. In its response to our 2003 report, EPA acknowledged that the CAFO program was hampered by outdated regulations and incomplete attention by EPA and the states. EPA pointed out that it had revised its permitting regulations for CAFOs to eliminate the exemptions that allowed most animal feeding operations to avoid regulation. 
The revisions, issued in February 2003 and known as the 2003 CAFO rule, resulted, in part, from the settlement of a 1989 lawsuit by the Natural Resources Defense Council and Public Citizen, in which these groups alleged that EPA had failed to comply with the Clean Water Act. EPA's 2003 CAFO rule included the following key provisions: Duty to apply. All CAFOs were required to apply for an NPDES permit unless the permitting authority determined that the CAFO had no potential to discharge to federally regulated waters. Expanded CAFO definitions to include all poultry operations and stand-alone operations raising immature animals. The previous rule had applied only to poultry operations that used a liquid manure-handling system. The 2003 rule expanded the CAFO definition to all types of poultry operations, and EPA officials estimated that this revision could result in almost 2,200 additional poultry operations requiring a permit. More stringent design standard for new facilities in the swine, poultry, and veal categories. Under the previous rule, facilities were to be designed, constructed, and operated to contain runoff from a 25-year, 24-hour rainfall event; this continues to be the rule for existing facilities. For new facilities, the 2003 rule established a no-discharge standard that can be met if the facilities are designed, constructed, and operated to contain the runoff from a 100-year, 24-hour storm event. Best management practices. Operations would be required to implement best management practices for applying manure to cropland and for animal production areas. The rule required, among other things, specified setbacks from streams, vegetated buffers, and depth markers in lagoons and other impoundments for production areas to prevent or reduce pollution from the operation. Nutrient management plans. 
CAFO operations would be required to develop a plan for managing the nutrient content of animal manure as well as the wastewater resulting from CAFO operations, such as water used to flush manure from barns. Compliance schedule. The 2003 rule required newly defined CAFOs to apply for permits by April 2006 and existing CAFOs to develop and implement nutrient management plans by December 31, 2006. According to EPA officials, the 2003 rule was expected to ultimately lead to better water quality because the revised regulations would extend coverage to more animal feeding operations that could potentially discharge and contaminate water bodies and subject these operations to periodic inspections. Three laws provide EPA with certain authorities related to air emissions from animal feeding operations: the Clean Air Act, the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), and the Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA). Although these laws provide EPA with authority related to air emissions from various sources, they do not expressly identify animal feeding operations as a regulated entity. Specifically: The Clean Air Act authorizes EPA to regulate stationary and mobile sources of air pollution and emphasizes controlling sources that emit more than threshold quantities of regulated pollutants. Livestock producers and other agricultural sources whose emissions meet or exceed specific statutory or regulatory thresholds are therefore subject to Clean Air Act requirements. Although EPA has authorized states and local governments to carry out certain portions of the act, EPA retains concurrent enforcement authority. 
Taken together, CERCLA and EPCRA require owners or operators of a facility to report to federal or state authorities the release of hazardous substances that meet or exceed their reportable quantities so as to alert federal, state, and local agencies, as well as the public, to the release of these substances. Section 103 of CERCLA requires that the person in charge of a facility notify the National Response Center of any non-permitted release of “hazardous substances” in a reportable quantity as soon as he or she has knowledge of that release. Section 304 of EPCRA requires that the owner or operator of a facility at which a hazardous chemical is produced, used, or stored give immediate notice of a release of any “extremely hazardous substance” to the community emergency coordinator. Among the reportable substances that could be released by livestock facilities are hydrogen sulfide and ammonia. The reportable quantity for each of these hazardous substances is 100 pounds in a 24-hour period. Under these acts, EPA can assess civil penalties for failure to report releases of hazardous substances or extremely hazardous substances that equal or exceed their reportable quantities—up to $32,500 per day or $32,500 per violation for first-time offenders. EPA is also working with USDA to address the impacts of animal feeding operations on air and water quality and public health. In 1998, EPA entered into a memorandum of understanding with USDA that calls for the agencies to coordinate on air quality issues relating to agriculture and share information. In addition, in 1999, the two agencies issued a unified national strategy aimed at having the owners and operators of animal feeding operations take actions to minimize water pollution from confinement facilities and land application of manure, and in 2001 adopted an agreement to develop a process for working together constructively. 
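As a rough, hypothetical sketch (the function and data structure below are ours, not part of either statute), the reporting rule described above reduces to a simple comparison of a 24-hour release against the substance's reportable quantity:

```python
# Illustrative sketch of the CERCLA/EPCRA reporting threshold described in
# the text: a non-permitted release must be reported when it equals or
# exceeds the reportable quantity within a 24-hour period. Only the two
# substances named in the text are included here.
REPORTABLE_QUANTITY_LBS = {"hydrogen sulfide": 100, "ammonia": 100}

def release_is_reportable(substance, pounds_in_24h):
    """Return True if a 24-hour release meets or exceeds the reportable quantity."""
    threshold = REPORTABLE_QUANTITY_LBS.get(substance)
    return threshold is not None and pounds_in_24h >= threshold

print(release_is_reportable("ammonia", 99))   # False -- below the 100-pound threshold
print(release_is_reportable("ammonia", 100))  # True -- meets the threshold
```

Note that the actual regulations cover many more substances and conditions than this two-entry table; the sketch shows only the equal-or-exceed comparison the text describes.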
To help minimize water pollution from animal feeding operations and meet EPA's regulatory requirements, USDA, through its Natural Resources Conservation Service, provides financial and technical assistance to CAFO operators in developing and implementing nutrient management plans. Because no federal agency collects accurate and consistent data on the number, size, and location of CAFOs nationwide, it is difficult to determine precise trends in CAFOs over the last 30 years. According to USDA officials, the data USDA collects for large farms raising animals can be used as a proxy for estimating trends in CAFOs nationwide. Using these data, we determined that between 1982 and 2002, the number of large farms raising animals increased sharply, from about 3,600 to almost 12,000. Moreover, EPA has compiled some data from its regions on the number of CAFOs that have been issued permits; however, these data are inconsistent and inaccurate. As a result, EPA does not have a systematic way of identifying and inspecting all of the CAFOs nationwide that have been issued permits. We found that the number of large farms raising animals for all animal types increased by 234 percent between 1982 and 2002. Table 1 shows the changes in the number of large farms by animal type for 1982 through 2002. As table 1 shows, large broiler and hog farms experienced the largest increases, with large farms raising broilers increasing by 1,187 percent and large farms raising hogs increasing by 508 percent. Large farms raising layers and large farms raising beef cattle remained relatively stable over these 20 years; layer farms were the only farms that experienced an overall decrease in number over the period, declining by 2 percent. In contrast, while the number of large farms raising animals has increased, the number of all farms raising animals has decreased. Appendix III presents trends in the number of all farms raising animals, from 1982 to 2002. 
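The percentage increases cited throughout this section follow from ordinary percent-change arithmetic. As an illustrative check (the helper function is ours; the report's published 234 percent reflects exact farm counts rather than the rounded figures quoted in the text):

```python
def pct_change(old, new):
    """Percent change from an old value to a new value."""
    return (new - old) / old * 100

# Rounded figures from the text: large farms raising animals grew from
# about 3,600 in 1982 to almost 12,000 in 2002. With these rounded inputs
# the result is 233 percent, close to the report's 234 percent.
print(round(pct_change(3_600, 12_000)))
```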
Just as the number of large farms for almost all animal types increased between 1982 and 2002, so did the size of these farms, as illustrated by the median number of animals raised on each farm. Table 2 shows the trends in the median number of animals raised on large farms for all animal types from 1982 through 2002. The layer and hog sectors had the largest increases in the median number of animals raised per farm, both growing by 37 percent between 1982 and 2002. Specifically, for layers, large farms increased the number of birds they raised from 131,530 in 1982 to 180,000 in 2002, and for hogs, large farms increased the number of animals they raised from 3,350 in 1982 to 4,588 in 2002. In contrast, large farms that raised either broilers or turkeys increased only slightly in size, with overall increases of 3 and 1 percent, respectively, from 1982 to 2002. The increases in the number of large farms for almost all animal types, as well as the increases in the median number of animals raised on these farms, are also reflected in the percentage of animals raised on large farms as compared with animals raised on all farms. Specifically, the number of animals raised on large farms increased from over 257 million in 1982 to over 890 million in 2002—an increase of 246 percent. In contrast, the number of animals raised on all farms increased from over 1,145 million in 1982 to 2,072 million in 2002—an increase of 81 percent. This is particularly noteworthy because the number of animals raised on large farms accounted for only 22 percent of animals raised on all farms in 1982; yet, the number of animals raised on large farms accounted for 43 percent of animals raised on all farms in 2002. Table 3 shows the trends in the number of animals raised on large farms and the number of animals raised on all farms from 1982 to 2002. As table 3 shows, most of the beef cattle, hogs, and layers raised in the United States in 2002 were raised on large farms. 
Specifically, 77 percent of beef cattle and 72 percent of both hogs and layers were raised on large farms. EPA does not have its own data collection process to determine the number, size, and location of CAFOs that have been issued permits nationwide. Since 2003, the agency has compiled quarterly estimates from its regions on the number of permits that have been issued to CAFOs. These data are developed by EPA's regional offices or originate with the state permitting authority. However, we determined that these data are inconsistent and inaccurate and do not provide EPA with the reliable data that it needs to identify and inspect permitted CAFOs nationwide. For example, according to EPA, some uncertainty in the data exists because some states may be using general permits to cover more than one operation. In addition, EPA has not established adequate internal controls to ensure that the data are correctly reported. For example, officials from 17 states told us that data reported by EPA for their states were inaccurate. In one case, when we asked a state official for the number of CAFOs in his state, the official realized that the CAFO numbers reported by EPA's regional office were incorrect because of a clerical error, which resulted in some CAFO statistics for the state being doubled. After the state official discovered this error, the state's data were corrected and resubmitted to EPA. Without a systematic and coordinated process for collecting and maintaining accurate and complete information on the number, size, and location of permitted CAFOs nationwide, EPA does not have the information it needs to effectively regulate these operations. In commenting on a draft of this report, EPA stated that the information from permit files is available to EPA upon request; however, the information is currently not readily compiled in a national database. 
EPA is currently working with the states to develop and implement a new national data system to collect and record operation-specific information. As part of this effort, the agency plans to develop national requirements for data that should be collected and entered into the database by the states. According to EPA, it may require the states to provide data identifying operations that have been issued or applied for a CAFO permit as well as operations that should have applied for a permit based on an inspection or enforcement action. The amount of manure a large farm that raises animals can generate primarily depends on the types and numbers of animals raised on that farm, and the amount of manure produced can range from over 2,800 tons to more than 1.6 million tons a year. To further put this in perspective, the amount of manure produced by large farms that raise animals can exceed the amount of waste produced by some large U.S. cities. In addition, multiple large farms that raise animals may be located in a relatively small area, such as two or more adjacent counties, which raises additional concerns about the potential impacts of the manure produced, stored, and disposed of by these farms. Table 4 shows the estimated number of animals and the typical amounts of manure produced each year, by type of animal, for three different sizes of large farms: (1) large farms that meet EPA's thresholds for each animal type, (2) large farms that raise the median number of animals according to our analysis of USDA farm census data, and (3) large farms that fell into the 75th percentile based on our analysis. As table 4 shows, a dairy farm that meets the minimum threshold of 700 dairy cows could produce almost 17,800 tons of manure a year; a median-sized dairy farm with 1,200 dairy cows could produce about 30,500 tons of manure a year; and a larger dairy farm with 1,900 dairy cows could produce almost 48,300 tons of manure a year. 
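As a back-of-the-envelope consistency check, the three dairy tonnages cited above correspond to a single per-cow rate of roughly 25.4 tons of manure per year (17,800 tons divided by 700 cows). The constant and function below are our own illustration derived from those figures, not official EPA or USDA rates:

```python
# Per-cow rate implied by the text: 17,800 tons / 700 cows ≈ 25.4 tons/year.
TONS_PER_DAIRY_COW = 17_800 / 700

def annual_manure_tons(cows):
    """Estimated annual manure (tons) for a dairy herd at the implied per-cow rate."""
    return cows * TONS_PER_DAIRY_COW

print(round(annual_manure_tons(1_200)))  # 30514 -- text: about 30,500 tons
print(round(annual_manure_tons(1_900)))  # 48314 -- text: almost 48,300 tons
```

That the same per-cow rate reproduces all three farm sizes suggests the table's estimates were generated by scaling a fixed per-animal manure factor by herd size.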
Additionally, individual large farms that raise animals can generate as much waste as certain U.S. cities. For example, a dairy farm meeting EPA's large CAFO threshold of 700 dairy cows can create about 17,800 tons of manure annually, which is more than the roughly 16,000 tons of sanitary waste per year generated by the almost 24,000 residents of Lake Tahoe, California. Likewise, a median-sized beef cattle operation with 3,423 head of beef cattle can produce more than 40,000 tons of manure annually, which is more than the almost 38,900 tons of sanitary waste per year generated by the nearly 57,000 residents of Galveston, Texas. Similarly, some larger farms can produce more waste than some large U.S. cities. For example, a large farm with 800,000 hogs could produce over 1.6 million tons of manure per year, more than one and a half times the annual sanitary waste produced by the city of Philadelphia, Pennsylvania—about 1 million tons—with a population of almost 1.5 million. Moreover, a beef cattle farm with 140,000 head of cattle could produce over 1.6 million tons of manure annually, more than the almost 1.4 million tons of sanitary waste generated by the more than 2 million residents of Houston, Texas. Although manure is considered a valuable commodity, especially in states with large amounts of farmland, like Iowa, where it is used as fertilizer for field crops, in some parts of the country, large farms that raise animals are clustered in a few contiguous counties. This collocation of large farms that raise animals has resulted in a separation of animal production from crop production because many of these operations purchase feed rather than grow it on adjacent cropland. As a result, there is much less cropland on which the manure can be applied as fertilizer. This clustering of large farms that raise animals has occurred because of structural changes in the farming sector. 
According to agricultural experts and USDA officials, the overall decrease in the number of farms and increase in the average number of animals raised on a farm may have occurred because these operations wanted to achieve economies of size. To achieve these economies, operators often need significant amounts of capital, which they obtain through production contracts with large processing companies. A USDA report identified this concern as early as 2000, finding that between 1982 and 1997, as livestock production became more spatially concentrated, crops were not fully using the nutrients in manure applied to cropland, which could result in ground and surface water pollution from the excess nutrients. According to the report, the number of counties where farms produced more manure nutrients, primarily nitrogen and phosphorus, than could be applied to the land without accumulating nutrients in the soil increased. Specifically, the number of counties with excess manure nitrogen increased by 103 percent, from 36 counties in 1982 to 73 counties in 1997. Similarly, the number of counties with excess manure phosphorus increased by 57 percent, from 102 counties in 1982 to 160 counties in 1997. As a result, the potential for runoff and leaching of these nutrients from the soil was high, and water quality could be impaired, according to USDA. Agricultural experts and government officials with whom we spoke during our review echoed the findings of USDA's report and provided several examples of more recent clustering trends that have resulted in degraded water quality, including the following: As a result of adopting the poultry industry's approach of developing close ties between producers and processors, North Carolina experienced a rapid growth in the number of hog CAFOs, primarily in five contiguous counties. 
Based on our analysis of 2002 USDA data, we estimated that the hog population of the five North Carolina counties was more than 7.5 million hogs in 2002 and that hog operations in these counties produced as much as 15.5 million tons of manure that year. Figure 1 shows the geographic concentration of hog farms in North Carolina in 2002. According to North Carolina agricultural experts, excessive manure production has contributed to the contamination of some of the surface and well water in these counties and the surrounding areas. According to these experts, this contamination may have occurred because the hog farms are attempting to dispose of excess manure but have little available cropland that can effectively use it. According to state officials, partly out of concern for the potential contamination of waterways and surface water from manure, in 1997, North Carolina placed a moratorium on new swine farms and open manure lagoons, which was subsequently continued through 2007. While the moratorium included exceptions that could allow a new swine farm to begin operations in this area, according to state officials, the requirements for these exceptions are so stringent that they effectively have prevented the construction of new swine operations or the expansion of existing operations. Similarly, a California water official told us that the geographic clustering of large farms that raise animals is causing concern in his state as well. Our analysis of USDA data shows that in 2002 two counties in the San Joaquin Valley in California had 535,443 dairy cows that produced about 13.6 million tons of manure that year. According to the official, because of the limited flow of water through the Valley, once pollutants reach the water, they do not dissipate, resulting in a long-term accumulation of these pollutants. Regional clustering is also occurring in Arkansas. 
Two counties in northwest Arkansas, located on the Arkansas-Oklahoma border, raised 14,264,828 broiler chickens in 2002 that produced over 471,000 tons of manure that year. According to EPA Region 6 officials, the Arkansas-Oklahoma border is an area of concern due to the number of poultry operations (primarily broilers, but also turkeys and layers) within this area. Furthermore, Region 6 officials identified numerous water bodies in northwest Arkansas and northeast Oklahoma that have been impaired by manure from animal feeding operations and identified these locations as “areas of general ground water concern.” While USDA officials acknowledge that regional clustering of large animal feeding operations has occurred, they told us that they believe the nutrient management plans that they have helped livestock and poultry producers develop and implement have reduced the likelihood that pollutants from manure are entering ground and surface water. They also believe that as a result of new technologies such as calibrated manure spreaders, improved animal feeds, and systems that convert manure into electricity, large animal feeding operations are able to more effectively use the manure being generated. However, USDA could not provide information on the extent to which these techniques are being used or their effectiveness in reducing water pollution from animal waste. Since 2002, at least 68 government-sponsored or peer-reviewed studies have been completed on air and water pollutants from animal feeding operations. Of these 68 studies, 15 have directly linked pollutants from animal waste generated by these operations to specific health or environmental impacts, 7 have found no impacts, and 12 have made indirect linkages between these pollutants and health and environmental impacts. 
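The three state examples (North Carolina hogs, California dairy cows, Arkansas broilers) each pair an animal count with a manure total. Dividing one by the other gives the per-animal rate each estimate implies; this is purely illustrative, derived from the report's figures:

```python
# Implied per-animal manure rates from the report's state examples,
# computed by dividing each cited manure total by the cited animal count.
# Illustrative only; the NC total is an upper-bound estimate ("as much as").
examples = {
    # label: (animal count, tons of manure per year)
    "NC hogs (5 counties, 2002)": (7_500_000, 15_500_000),
    "CA dairy cows (2 counties, 2002)": (535_443, 13_600_000),
    "AR broilers (2 counties, 2002)": (14_264_828, 471_000),
}

for label, (head, tons) in examples.items():
    rate = tons / head
    print(f"{label}: ~{rate:.2f} tons/animal/year")
```

The implied rates (roughly 2 tons per hog, 25 per dairy cow, and 0.03 per broiler) are mutually consistent with the per-animal figures implied earlier in this section.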
In addition, 34 of the studies have focused on measuring the amount of certain pollutants emitted by animal feeding operations that are known to cause human health or environmental impacts at certain concentrations. Appendix IV presents information, including the sponsor, the pollutants, and impacts, identified for each of the 68 studies we reviewed. Although EPA is aware of the potential impacts of air and water pollutants from animal feeding operations, it lacks data on the number of animal feeding operations and the amount of discharges actually occurring. Without such data, according to EPA officials, the agency is unable to assess the extent to which these pollutants are harming human health and the environment. Of the 15 studies completed since 2002 that we reviewed that directly link pollutants from animal waste to human health or environmental impacts, 8 focused on water pollutants and 7 on air pollutants. Academic experts and industry and EPA officials told us that only a few studies directly link CAFOs with health or environmental impacts because the same pollutants that CAFOs discharge also often come from other sources including smaller livestock operations; row crops using commercial fertilizers; and wastes from humans, municipalities, or wildlife, making it difficult to distinguish the actual source of pollution. Table 5 shows the eight government-sponsored or peer-reviewed studies completed since 2002 that found direct links between water pollutants from animal waste and impacts on human health or the environment. As table 5 shows, EPA sponsored four of the water quality studies that identified reproductive alterations in aquatic species caused by hormones in discharges from animal feeding operations. Two of these studies found that hormones from these discharges caused a significant decline in the fertility of female fish in nearby water bodies. 
Similarly, three other studies found water bodies impaired by higher nitrogen and phosphorus levels from manure runoff from animal feeding operations. For example, the study by Juniata College found that the runoff resulted in nutrient concentrations in the water that were too high to sustain fish populations. Only one of the eight water pollutant studies linked pollutants from animal feeding operations to human health effects. This study, conducted by Health Canada, directly linked water discharges from a cattle farm to bacteria found in nearby waters. These bacteria, which included Campylobacter and E. coli, caused gastrointestinal illnesses in more than 2,300 residents and 7 deaths in a nearby community. Table 6 shows the seven government-sponsored or peer-reviewed studies completed since 2002 that we reviewed that directly link air pollutants from animal feeding operations with human health effects. As table 6 shows, six of these studies identified airway inflammation or wheezing in people working at or living on an animal feeding operation. For example, the studies conducted by the Department of Veterans Affairs show that the dust of hog confinement facilities induces airway inflammation in workers. The seventh study, completed by Duke University in a laboratory setting, exposed healthy volunteers to air emissions consistent with those that would occur downwind from animal feeding operations. These volunteers reported headaches, eye irritation, and nausea following this exposure. According to experts who we spoke with, the effects of air emissions from animal feeding operations on workers are well known, but the impacts of these emissions on nearby communities are still uncertain, and more research is needed to identify these impacts. Additionally, experts said it is difficult to determine which specific contaminant or mixture of contaminants causes particular health symptoms. 
For example, while hydrogen sulfide causes respiratory and other health problems, other contaminants emitted from animal feeding operations, such as ammonia, can also cause similar symptoms. We found seven government-sponsored or peer-reviewed studies that have been completed since 2002 that found no impact on human health or the environment from pollutants released by animal feeding operations. These seven studies are shown in table 7. As table 7 shows, the results of a U.S. Geological Survey study did not indicate that poultry animal feeding operations were causing an increase of nutrient concentrations and fecal bacteria in groundwater. Similarly, another study by Agriculture and Agri-Food Canada found that odorants, including ammonia and dust emitted by animal feeding operations, never exceeded the established irritation threshold. According to EPA and academic experts we spoke with, the concentrations of air pollutants and water pollutants emitted by animal feeding operations can vary, which may account for the differences in the findings of these studies. These variations may be the result of numerous factors, including the type of animals being raised, feed being used, and manure management system being employed, as well as the climate and time of day when the emissions occur. We also identified 12 government-sponsored or peer-reviewed studies completed since 2002 that indirectly link pollutants from animal feeding operations to human health or environmental impacts. While these studies found that animal feeding operations were the likely cause of human health or environmental impacts occurring in areas near the operations, they could not conclusively link waste from animal feeding operations to the impacts, often because other sources of pollutants could also be contributing. 
For example, 5 of these 12 studies found an increased incidence of asthma or respiratory problems in people living or attending school near animal feeding operations, compared with a control group. These studies hypothesized that the pollutants emitted from animal feeding operations were likely the cause of the increased incidence of asthma, but some of these studies acknowledged that pollutants from other sources could also be contributing to the increased incidence. Table 8 lists the 12 studies that have been completed since 2002 that made indirect links between emissions from animal feeding operations and human health and environmental impacts. Thirty-four government-sponsored or peer-reviewed studies completed since 2002 have focused on measuring the amounts of water or air pollutants emitted by animal feeding operations that are known to cause harm to humans or the environment. Specifically:

Nineteen of the 34 studies focused on water pollutants. Four studies found increased levels of phosphorus or nitrogen in surface water and groundwater near animal feeding operations. According to EPA, excessive amounts of these nutrients can deplete oxygen in water, which could result in fish deaths, reduced aquatic diversity, and illness in infants. The other 15 studies measured water pollutants such as pathogens, hormones, and antibiotics.

Fifteen of the 34 studies focused on measuring air emissions from animal feeding operations. Seven of the 15 studies found high levels of ammonia surrounding animal feeding operations. EPA considers ammonia a hazardous substance that may harm human health or the environment and that must be reported when emissions exceed its reportable quantity. The other eight studies measured the levels of other air pollutants, such as hydrogen sulfide, particulate matter, and carbon dioxide.

Appendix IV provides additional details about each of the 34 studies. 
While EPA recognizes the potential impacts that water and air pollutants from animal feeding operations can have on human health and the environment, it lacks the data necessary to assess how widespread these impacts are and has limited plans to collect the data it needs. Water quality. EPA has long recognized the impacts of pollution from CAFOs on water quality. For example, almost a decade ago, in its 1998 study on feedlot point sources, EPA documented environmental impacts that may be attributed to these operations. This report identified pollutants from animal feeding operations and listed about 300 spills and runoff events that were attributable to animal feeding operations from 1985 through 1997. More recently when developing the 2003 CAFO rule, EPA documented the potential water quality impacts from CAFOs. It reported that contaminants in manure will have an impact on water quality if significant amounts reach surface water or groundwaters. Moreover, as discussed above, numerous studies completed since 2002 have provided additional information on the direct and indirect impacts of discharges from animal feeding operations on human health and the environment, and many more studies have been completed that have measured the amounts of pollutants being discharged. EPA officials we spoke with acknowledged that the potential human health and environmental impacts of some CAFO water pollutants, such as nitrogen, phosphorus, and pathogens, are well known. They told us that the agency has recently focused its research efforts on obtaining more information on emerging pollutants, such as hormones and antibiotics, and on how the concentrations of nutrients and pathogens differ among the various types of animal feeding operations. However, these officials also stated that EPA does not have data on the number and location of CAFOs nationwide and the amount of discharges from these operations. 
Without this information and data on how pollutant concentrations vary by type of operation, it is difficult to estimate the actual discharges occurring and to assess the extent to which CAFOs may be contributing to water pollution. According to agency officials, because of a lack of resources, the agency currently has no plans for a national study to collect information on CAFO water discharges. However, the agency has recently taken the following three steps that may help gather additional data on CAFO pollutants that affect water quality:

EPA has begun research to determine (1) how the concentrations of pathogens and nutrients vary in manure on the basis of certain characteristics, such as animal type and animal feed, and (2) how manure management techniques can reduce the amount of pathogens and nutrients in runoff.

EPA has set a long-term research goal, as part of its Multi-Year Plan for Endocrine Disruptors (FY2007-2013), to characterize the magnitude and extent of the impact of hormones released by CAFOs and to determine the impact of management strategies on the fate and effects of hormones. At the time of our review, according to an EPA official, the agency had only limited preliminary findings because it has just recently begun this work.

EPA and the U.S. Geological Survey have discussed a joint project to identify (1) the location of CAFOs nationwide and (2) those watersheds where many CAFOs might be located. According to EPA officials, this project is still in the discussion phase.

Air quality. More recently, EPA has recognized concerns about the possible health impacts from air emissions produced by animal feeding operations. Prompted in part by public concern, EPA and USDA commissioned a 2003 study by the National Academy of Sciences (NAS) to evaluate the scientific information needed to support the regulation of air emissions from animal feeding operations. 
The NAS report identified several air pollutants from animal feeding operations and their potential impacts. For example, the study identified ammonia and hydrogen sulfide as two air pollutants emitted from animal feeding operations that can impair human health. According to the study, ammonia can cause eye, nose, and throat irritation at certain concentrations, and hydrogen sulfide can cause respiratory distress. While such effects are known to occur, the study noted that additional research is warranted to determine if air emissions from animal feeding operations are occurring in high enough concentrations to cause these effects. The NAS report also concluded that in order to determine the human health and environmental effects of air emissions from animal feeding operations, EPA and USDA would first need to obtain accurate estimates of emissions and their concentrations from animal feeding operations with varying characteristics, such as animal type, animal feed, manure management techniques, and climate. Since the NAS report was issued, EPA has conducted one hypothetical assessment of the impacts of air emissions from animal feeding operations. In 2004, EPA updated a preliminary analysis to estimate the levels of emissions of ammonia and hydrogen sulfide that occur downwind from a manure lagoon and that could pose a risk to human health. EPA found that ammonia would not reach levels associated with respiratory irritation if emitted at the reportable quantity of 100 pounds per day. On the other hand, the agency found that hydrogen sulfide could cause respiratory irritation and central nervous system effects about one mile downwind if emitted at the reportable quantity of 100 pounds per day. EPA officials who conducted this analysis told us that there have been no documented cases of hydrogen sulfide emissions from animal feeding operations exceeding the reportable quantity. 
However, other officials noted that the agency does not know exactly what type of species and what size of operations are likely to have emissions above the reportable quantity, and, as noted in the NAS report, accurate measurements of the air pollutants being emitted by animal feeding operations are currently not known. In 2007, a national air emissions monitoring study to collect data on air emissions from animal feeding operations was undertaken as part of a series of consent agreements EPA entered into with individual animal feeding operations. This study, funded by industry and approved by EPA, is intended to help the agency determine how to measure and quantify air emissions from animal feeding operations. The data collected will in turn be used to estimate air emissions from animal feeding operations with varying characteristics, and, according to EPA officials, it is only the first step in a long-term effort to accurately quantify air emissions from animal feeding operations. According to agency officials, until EPA can determine the actual level of emissions occurring, it will be unable to assess the extent to which these emissions are affecting human health and the environment. Progress in conducting the national air emissions monitoring study is discussed in greater detail in the following section. The National Air Emissions Monitoring Study—a 2-year effort to collect data on air emissions from animal feeding operations—is intended to provide a scientific basis for estimating air emissions from these operations. The results of this study were intended to help EPA develop protocols that will allow it to determine which operations do not comply with applicable federal laws. As currently structured, however, the study may not provide the quantity and quality of data needed for developing appropriate methods for estimating emissions. 
Furthermore, it is uncertain if and when EPA will develop a process-based model that considers the interaction and implications of all sources of emissions at an animal feeding operation. Also, other more recent decisions suggest that the agency has not yet determined how it intends to regulate air emissions from animal feeding operations. In the absence of federal guidance on how to regulate air emissions from animal feeding operations, a few states have developed their own regulations. According to EPA, although it has the authority to require animal feeding operations to monitor their emissions and come into compliance with the Clean Air Act on a case-by-case basis, this approach has proven to be time and labor intensive. As an alternative to the case-by-case approach, in January 2005, EPA offered animal feeding operations an opportunity to sign a voluntary consent agreement and final order, known as the Air Compliance Agreement. To participate in the agreement, animal feeding operations were required to take the following actions:

Pay a civil penalty ranging from $200 to $1,000 per animal feeding operation, depending on the number of animals at the operation and the number of operations that each participant signed up.

Pay up to $2,500 per farm to help fund a nationwide emissions monitoring study and make their facilities available as a monitoring site for emissions testing.

Once emission protocols are published, apply for all applicable air permits and comply with permit conditions, if deemed necessary.

Any farm more than 10 times larger than EPA’s established size thresholds for CAFOs must, within 120 days of receiving an executed copy of the agreement, provide the National Response Center with a written statement noting the facility’s location, estimating air emissions of ammonia, and stating that it will notify the Center of reportable releases when emission rates are determined by the monitoring study. (Since announcing the Air Compliance Agreement, EPA has proposed exempting such releases from the CERCLA and EPCRA reporting requirements. The exemption, proposed in December 2007, has not been finalized.)

In return for meeting these requirements, EPA agreed not to sue participating animal feeding operations for certain past violations or violations occurring during the emissions monitoring study. Almost 13,900 animal feeding operations were approved for participation in the agreement, representing the egg, broiler chicken, dairy, and swine industries. Some turkey operations volunteered but were not approved because there were too few operations to fund a monitoring site, and the beef cattle industry chose not to participate. EPA collected a total of $2.8 million in civil penalties from participating animal feeding operations and deposited these funds into the U.S. Treasury. An additional $14.8 million was collected by a nonprofit, industry-established organization to fund the national air emissions monitoring study. Industry groups representing the participating operations provided the funding for the study as was called for under the agreement. Table 9 shows the level of participation by type of operation and the amount of funding provided by different industry groups for the national air emissions monitoring study. The purpose of the National Air Emissions Monitoring Study is to collect data that will provide a scientific basis for measuring and estimating air emissions from animal feeding operations and will help EPA to determine operations’ compliance status. 
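The dollar figures in the agreement can be cross-checked against the participation count; this is a rough consistency sketch using the report's numbers (13,883 approved participants, $2.8 million in penalties, $14.8 million for the study):

```python
# Rough consistency check on the Air Compliance Agreement figures.
participants = 13_883          # operations approved for the agreement
penalties_total = 2_800_000    # civil penalties deposited in the Treasury
study_fund = 14_800_000        # collected to fund the monitoring study

avg_penalty = penalties_total / participants   # roughly $202 per operation
assert 200 <= avg_penalty <= 1_000   # within the $200-to-$1,000 penalty range
assert study_fund <= participants * 2_500  # under the $2,500-per-farm cap
```

The average penalty sits near the bottom of the stated range, consistent with most participants being single operations paying the minimum.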
To provide a framework for the monitoring study and develop a sampling plan that was representative of animal feeding operations in the United States, in 2003, EPA convened a panel of industry experts, university and government scientists, and other stakeholders knowledgeable in the field. In 2004, the nonprofit organization founded by the various livestock sectors selected an independent science adviser to oversee the data collection at 20 of the 13,883 animal feeding operations that were selected to participate in the study. These site selections were submitted to and approved by EPA. Data collection began in May 2007. Once 2 years of data have been collected, EPA will use these data to develop air emissions protocols. Figure 6 shows EPA’s expected timeline for the development of air emissions protocols. However, the National Air Emissions Monitoring Study may not provide the data that EPA needs to develop comprehensive protocols for quantifying air emissions from animal feeding operations for a variety of reasons. First, the monitoring study does not include the 16 combinations of animal types and geographic regional pairings recommended by EPA’s expert panel. The panel recommended this approach so that the study sample would be representative of the vast majority of participating animal feeding operations, accounting for differences in climatic conditions, manure-handling methods, and density of operations. However, EPA approved only 12 of the 16 combinations recommended by the expert panel, excluding southeastern broiler, eastern layer, midwestern turkey, and southern dairy operations. Second, site selection for the study has been a concern since the plan to select monitoring sites for the monitoring study was announced in 2005. At that time, many agricultural experts, environmental groups, and industry and state officials disagreed with the site selection methodology. 
In commenting on EPA’s Federal Register notice of the Animal Feeding Operation Consent Agreement and Final Order, these experts and officials stated that the study did not include a sufficient number of monitoring sites to establish a statistically valid sample. Without such a sample, we believe that EPA will not be able to accurately estimate emissions for all types of operations. More recently, in June 2008, the state of Utah reached an agreement with EPA to separately study animal feeding operations in the state because of the state’s continuing concerns that the National Air Emissions Monitoring Study will not collect information on emissions from operations in Rocky Mountain states and therefore may not be meaningful for those operations that raise animals in arid areas. Finally, agricultural experts have raised concerns that the National Air Emissions Monitoring Study does not include other sources that can contribute significantly to emissions from animal feeding operations. For example, these experts have noted that the monitoring study will not capture data on ammonia emissions from feedlots and manure applied to fields. According to these experts, feedlots and manure on fields, as well as other excluded sources, account for approximately half of the total ammonia emissions from animal feeding operations. Furthermore, USDA’s Agricultural Air Quality Task Force has also recently raised concerns about the quantity and quality of the data being collected during the early phases of the study and how EPA will eventually use the information. In particular, the task force expressed concern that the technologies used to collect emissions data were not functioning reliably. For example, according to data provided by EPA, almost one-third of the preliminary data from one site were incomplete during a 2-month data collection period. The task force was also concerned about EPA’s plans to extrapolate the data across a variety of CAFO operating configurations. 
At its May 2008 task force meeting, the members requested that the Secretary of Agriculture ask EPA to review the first 6 months of the study’s data to determine if the study needs to be revised in order to yield more useful information. EPA acknowledged that emissions data should be collected for every type of animal feeding operation and practice, but EPA officials stated that such an extensive study is impractical. According to EPA officials, the industry identified those monitoring sites that it believed best represented the types of operations and manure management practices in its various animal sectors. EPA reviewed and approved these site selections and believes that the selected sites provide a reasonable representation of the various animal sectors. EPA has also indicated that it plans to use other relevant information to supplement the study data and has identified some potential additional data sources. For example, a study conducted at two broiler facilities in Kentucky has been accepted as meeting the emissions study’s requirements. However, according to agricultural experts, until EPA identifies all the supplemental data that it plans to use, it is not clear if these data, together with the emissions study data, will enable EPA to develop comprehensive air emissions protocols. Furthermore, EPA has also indicated that completing the National Air Emissions Monitoring Study is only the first step in a multiyear effort to develop a process-based model for predicting overall emissions for animal feeding operations. A process-based model would capture emissions data from all sources and use these data to assess the interaction of all sources and the impact that different manure management techniques have on air emissions for the entire operation. For example, technologies are available to decrease emissions from manure lagoons by, among other things, covering the lagoon to capture the ammonia. 
However, if an operation spreads the lagoon liquid as fertilizer for crops, ammonia emissions could increase on the field. According to NAS, a process-based model is needed to provide scientifically sound estimates of air emissions from animal feeding operations that can be used to develop management and regulatory programs. Although EPA plans to develop a process-based model after 2011, it has not yet established a timetable for completing this model, and, therefore, it is uncertain when EPA will have more sophisticated approaches that will more accurately estimate emissions from animal feeding operations. Two recent decisions by EPA suggest that the agency has not yet determined how it intends to regulate air emissions from animal feeding operations. EPA’s first decision in this context was made in December 2007. At that time EPA proposed to exempt releases to the air of hazardous substances from manure at farms that meet or exceed the reportable quantities from both CERCLA and EPCRA notification requirements. According to EPA, this decision was in response to language that was contained in congressional committee reports related to EPA’s appropriations legislation for 2005 and 2006. EPA was directed to promptly and expeditiously provide clarification on the application of these laws to poultry, livestock, and dairy operations. In addition, the agency received a petition from the National Chicken Council, the National Turkey Federation, and the U.S. Poultry and Egg Association seeking an exemption from the CERCLA and EPCRA reporting requirements for ammonia emissions from poultry operations. The petition argued that ammonia emissions from poultry operations pose little or no risk to public health and that emergency response is inappropriate. 
In proposing the rule, EPA noted that the agency would not respond to releases from animal wastes under CERCLA or EPCRA, nor would it expect state and local governments to respond to such releases, because the source and nature of these releases are such that emergency response is unnecessary, impractical, and unlikely. It also noted that it had received 26 comment letters from state and local response agencies supporting the exemption for ammonia from poultry operations. However, during the public comment period ending on March 27, 2008, a national association representing state and local emergency responders with EPCRA responsibilities questioned whether EPA had the authority to exempt these operations until the agency had data from its monitoring study to demonstrate actual levels of emissions from animal feeding operations. This national association further commented that EPA should withdraw the proposal because it denied responders and the public the information necessary to protect themselves from dangerous releases. We believe that the timing of this proposed exemption, before the National Air Emissions Monitoring Study has been completed, calls into question the basis for EPA’s decision. The second decision that EPA has recently made that calls into question how the agency intends to regulate air emissions from animal feeding operations involves the timing of key regulatory decisions. EPA has stated that it will not make key regulatory decisions on how federal air regulations apply to animal feeding operations until after 2011, when the monitoring study is completed. According to EPA, the agency will issue guidance defining the scope of the term “source” as it relates to animal agriculture and farm activities. 
As a result, EPA has not decided if it will aggregate the emissions occurring on an animal feeding operation as one source or if the emissions from the barns, lagoons, feed storage, and fields will each be considered as a separate source when determining if an operation has exceeded air emissions’ reportable quantities. Depending on the approach EPA takes, how emissions are calculated could differ significantly. For example, according to preliminary data EPA has received from an egg-laying operation in Indiana, individual chicken barns may exceed the CERCLA reportable quantities for ammonia. Moreover, if emissions from all of the barns on the operation are aggregated, they might be more than 500 times the CERCLA reportable quantities. In addition, EPA does not intend to issue guidance to address emissions, and sources of emissions, that cannot reasonably pass through a stack, chimney, or other functionally equivalent opening, i.e., fugitive emissions, until after the conclusion of the monitoring study. EPA has already been asked to clarify what it considers a source on an animal feeding operation but has declined to do so. In a 2004 ruling on an appeal of a civil suit against a swine operation, the U.S. Court of Appeals for the 10th Circuit overturned a 2002 federal district court ruling that a farm’s individual barns, lagoons, and land application areas could be considered separate “sources” for purposes of CERCLA reporting requirements. The Court of Appeals ruled that the whole farm site was the proper entity to be assessed for purposes of CERCLA reporting. The Court invited EPA to file a friend-of-the-court brief in order to clarify the government’s position on this issue, but EPA declined to do so within the court-specified time frame. Another court reached similar conclusions in 2003. Despite these court rulings, EPA has indicated that it will not decide on what it considers a source until the National Air Emissions Monitoring Study is completed. 
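The "source" question is essentially an accounting choice, and the Indiana egg-laying example shows why it matters: per-barn totals and a whole-farm total can sit on opposite sides of a reporting threshold. The sketch below illustrates the two accounting approaches; the barn emission figures are hypothetical, while the 100-pound-per-day reportable quantity for ammonia comes from the report:

```python
# Illustrates why the definition of "source" matters for CERCLA reporting.
# Barn emission figures are hypothetical; the 100 lb/day reportable
# quantity (RQ) for ammonia is the figure cited in the report.
RQ_AMMONIA_LBS_PER_DAY = 100

barn_emissions = [150, 120, 180, 90]  # hypothetical lb/day for four barns

# Per-barn accounting: only barns individually above the RQ would report.
barns_over_rq = [e for e in barn_emissions if e > RQ_AMMONIA_LBS_PER_DAY]

# Whole-farm accounting: all emissions aggregate into a single source.
farm_total = sum(barn_emissions)

print(len(barns_over_rq))                   # 3 of the 4 barns exceed the RQ
print(farm_total > RQ_AMMONIA_LBS_PER_DAY)  # True: the aggregated farm does too
```

Note that the asymmetry can also run the other way: barns each just under the RQ would report nothing under per-barn accounting yet far exceed it when aggregated, which is the scenario the Indiana data suggest.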
In the absence of federal guidance on how to regulate air emissions from animal feeding operations, officials in 6 of the 47 states that responded to our survey are regulating some emissions covered under the Clean Air Act, CERCLA, and EPCRA. As table 10 shows, state officials in California, Idaho, Minnesota, Missouri, Nebraska, and North Dakota reported that they have developed state air regulations for certain pollutants that are emitted by CAFOs. Specific examples of the types of regulations that the states have developed include the following: Minnesota has established state emissions thresholds for hydrogen sulfide that apply to CAFOs. CAFO operators in the state must develop an air emissions control plan and must implement it if the Minnesota Pollution Control Agency detects elevated levels of hydrogen sulfide. According to state officials, once an operator reduces emissions, the agency re-monitors to ensure that emission levels remain below the state-established threshold. Minnesota may take legal action against CAFO operators violating this standard. For example, in June 2008, monitoring by the Minnesota Pollution Control Agency at a dairy operation recorded hydrogen sulfide levels above the state threshold, and, in cooperation with the State Attorney General, the agency filed a lawsuit under state authorities against the dairy's operator. In 2003, California passed a law that authorized the state and local air districts to require animal feeding operations above a certain size to apply for clean air permits and develop a plan to decrease air emissions. For example, one air district in California, the San Joaquin Valley Air Pollution Control District, which contains large clusters of animal feeding operations, developed a rule in 2006 to implement the law; the rule requires large animal feeding operations to apply for a permit that includes a plan for mitigating their emissions.
According to air district officials, the district has implemented specific regulations for dairy animal feeding operations that require these operations to obtain five separate permits for components of their operations, including barns and land application of manure. The officials told us that these regulations were put in place, in part because the area is designated as a severe nonattainment area under the Clean Air Act and they are required to regulate a broader range of emission sources. According to state officials we spoke with, as a result of these more stringent state regulations, CAFOs in California may be relocating to other states—such as Texas and Iowa. Two federal court decisions have affected EPA and some states’ abilities to regulate CAFOs for water pollutants. The 2005 Waterkeeper Alliance Inc. v. EPA decision forced EPA to revise its 2003 rule for permitting CAFOs and abandon its approach of requiring all CAFO operators to obtain a permit. Although this court decision affected EPA’s ability to regulate CAFOs, states’ reaction to the Waterkeeper decision has varied: some states such as Minnesota continue to require all CAFOs to obtain permits while others such as Colorado have delayed developing new rules until EPA issues its final revised rule. In addition, the Supreme Court’s 2006 decision—Rapanos v. United States—has made determination of Clean Water Act jurisdiction over certain types of waters more complex. According to EPA, this has required the agency to gather significantly more evidence to establish Clean Water Act jurisdiction in some enforcement cases. In its 2005 Waterkeeper decision, the U.S. Court of Appeals for the Second Circuit set aside a key provision of EPA’s 2003 CAFO rule requiring every CAFO to apply for a NPDES permit. 
Under the 2003 rule, large numbers of previously unregulated CAFOs were required to apply for permits and would have been subject to monitoring and reporting requirements imposed by the permit as well as periodic inspections. According to EPA, the 2003 rule would have expanded the number of CAFOs requiring permits from an estimated 12,500 to an estimated 15,300, an increase of about 22 percent. According to EPA officials, when fully implemented, this requirement for all CAFOs with a potential to discharge to apply for permits would have provided EPA with more comprehensive information on the number and location of CAFOs and how they are operated and managed, thus allowing EPA to more effectively locate and inspect CAFOs nationwide. However, in 2003, both environmental and agricultural groups challenged EPA’s 2003 rule. In the Waterkeeper case, environmental groups argued, among other things, that EPA’s 2003 rule did not adequately provide for (1) public review and comment on a CAFO’s nutrient management plan and (2) permitting authorities to review the CAFO’s nutrient management plan. The court agreed with the environmental groups and instructed EPA to revise the rule accordingly. The agricultural groups challenged the 2003 rule’s CAFO permitting requirement, arguing that the agency exceeded its authority under the Clean Water Act by requiring CAFOs that were not discharging pollutants into federally regulated waters to apply for permits or demonstrate that they had no potential to discharge. The court also agreed with the agricultural groups and set aside the permitting requirements for CAFOs that did not actually discharge. Following the court’s decision, many aspects of the 2003 rule remained in effect, including EPA’s revised regulatory definition of CAFOs and the expansion of the number of CAFOs needing permits by deleting a significant exception. 
In effect, the Waterkeeper decision returned EPA's permitting program to one in which CAFO operators are not required to apply for a NPDES permit unless they discharge, or propose discharging, into federally regulated waters. As a result, EPA must identify and prove that an operation has discharged or is discharging pollutants in order to require the operator to apply for a permit. To help identify unpermitted discharges from CAFOs, EPA officials stated that they have to rely on other methods that are not necessarily all-inclusive, such as citizens' complaints, drive-by observations, aerial flyovers, and state water quality assessments that identify water bodies impaired by pollutants associated with CAFOs. According to EPA officials, these methods have helped the agency identify some CAFOs that may be discharging as well as target inspections to such CAFOs. In response to the Waterkeeper decision, EPA proposed a new rule in June 2006 requiring that (1) only CAFO operators that discharge, or propose to discharge, apply for a permit; (2) permitting authorities review CAFO nutrient management plans and incorporate the terms of these plans into the permits; and (3) permitting authorities provide the public with an opportunity to review and comment on the nutrient management plans. According to EPA officials, the final rule is currently being reviewed by the Office of Management and Budget (OMB) before it is formally published in the Federal Register. These officials said it is uncertain when the OMB review will be completed and the final rule issued. Estimates vary on how this rule, when implemented, will affect the number of CAFOs that will obtain a permit. EPA estimates that 25 percent fewer CAFOs will need to apply for a permit under the new rule than would have been required to apply under the 2003 rule.
In contrast, an association representing state water program officials believes that many fewer CAFOs than EPA estimates will voluntarily apply for a permit under the new 2006 rule, when it is finalized. The need to develop and implement a new rule that meets the Waterkeeper requirements has also resulted in delays in implementing the provisions of the 2003 rule that the Court upheld. Specifically, EPA has not yet implemented, among other things, the expanded CAFO definitions, which cover operations such as dry-manure poultry operations. This is particularly significant since, according to a USDA official with extensive knowledge of the poultry industry and another agricultural expert we spoke to, at least 90 percent of poultry operations use a dry-manure management system. An EPA Region 6 official told us that in Texas alone this expanded definition would result in about 1,500 additional dry-manure poultry operations being covered under the new CAFO definition. Although the Waterkeeper decision has affected EPA's ability to regulate CAFOs' water pollutant discharges, this decision has not had the same impact on the ability of some states to regulate these operations. According to officials in the 47 states responding to our survey, the impact of the Waterkeeper decision on their ability to regulate water pollution from CAFOs has been mixed. As table 11 shows, the impacts of the Waterkeeper decision ranged from having little impact on state regulation of CAFOs to impairing state CAFO programs. Officials from several states told us that the Waterkeeper decision had little impact on their regulation of CAFOs, primarily because their states had implemented CAFO regulations more stringent than those required under the Clean Water Act.
For example, Minnesota officials stated that the Waterkeeper decision had no impact on their state's regulations because the state used its own authority to adopt regulations more stringent than EPA's regulations. Moreover, according to Minnesota officials, even after the Waterkeeper decision, the state has continued to require all CAFOs to obtain permits from the state environmental agency. Similarly, Kansas officials stated that the Waterkeeper decision had only minimal effects because the state has regulated CAFOs since the 1960s. However, 34 states indicated that the Waterkeeper decision directly affected their state programs. Officials from 15 states told us that the number of CAFOs that had obtained permits since the Waterkeeper decision had decreased, although none provided figures quantifying the decrease. Similarly, officials in 10 states told us that the Waterkeeper decision had impaired their state's ability to regulate CAFOs because it discredited the program, created confusion or uncertainty, or made it difficult for them to determine which operations needed a permit. For example, according to the state official responsible for Indiana's CAFO permitting program, although the state has had a CAFO permitting program since 1971, it adopted EPA's 2003 CAFO Rule because the rule was more protective. However, when the Waterkeeper decision set aside portions of the 2003 rule, this official told us that the decision, in effect, discredited the state's regulatory program. In addition, officials from nine states who are responsible for their state's permitting program told us that their programs remain in limbo while they wait for EPA to issue its final revised rule. These state officials, including officials in Colorado, said that they will update their state rules once EPA's final rule is issued.
Finally, state water pollution control officials have expressed some concerns that EPA’s new 2006 rule will place a greater administrative burden on states than the 2003 rule would have. In an August 2006 letter to EPA, the Association of State and Interstate Water Pollution Control Administrators noted that the “reactive” enforcement that EPA will now follow will require permitting authorities to significantly increase their enforcement efforts to achieve the level of environmental benefit that would have been provided by the 2003 rule. These officials believe that requiring EPA and the states to identify CAFOs that actually discharge pollutants into federally regulated water bodies will consume more resources than requiring all CAFOs to apply for a permit. The Supreme Court’s 2006 Rapanos decision has also affected EPA’s enforcement of the Clean Water Act because the agency believes that it must gather significantly more evidence to establish which waters are subject to the act’s permitting requirements. At issue in the Rapanos decision was whether the Clean Water Act’s wetlands permitting program applied to four specific wetlands that were adjacent to non-navigable tributaries of traditional navigable waters. The Court rejected the standards applied by the lower courts in determining whether wetlands at issue fell under the act’s jurisdiction and, therefore, could be subject to permitting requirements. Although a majority of the justices rejected the standards applied by the lower courts, a majority could not agree on how to determine which waters would fall under the act’s jurisdiction, and thus how far EPA could reach to regulate discharges of pollutants under the act. Although the Rapanos case arose in the context of a different permit program, the scope of EPA’s pollutant discharge permit program originates in the same Clean Water Act definition that was discussed in the decision. 
According to EPA enforcement officials, the agency may now be less likely to seek enforcement against a CAFO that it believes is discharging pollutants into a water body because it may be more difficult to prove that the water body is federally regulated. According to EPA officials, as a result of the Rapanos decision, the agency must now spend more resources developing an enforcement case because the agency must gather proof that the CAFO not only has illegally discharged pollutants, but that those discharges ultimately entered a federally regulated water body. These officials told us that the farther a CAFO is from a regulated water body, the more evidence they will need to prove that the discharges entered that water body. To ensure “nationwide consistency, reliability, and predictability in their administration of the statute,” EPA has issued national guidance to clarify the agency's responsibilities in light of the Supreme Court's decision. However, in a March 4, 2008, memorandum, EPA's Assistant Administrator for Enforcement and Compliance Assurance stated that the Rapanos decision and EPA's guidance have resulted in significant adverse impacts to the clean water enforcement program. According to the memorandum, the Rapanos decision and guidance negatively affected approximately 500 enforcement cases, including as many as 187 cases involving NPDES permits. In May 2007, Members of Congress in both the House and Senate introduced bills entitled the Clean Water Restoration Act of 2007 to clearly define the scope of the Clean Water Act. As of August 2008, neither bill had been reported out of committee. For more than 30 years, EPA has regulated CAFOs under the Clean Water Act, and during this time it has amassed a significant body of knowledge about the pollutants discharged by animal feeding operations and the potential impacts of these pollutants on human health and the environment.
Despite its long-term regulation of CAFOs, EPA still lacks comprehensive and reliable data on the number, location, and size of the operations that have been issued permits and the amounts of discharges they release. As a result, EPA has neither the information it needs to assess the extent to which CAFOs may be contributing to water pollution, nor the information it needs to ensure compliance with the Clean Water Act. More recently, EPA has also begun to address concerns about air pollutants that are emitted by animal feeding operations. The National Air Emissions Monitoring Study and EPA's plans to develop air emissions estimating protocols are important steps in providing much needed information on the amount of air pollutants emitted from animal feeding operations. However, questions about the sufficiency of the sites selected for the air emissions study and the quantity and quality of the data being collected could undermine EPA's efforts to develop air emissions protocols by 2011 as planned. Finally, while the study and resulting protocols are important first steps, a process-based model that more accurately predicts the total air emissions from an animal feeding operation is still needed. While EPA has indicated it intends to develop such a model, it has not yet established a strategy and timeline for this activity. In order to more effectively monitor and regulate CAFOs, we recommend that the Administrator of the Environmental Protection Agency complete the agency's effort to develop a national inventory of permitted CAFOs and incorporate appropriate internal controls to ensure the quality of the data.
In order to more effectively determine the extent of air emissions from animal feeding operations, the Administrator of the Environmental Protection Agency should reassess the current data collection efforts, including its internal controls, to ensure that the National Air Emissions Monitoring Study will provide the scientific and statistically valid data that EPA needs for developing its air emissions protocols; provide stakeholders with information on the additional data that it plans to use to supplement the National Air Emissions Monitoring Study; and establish a strategy and timetable for developing a process-based model that will provide more sophisticated air emissions estimating methodologies for animal feeding operations. We provided a draft of this report for review and comment to the EPA and the Secretary of USDA. We received written comments from EPA. USDA did not provide written comments, but did provide technical comments and clarifications, which we incorporated, as appropriate. EPA partially concurred with our conclusions and recommendations. In its written comments, EPA acknowledged that currently no national inventory of permitted CAFOs exists. The agency stated that it is currently working with its regions and the states to develop and implement a new national data system to collect and record facility-specific information on permitted CAFOs. We have revised our recommendation to reflect the actions that EPA has underway. In response to our recommendations that EPA reassess the current data collection effort, EPA stated that the agency has developed a quality assurance plan for the study and is continuously evaluating the National Air Emissions Monitoring Study. We are aware that EPA has developed a quality assurance plan for the data collected during the study. However, our recommendation also reflects other concerns with the study. 
For example, the monitoring sites selected may not represent a statistically valid sample of animal feeding operations that accounts for differences in climatic conditions, manure-handling methods, and density of operations; and the study does not address other sources that can contribute significantly to emissions from animal feeding operations. EPA did not address these issues in its comments. Therefore, we continue to believe that EPA should reassess the ongoing effort to ensure that the study, as currently structured, will provide the data that EPA needs. In response to our recommendation that the agency identify the information that it plans to use to supplement the National Air Emissions Monitoring Study, EPA stated that it cannot yet identify the data that it will use to augment the data collected during the monitoring study. However, the agency indicated that it has begun discussions with USDA to identify ongoing research that is focused on agricultural air emissions and gaps that may still exist, but did not indicate when it plans to identify the supplemental data it will use to augment the monitoring study. Until it does so, neither EPA nor stakeholders can be assured that these data, in combination with the emissions study data, will enable EPA to develop the planned protocols. The agency also agreed with our recommendation to establish a strategy and timetable for developing a process-based model and said that it has begun to evaluate what is needed to develop such a model. However, the agency did not provide any information on when it expects to complete plans for developing a process-based model. EPA also provided technical comments, which we have incorporated, as appropriate. EPA's written comments are provided in appendix V. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to interested congressional committees, the Administrator of the Environmental Protection Agency, the Secretary of the United States Department of Agriculture and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. For this report we were asked to determine the (1) trends in concentrated animal feeding operations (CAFOs) over the past 30 years; (2) amount of waste they generate; (3) findings of recent key academic, industry, and government research of the potential impacts of CAFOs on human health and the environment, and the extent to which the Environmental Protection Agency (EPA) has assessed the nature and severity of these identified impacts; (4) progress that EPA and states have made in regulating and controlling the air emissions of, and in developing protocols to measure, air pollutants from CAFOs that could affect air quality; and (5) extent to which recent court decisions have affected EPA and the states’ ability to regulate CAFO discharges that impair water quality. In conducting our work, we reviewed laws and regulations and federal and state agencies’ documents. We met with officials from EPA, the U.S. 
Department of Agriculture (USDA), the National Pork Producers Council, the National Pork Board, the National Cattlemen's Beef Association, the Environmental Integrity Project (a nonpartisan, nonprofit environmental advocacy group), the Sierra Club, California Association of Irritated Residents, Waterkeeper Alliance, Iowa Citizens for Community Improvement, Environmental Defense, National Association of Clean Air Agencies, Association of State and Interstate Water Pollution Control Administrators, as well as state officials. The National Chicken Council did not respond to our requests for information. Additionally, we visited CAFOs in eight states: Arkansas, California, Colorado, Iowa, Maryland, Minnesota, North Carolina, and Texas. We chose these states because they were geographically dispersed and contained numerous CAFOs representing multiple types of animals. For our analysis of trends in CAFOs over the past 30 years, we used USDA's Census of Agriculture data. We assessed the reliability of these data by reviewing USDA's documentation on the development, administration, and data quality program for the Census of Agriculture. We also electronically tested the data used in this study to determine if there were any missing data or anomalies in the dataset. Furthermore, we compared our nationwide results for each year, by animal sector, to USDA's published reports. On the basis of these assessments, we determined the data to be sufficiently reliable for the purposes for which they were used in this report. In addition, to respect USDA's requirement to protect the privacy of individual farmers responding to the Census of Agriculture surveys, we conducted these analyses at USDA and worked with USDA to review our results and verify that no single operation could be identified from our analysis. From USDA's Census of Agriculture data, we analyzed the most recent data available for large farms raising animals from 1974 through 2002.
We used these data on large farms as a proxy for CAFOs because no federal agency collects consistent data on these types of operations. USDA has periodically collected data on farms nationwide using the Census of Agriculture survey. Prior to 1982, these surveys were conducted every four years; since 1982, the agency has administered the survey every five years (the results of the most recent survey, conducted in 2007, will not be available until February 2009). In analyzing Census data prior to 1982, we found that the categories reported by USDA were not consistent with EPA's minimum size threshold for large CAFOs: 2,500 hogs, 700 dairy or milk cows, 55,000 turkeys, 1,000 beef cattle, 82,000 layers, and 125,000 broilers. For instance, the largest farm category USDA reported for broilers prior to 1982 was farms with sales of 100,000 or more. Since sales data must be converted to an inventory number, we had to make adjustments for production cycles to determine the number of animals on a farm per day. Broiler farms complete six production cycles per year; therefore, when we divided the USDA-reported figure of 100,000 broiler sales by 6 to account for the number of production cycles, the result represents a farm with an inventory of about 17,000 broilers. Farms of this size are much smaller than the 125,000-broiler CAFO threshold defined by EPA. Similarly, categories for farms raising other types of animals in the pre-1982 USDA data were also different from the EPA CAFO definitions for these types of operations. As a result, we used the time frame of 1982 through 2002 because USDA could provide us with detailed electronic data that allowed us to apply EPA's CAFO thresholds to determine the trends in the overall number of large farms that raised animals and could potentially be considered CAFOs.
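The sales-to-inventory conversion described above can be sketched as follows; the six-cycle figure and the 125,000-bird threshold come from the text, and the computation simply divides annual sales by production cycles per year.

```python
# Convert annual broiler sales to an approximate standing inventory
# by dividing by the number of production cycles per year.
broiler_sales_per_year = 100_000    # largest pre-1982 USDA sales category
production_cycles_per_year = 6      # broiler farms complete 6 cycles per year

avg_inventory = broiler_sales_per_year / production_cycles_per_year
print(round(avg_inventory))         # about 17,000 birds on the farm per day

# EPA's large-CAFO threshold for broilers (dry-litter operations)
epa_large_cafo_threshold = 125_000
print(avg_inventory >= epa_large_cafo_threshold)  # False: well below the threshold
```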
For broilers and layers/pullets, we used EPA's CAFO minimum size threshold for dry-litter manure handling systems because these systems represent the majority of poultry operations. These thresholds are larger than those for poultry operations that have liquid manure handling systems. Because USDA does not report the average number of animals on a farm, we used USDA Census of Agriculture inventory, sales, and inventory plus sales data for this purpose. The choice of using inventory only, sales only, or inventory and sales data for a particular animal type depended on the wording of Census survey questions during the years we analyzed. When only sales data or inventory plus sales data were used, we adjusted these data using the appropriate USDA formulas to determine the average number of animals on a farm. When both inventory and sales were used for an animal type, we applied an approved USDA approach to determine the average number of animals on a farm. As a result, we made the following adjustments for each animal type: For beef cattle, USDA only collected sales data for 1982 through 1997. As a result, for beef cattle, we used sales of cattle on feed (2002 survey) or sales of fattened cattle (1982 through 1997 surveys), adjusted for the number of production cycles. This increased the likelihood that we were including cattle raised on CAFOs instead of operations that allow the cattle to graze on pastureland. For dairy cows, we used the inventory of animals as of December 31 for each Census year since these animals are maintained to produce milk and not specifically for slaughter. We included both lactating and nonlactating cows. For hogs, the Census of Agriculture reported both inventory and sales data for hogs and pigs. These data were not reported by either weight or age, so we used the total for all hogs and pigs of all ages.
We used both the inventory and sales data for hogs and adjusted for the number of production or finish cycles. Hogs may be sold more than once because of the practice of selling feeder pigs at about 10-12 weeks of age to producers to be grown to typical slaughter size. For example, in 1997, about 25 percent of all hog and pig sales reported on the Census of Agriculture were feeder pigs. We adjusted the hog data to factor out these multiple sales. For layers, we used survey responses of inventory as of December 31 for layers 20 weeks old and older plus pullets for laying flock replacement. For broilers, we used inventory and sales data from the categories broilers, fryers, capons, roasters, and other chickens raised for meat. For turkeys, both inventory and sales data were used and included both hens and tom turkeys. We also reviewed EPA's data on the number of CAFOs that had been issued permits—these data are collected either by EPA's regional offices or from the states—for the period 2003 to 2008. We assessed the accuracy and reliability of these data by interviewing officials in 47 states and asking them to verify the information that EPA had for the numbers of CAFOs permitted in their states. Based on the information we obtained from the state officials, we determined that EPA's data for permitted CAFOs were not reliable and could not be used to identify trends in permitted CAFOs over the 5-year period. To identify the amount of manure, including urine, a large CAFO is estimated to generate for each animal type, we used EPA's thresholds for the minimum number of animals that constitute a CAFO. To illustrate the size of a “typical” large farm for each animal type, we used the median for a large-sized farm. We used the median instead of the mean because we believe it provides a more representative measure for a typical large farm. We also present information on farms at the 75th percentile of all large farms for a particular animal type to represent larger farms.
To estimate the amount of manure produced by each type of animal, we used engineering standards for manure production cited by the American Society of Agricultural and Biological Engineers (ASABE). These standards report the total amount of manure over the production cycle for hogs, beef cattle, turkeys, and broilers. In order to estimate the average pounds of manure per day, we divided the total manure produced over the production cycle by the number of days in the production cycle. Further, we converted the pounds of manure into tons of manure per farm per year. We adjusted the manure calculations for the following animal types: For layers, the standards provided the average daily pounds of manure produced by layers. We multiplied the average pounds of manure per day times the average number of animals times 365 days to get manure produced per year. For broilers, we determined the average daily pounds of manure from the information provided in the standards. We multiplied the average pounds of manure per day times the average number of animals times 365 days to get manure per year. For dairy cows, the standards provided the average daily pounds of manure produced by dairy cows. We multiplied the average pounds of manure per day times the average number of animals times 365 days to get manure per year. However, we adjusted the data to take into account the typical percentage of cows that are either lactating or dry (nonlactating) and applied the different amounts of manure produced by each type of dairy cow. For turkeys, we adjusted the turkey statistics based on the ratio of hens to tom turkeys raised on farms and applied different amounts of manure due to the different sizes of the animals. For hogs, the manure standards report manure produced by hogs covering a specific stage of production: feeder-pig-to-finish pigs—beginning with a pig weighing on average about 27 pounds and resulting in a hog weighing 154 pounds.
Estimates for other hog operation types such as nursery, farrow to feeder, and farrow to finish would therefore differ. Census of Agriculture data for 2002 indicate that about a third of all hogs sold were from the grow-to-finish (called finish only on the survey) operation type. The ASABE manure standards for this type of operation use 154 pounds as the finish weight. However, USDA reports that typical hog finish (slaughter) weights at the time of the 2002 Census were about 260 pounds. For hogs only, we adjusted the ASABE manure estimates by a factor of 1.7 to account for the larger finish weights reported by USDA. We believe this is a conservative adjustment because manure produced by hogs weighing 154 to 260 pounds will be the maximum amount per day that ASABE used to calculate the average pounds produced for the hogs growing from about 27 pounds to 154 pounds. For beef cattle, we used the manure standard for “beef-finishing cattle.” This standard is for cattle fattened from about 740 pounds to about 1,200 pounds at marketing. Beef cattle (listed as cattle on feed) data from the Census are for cattle sold for slaughter and thus similar in weight to those for the standard. The reported manure results for beef cattle are for operations of this type only. In addition, the number of days on feed for hogs, turkeys, and broilers used for the ASABE manure standards does not take into account time between herds or flocks entering and leaving an operation; therefore, we adjusted the manure generated to account for the time between cycles. We recognize that all amounts of manure reported are estimates because amounts of manure per animal type vary by feeding programs, feeds used, climatic conditions, production techniques, and animal genetics, among other things.
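The manure arithmetic described above follows a single pattern: average daily pounds per head, times head count, times 365 days, converted to tons, with animal-specific adjustments applied where noted. A minimal sketch, assuming hypothetical per-head rates (the actual values come from the ASABE standards); the 82,000-layer and 2,500-hog head counts are EPA's minimum large-CAFO thresholds from the text:

```python
def manure_tons_per_year(lbs_per_head_per_day, head_count,
                         adjustment=1.0, days=365):
    """Average daily pounds per head x head count x days, converted to tons."""
    return lbs_per_head_per_day * head_count * days * adjustment / 2000

# Layers at EPA's minimum large-CAFO threshold of 82,000 birds,
# assuming a hypothetical 0.19 pounds of manure per bird per day.
layers = manure_tons_per_year(0.19, 82_000)

# Hogs at EPA's minimum threshold of 2,500 head, assuming a hypothetical
# 9 pounds per head per day, scaled by the 1.7 adjustment the report
# applies for heavier 2002 finish weights.
hogs = manure_tons_per_year(9.0, 2_500, adjustment=1.7)

print(round(layers), round(hogs))  # tons of manure per farm per year
```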
As feeds, animal genetics, and production techniques change in the future, these estimates might change, and may have changed since 2002, but USDA did not provide specific information on what changes have occurred and how those changes may have affected manure production on farms. We did not estimate the ability of the farm or surrounding farms to assimilate the manure if applied to pastures and cropland, nor did we take into account various technologies to process or convert manure. The reported estimates of manure are for amounts produced; we did not determine whether these amounts were discharged into the air or into streams and wetlands. Manure harvested from CAFOs for application to land might be less than that excreted by animals because of shrinkage due to evaporation. To provide perspective on the amount of waste generated by these large farms, we compared it with the amount of human sanitary waste generated in various cities. We selected certain cities on the basis of their population, as reported by the U.S. Census Bureau's Population Estimates for 2002, and calculated the amount of sanitary waste generated by the human population of those cities by applying estimates for human sanitary waste production. Human sanitary waste includes feces and urine but does not include wastes such as water from showers, washing dishes and clothes, and flushing toilets. We found two sources of information for average daily human sanitary waste. Because these sources provided different estimates (2.68 and 4.76 pounds per person per day), we averaged the two amounts (3.72 pounds per person per day) to use in our calculations of human sanitary waste produced for cities. All amounts of human sanitary waste reported are estimates because amounts will vary based on differences in age, dietary habits, activity levels, and climatic conditions, among other things. Human sanitary waste is a small portion of human discharge into sewage systems.
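The city comparison above reduces to a single multiplication. In the sketch below, the 2.68 and 4.76 pound-per-day figures are the two source estimates cited in the text; the population passed in is arbitrary, since the report's own comparisons use Census Bureau 2002 city populations.

```python
# Sketch of the human sanitary waste comparison described in the text.
# 2.68 and 4.76 lb/person/day are the two source estimates; their average
# (3.72) is the rate the report applies.

AVG_LBS_PER_PERSON_PER_DAY = (2.68 + 4.76) / 2  # 3.72

def city_sanitary_waste_tons_per_year(population):
    """Annual human sanitary waste, in tons, for a population of the given size."""
    return population * AVG_LBS_PER_PERSON_PER_DAY * 365 / 2000
```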
Our reported estimates of human sanitary waste for a city are illustrative only and are not intended to be estimates of actual human sanitary waste entering a particular city's waste treatment system. These estimates are for a population the size of the selected cities and assume that residents do not commute outside the city boundaries and that nonresidents do not enter the city for work or other reasons. To identify the findings of recent key academic, industry, and government research on the potential impacts of CAFOs on human health and the environment, and the extent to which EPA has assessed the nature and severity of such impacts, we reviewed EPA's 2003 CAFO rule (for water impact studies) and the findings and supporting documents of the National Academy of Sciences study on air emissions from animal feeding operations (for air impact studies). In addition, we conducted library, online journal, and Internet searches to identify recent studies; consulted with EPA, USDA, state agencies, industry groups, environmental groups, and academia to help identify additional studies; and identified studies through citations in previously identified studies. We included in our review only studies that (1) were peer-reviewed or produced by a federal agency, (2) were new and original research completed since 2002, (3) had a clearly defined methodology, and (4) identified pollutants found in animal waste and/or their impacts. Through this effort, we found over 200 studies and identified 68 that examined air and water quality issues associated with animal waste and met our criteria.
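The four screening criteria above amount to a simple conjunctive filter. The record fields in this sketch are hypothetical, since the report does not describe how the candidate studies were cataloged.

```python
# Hypothetical study records screened against the four inclusion criteria
# described in the text.

def meets_criteria(study):
    """Keep a study only if it satisfies all four screening criteria."""
    return (
        (study["peer_reviewed"] or study["federal_agency"])  # (1) peer-reviewed or federal
        and study["completed_year"] >= 2002                   # (2) new research since 2002
        and study["clear_methodology"]                        # (3) clearly defined methodology
        and study["identifies_pollutants_or_impacts"]         # (4) pollutants and/or impacts
    )

candidates = [
    {"peer_reviewed": True, "federal_agency": False, "completed_year": 2004,
     "clear_methodology": True, "identifies_pollutants_or_impacts": True},
    {"peer_reviewed": False, "federal_agency": False, "completed_year": 2005,
     "clear_methodology": True, "identifies_pollutants_or_impacts": True},
]

included = [s for s in candidates if meets_criteria(s)]  # only the first passes
```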
We also classified these studies according to whether they (1) found a direct link between pollutants from animal waste and impacts on human health or the environment; (2) did not find any impacts on human health or the environment from pollutants from animal waste; (3) found an indirect link between animal waste and human health or environmental impacts; or (4) measured pollutants from animal waste otherwise known to cause human health or environmental impacts. The classification of each study involved two reviewers; if the reviewers disagreed on the classification, they turned to a third reviewer for resolution. Finally, we compared the findings from these studies with EPA assessments to date and interviewed EPA officials regarding these assessments. To determine the progress that EPA and states have made in regulating and controlling the air emissions of, and in developing protocols to measure, air pollutants from CAFOs, we reviewed relevant documents, interviewed officials responsible for the ongoing air monitoring study, and visited several National Air Emissions Monitoring Study sites in North Carolina. Additionally, we interviewed industry and environmental groups, the umbrella association for state and local clean air agencies, and citizen groups about how EPA air emissions protocols affect them. We also contacted state CAFO officials in all 50 states to determine which states had developed air emission regulations applicable to CAFOs. Officials in 47 states responded; these 47 states account for an estimated 99 percent of large animal feeding operations that could be defined as CAFOs under EPA's 2003 rule. Finally, to determine the extent to which recent court decisions have affected EPA's and the states' ability to regulate CAFO discharges that impair water quality, we examined recent federal decisions, including Waterkeeper Alliance Inc. v. EPA (Waterkeeper) and the Supreme Court's 2006 decision in Rapanos v. United States.
We interviewed EPA officials about how these court decisions have affected their regulations. To better understand the bases for the lawsuits and what has occurred since the court decisions, we contacted plaintiffs and defendants involved in Waterkeeper and other court cases, including industry and environmental groups. To identify the impact of these cases on states' regulations, we contacted state CAFO officials in all 50 states to determine how the Waterkeeper decision affected their regulations, asking the states whether the decision had affected their state's CAFO program. Using the responses we received from 47 states, we conducted content analyses and classified the responses into six categories: the decision (1) had little impact on the state program, (2) caused the state to wait for EPA guidance, (3) impaired the state program, (4) led the state to proactively change legislation, (5) reduced the number of CAFOs with permits, or (6) other. Some officials identified more than one impact. The responses in the "other" category included such responses as "not applicable," "because the state does not have delegated authority," and "we have spent a large amount of time studying the ruling and commenting on EPA proposed rules that were developed to satisfy the ruling." We conducted this performance audit between July 2007 and August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. EPA's National Pollutant Discharge Elimination System (NPDES) permit program regulates the discharge of pollutants from point sources to waters of the United States. The Clean Water Act defines point sources to include CAFOs.
To be considered a CAFO, a facility must first be defined as an animal feeding operation, which is a lot or facility (other than an aquatic animal production facility) where the following conditions are met: Animals have been, are, or will be stabled or confined and fed or maintained for a total of 45 days or more in any 12-month period. Crops, vegetation, forage growth, or post-harvest residues are not sustained in the normal growing season over any portion of the lot or facility. Generally, CAFOs must meet the above definition of an animal feeding operation and stable or confine a certain minimum number of animals at the operation. EPA classifies CAFOs as large, medium, or small, based on size. Table 12 shows the numbers of animals at a farm that meet EPA's definitions of large, medium, and small CAFOs. In addition to size, EPA uses the following criteria to determine whether a CAFO operator needs to apply for an NPDES permit. A large CAFO confines at least the number of animals described in table 12. A medium CAFO falls within the size range in table 12 and either discharges pollutants into federally regulated waters through a manmade ditch, flushing system, or similar manmade device; discharges pollutants directly into federally regulated waters that originate outside of and pass over, across, or through the facility or otherwise come into contact with animals confined in the operation; or is designated as a CAFO by the permitting authority as a significant contributor of pollutants. A small CAFO confines the number of animals described in table 12 and has been designated as a CAFO by the permitting authority as a significant contributor of pollutants.

This appendix provides our analysis of USDA's data on trends in the number of all animal farms and the number of animals raised on large farms per day, for all animal types, for the period from 1982 through 2002.

Ankley, Gerald T., Kathleen M. Jensen, Elizabeth A. Makynen, Michael D. Kahl, Joseph J.
Korte, Michael W. Hornung, Tala R. Henry, Jeffrey S. Denny, Richard L. Leino, Vickie S. Wilson, et al. "Effects of the Androgenic Growth Promoter 17β-trenbolone on Fecundity and Reproductive Endocrinology of the Fathead Minnow." Environmental Toxicology and Chemistry. Vol. 22, no. 6 (2003): 1,350-1,360. Fertility of fish was significantly reduced by hormones, and female fish developed male sex characteristics.

Clark, Clifford G., Lawrence Price, Rafiq Ahmed, David L. Woodward, Pasquale L. Melito, Frank G. Rodgers, Frances Jamieson, Bruce Ciebin, Aimin Li, and Andrea Ellis. "Characterization of Waterborne Outbreak-Associated Campylobacter jejuni, Walkerton, Ontario." Emerging Infectious Diseases. Vol. 9, no. 10 (2003): 1,232-1,241. Cattle manure from a nearby farm entered the groundwater system and caused gastrointestinal illness and death in residents.

Diesel, Elizabeth A., Melissa L. Wilson, Ryan Mathur, Evan Teeters, David Lehmann, and Caitlan Ziatos. "Nutrient Loading Patterns on an Agriculturally Impacted Stream System in Huntingdon County, Pennsylvania, Over Three Summers." Northeastern Geology & Environmental Sciences. Vol. 29, no. 1 (2007): 25-33. Excess nutrients from CAFO manure contributed significantly to impaired water quality and resulted in the inability to sustain fish populations.

Hill, Dagne D., William E. Owens, and Paul B. Tchounwou. "Impact of Animal Waste Application on Runoff Water Quality in Field Experimental Plots." International Journal of Environmental Research and Public Health. Vol. 2, no. 2 (2005): 314-321. Nutrients from manure spread on fields contributed to water pollution.

Jensen, Kathleen M., Elizabeth A. Makynen, Michael D. Kahl, and Gerald T. Ankley. "Effects of the Feedlot Contaminant 17α-Trenbolone on Reproductive Endocrinology of the Fathead Minnow." Environmental Science & Technology. Vol. 40, no. 9 (2006): 3,112-3,117. Fertility of fish was significantly reduced by hormones, and female fish developed male sex characteristics.
Orlando, Edward F., Alan S. Kolok, Gerry A. Binzcik, Jennifer L. Gates, Megan K. Horton, Christy S. Lambright, L. Earl Gray, Jr., Ana M. Soto, and Louis J. Guillette, Jr. "Endocrine-Disrupting Effects of Cattle Feedlot Effluent on an Aquatic Sentinel Species, the Fathead Minnow." Environmental Health Perspectives. Vol. 112, no. 3 (2004): 353-358. University of Florida, St. Mary's College of Maryland, University of Nebraska, EPA, Tufts University. Male fish were demasculinized and there was defeminization of female fish.

Weldon, Mark B. and Keri C. Hornbuckle. "Concentrated Animal Feeding Operations, Row Crops, and Their Relationship to Nitrate in Eastern Iowa Rivers." Environmental Science & Technology. Vol. 40, no. 10 (2006): 3,168-3,173. High concentrations of nutrients in waters are a result of CAFO manure and degrade water quality.

Mathisen, T., S. G. Von Essen, T. A. Wyatt, and D. J. Romberger. "Hog Barn Dust Extract Augments Lymphocyte Adhesion to Human Airway Epithelial Cells." Journal of Applied Physiology. Vol. 96, no. 5 (2004): 1,738-1,744. Dust from hog confinement facilities induces airway inflammation.

Romberger, D. J., V. Bodlak, S. G. Von Essen, T. Mathisen, and T. A. Wyatt. "Hog Barn Dust Extract Stimulates IL-8 and IL-6 Release in Human Bronchial Epithelial Cells via PKC Activation." Journal of Applied Physiology. Vol. 93, no. 1 (2002): 289-296. Dust from hog confinement facilities induces airway inflammation.

Schiffman, Susan S., Clare Studwell, Lawrence R. Landerman, Katherine Berman, and John S. Sundy. "Symptomatic Effects of Exposure to Diluted Air Sampled from a Swine Confinement Atmosphere on Healthy Human Subjects." Environmental Health Perspectives. Vol. 113, no. 5 (2005): 567-576. Short-term exposure to emissions expected downwind from a swine confinement facility can induce headaches, eye irritation, and nausea.

Sigurdarson, Sigurdur T., Patrick T. O'Shaughnessy, Janet A. Watt, and Joel N. Kline.
"Experimental Human Exposure to Inhaled Grain Dust and Ammonia: Towards a Model of Concentrated Animal Feeding Operations." American Journal of Industrial Medicine. Vol. 46, issue 5 (2004): 345-348. Exposure to endotoxin-rich dust from CAFOs causes airflow obstruction in subjects with mild asthma.

Sundblad, B-M., B-M. Larsson, L. Palmberg, and K. Larsson. "Exhaled Nitric Oxide and Bronchial Responsiveness in Healthy Subjects Exposed to Organic Dust." European Respiratory Journal. Vol. 20, no. 2 (2002): 426-431. Airway inflammation is induced by exposure to a farming environment.

Wickens, K., et al. "Farm Residence and Exposures and the Risk of Allergic Diseases in New Zealand Children." Allergy. Vol. 57, no. 12 (2002): 1,171-1,179. University of Otago (New Zealand), Utrecht University (The Netherlands). There was a greater prevalence of allergic disease for children on farms.

Wilson, Vickie S., Christy Lambright, Joe Ostby, and L.E. Gray, Jr. "In Vitro and in Vivo Effects of 17β-Trenbolone: A Feedlot Effluent Contaminant." Toxicological Sciences. Vol. 70, no. 2 (2002): 202-211. Hormones found in feedlot effluent caused reproductive malformations in laboratory rats and human cells.

Wyatt, Todd A., Rebecca E. Slager, Jane DeVasure, Brent W. Auvermann, Michael L. Mulhern, Susanna Von Essen, Tracy Mathisen, Anthony A. Floreani, and Debra J. Romberger. "Feedlot Dust Stimulation of Interleukin-6 and 8 Requires Protein Kinase C-epsilon in Human Bronchial Epithelial Cells." American Journal of Physiology-Lung Cellular and Molecular Physiology. Vol. 293, no. 5 (2007): 1,163-1,170. Dust extract from cattle feedlots stimulates airway inflammation at concentrations found downwind from the operation.

Hill, Dagne D., William E. Owens, and Paul B. Tchounwou. "Prevalence of Escherichia coli O157:H7 Bacterial Infections Associated With the Use of Animal Wastes in Louisiana for the Period 1996-2004." International Journal of Environmental Research and Public Health. Vol. 3, no.
1 (2006): 107-113. Escherichia coli (not measured). Although some of the parishes surveyed had large amounts of animal waste generated each year, statistics did not show a correlation with Escherichia coli O157:H7 bacterial infections.

Hill, Dagne D., William E. Owens, and Paul B. Tchounwou. "Prevalence of Selected Bacterial Infections Associated with the Use of Animal Waste in Louisiana." International Journal of Environmental Research and Public Health. Vol. 2, no. 1 (2005): 84-93. Escherichia coli (not measured). Although the four parishes surveyed had large amounts of animal waste generated, statistics do not show a correlation between this and bacterial infections.

Krapac, I.G., W.S. Dey, W.R. Roy, C.A. Smyth, E. Storment, S.L. Sargent, and J.D. Steele. "Impacts of Swine Manure Pits on Groundwater Quality." Environmental Pollution. Vol. 120, issue 2 (2002): 475-492. Groundwater near swine CAFOs has not been significantly impacted.

Mugel, Douglas N. "Ground-Water Quality and Effects of Poultry Confined Animal Feeding Operations on Shallow Ground Water, Upper Shoal Creek Basin, Southwest Missouri, 2000." U.S. Geological Survey Water-Resources Investigations Report 02-4125 (2002). The results do not indicate that poultry CAFOs are affecting the shallow ground water with respect to nutrients and fecal bacteria.

Braun-Fahrlander, Charlotte, Josef Riedler, Udo Herz, Waltraud Eder, Marco Waster, Leticia Grize, Soyoun Maisch, David Carr, Florian Gerlach, and Albrecht Bufe. "Environmental Exposure to Endotoxin and its Relation to Asthma in School-Age Children." The New England Journal of Medicine. Vol. 347, no. 12 (2002): 869-877. Institute of Social and Preventive Medicine (Switzerland), Children's Hospital (Austria), Philipps University (Germany), Ruhr University (Germany), University Children's Hospital (Switzerland), University of Munich (Germany). Decreased risk of hay fever, asthma, and wheeze in children exposed to high levels of endotoxin in dust.
Elliott, L., K. Yeatts, and D. Loomis. "Ecological Associations Between Asthma Prevalence and Potential Exposure to Farming." European Respiratory Journal. Vol. 24, no. 6 (2004): 938-941. Findings are consistent with the hypothesis that certain farm exposures are protective against childhood asthma.

McGinn, S. M., H. H. Janzen, and T. Coates. "Atmospheric Pollutants and Trace Gases: Atmospheric Ammonia, Volatile Fatty Acids, and Other Odorants near Beef Feedlots." Journal of Environmental Quality. Vol. 32, no. 4 (2003): 1,173-1,182. Odorants from feedlots were effectively dispersed. Emitted ammonia was deposited to the soil downwind.

Studies showing an indirect link between pollutants and impacts

Valcour, James E., Pascal Michel, Scott A. McEwen, and Jeffrey B. Wilson. "Associations between Indicators of Livestock Farming Intensity and Incidence of Human Shiga Toxin-Producing Escherichia coli Infection." Emerging Infectious Diseases. Vol. 8, no. 3 (2002): 252-257. Escherichia coli (not measured). The strongest associations with human Escherichia coli infection were the ratio of beef cattle to human population and the application of manure to the surface of agricultural land by a solid spreader and by a liquid spreader.

Wing, Steve, Stephanie Freedman, and Lawrence Band. "The Potential Impact of Flooding on Confined Animal Feeding Operations in Eastern North Carolina." Environmental Health Perspectives. Vol. 110, no. 4 (2002): 387-391. Flood events have a significant potential to degrade environmental health because of dispersion of wastes from industrial animal operations in areas with vulnerable populations.

Avery, Rachel C., Steve Wing, Stephen W. Marshall, and Susan S. Schiffman. "Odor from Industrial Hog Farming Operations and Mucosal Immune Function in Neighbors." Archives of Environmental Health. Vol. 59, no. 2 (2004): 101-108.
This study suggests that malodor from industrial swine operations can affect the secretory immune system, although the reduced levels reported are still within the normal range.

Bullers, Susan. "Environmental Stressors, Perceived Control, and Health: The Case of Residents Near Large-Scale Hog Farms in Eastern North Carolina." Human Ecology. Vol. 33, no. 1 (2005): 1-16. Residents living near large-scale hog farms in eastern North Carolina report symptoms related to respiratory and sinus problems and nausea.

Chénard, Liliane, Ambikaipakan Senthilselvan, Vaneeta K. Grover, Shelley P. Kirychuk, Joshua A. Lawson, Thomas S. Hurst, and James A. Dosman. "Lung Function and Farm Size Predict Healthy Worker Effect in Swine Farmers." Chest. Vol. 131, no. 1 (2007): 245-254. Some swine workers are less affected by swine air and continue in the profession; other workers are more affected.

Chrischilles, Elizabeth, Richard Ahrens, Angela Kuehl, Kevin Kelly, Peter Thorne, Leon Burmeister, and James Merchant. "Asthma Prevalence and Morbidity Among Rural Iowa Schoolchildren." Journal of Allergy and Clinical Immunology. Vol. 113, no. 1 (2004): 66-71. Among children who wheeze, farm and nonfarm children were equally likely to have been given a diagnosis of asthma and had comparable morbidity.

Dosman, J.A., J.A. Lawson, S.P. Kirychuk, Y. Cormier, J. Biem, and N. Koehncke. "Occupational Asthma in Newly Employed Workers in Intensive Swine Confinement Facilities." European Respiratory Journal. Vol. 24, no. 6 (2004): 698-702. Institute of Agricultural Rural and Environmental Health, University of Saskatchewan (Canada), Laval University (Canada). Newly employed workers in intensive swine confinement facilities reported development of acute onset of wheezing and cough suggestive of asthma.

Merchant, James A., Allison L. Naleway, Erik R. Svendsen, Kevin M. Kelly, Leon F. Burmeister, Ann M. Stromquist, Craig D. Taylor, Peter S. Thorne, Stephen J. Reynolds, Wayne T.
Sanderson, and Elizabeth A. Chrischilles. "Asthma and Farm Exposures in a Cohort of Rural Iowa Children." Environmental Health Perspectives. Vol. 113, no. 3 (2005): 350-356. There was a high prevalence of asthma among farm children living on farms that raise swine, particularly farms that raise swine and add antibiotics to feed.

Mirabelli, Maria C., Steve Wing, Stephen W. Marshall, and Timothy C. Wilcosky. "Asthma Symptoms Among Adolescents Who Attend Public Schools That Are Located Near Confined Swine Feeding Operations." Pediatrics. Vol. 118, no. 1 (2006): 66-75. Estimated exposure to airborne pollution from confined swine feeding operations is associated with adolescents' wheezing symptoms.

Palmberg, Lena, Britt-Marie Larsson, Per Malmberg, and Kjell Larsson. "Airway Responses of Healthy Farmers and Nonfarmers to Exposure in a Swine Confinement Building." Scandinavian Journal of Work, Environment, and Health. Vol. 28, no. 4 (2002): 256-263. National Institute of Environmental Medicine (Sweden), National Institute for Working Life (Sweden). Altered lung function and bronchial responsiveness were found in nonfarming subjects; only minor alterations were found in the farmers.

Radon, Katja, Anja Schulze, Vera Ehrenstein, Rob T. van Strien, Georg Praml, and Dennis Nowak. "Environmental Exposure to Confined Animal Feeding Operations and Respiratory Health of Neighboring Residents." Epidemiology. Vol. 18, no. 3 (2007): 300-308. Respiratory disease was found among residents living near confined animal feeding operations.

Sigurdarson, Sigurdur T. and Joel N. Kline. "School Proximity to Concentrated Animal Feeding Operations and Prevalence of Asthma in Students." Chest. Vol. 129, no. 6 (2006): 1,486-1,491. Children in the study school, located one-half mile from a CAFO, had a significantly increased prevalence of physician-diagnosed asthma.

Anderson, M.E. and M.D. Sobsey. "Detection and Occurrence of Antimicrobially Resistant E.
coli in Groundwater on or near Swine Farms in Eastern North Carolina." Water Science & Technology. Vol. 54, no. 3 (2006): 211-218. Antibiotic-resistant E. coli strains are present in groundwaters of swine farms.

Batt, Angela L., Daniel D. Snow, and Diana S. Aga. "Occurrence of Sulfonamide Antimicrobials in Private Water Wells in Washington County, Idaho, USA." Chemosphere. Vol. 64, issue 11 (2006): 1,963-1,971. All six sampled wells were contaminated by veterinary antimicrobials and had elevated concentrations of nitrate and ammonium. Three wells had nitrate levels exceeding EPA thresholds.

Campagnolo, Enzo R., Kammy R. Johnson, Adam Karpati, Carol S. Rubin, Dana W. Kolpin, Michael T. Meyer, J. Emilio Esteban, Russell W. Currier, Kathleen Smith, Kendall M. Thu, and Michael McGeehin. "Antimicrobial Residues in Animal Waste and Water Resources Proximal to Large-Scale Swine and Poultry Feeding Operations." The Science of the Total Environment. Vol. 299, no. 1 (2002): 89-95. CDC, U.S. Geological Survey, Iowa Department of Public Health, Ohio Department of Health, University of Iowa. Antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms.

Durhan, Elizabeth J., Christy S. Lambright, Elizabeth A. Makynen, James Lazorchak, Phillip C. Hartig, Vickie S. Wilson, L. Earl Gray, and Gerald T. Ankley. "Identification of Metabolites of Trenbolone Acetate in Androgenic Runoff from a Beef Feedlot." Environmental Health Perspectives. Vol. 114, supp. 1 (2006): 65-68. Whole-water samples from the discharge contained detectable concentrations of hormones.

Gessel, Peter D., Neil C. Hansen, Sagar M. Goyal, Lee J. Johnston, and Judy Webb. "Persistence of Zoonotic Pathogens in Surface Soil Treated with Different Rates of Liquid Pig Manure." Applied Soil Ecology. Vol. 25, issue 23 (2004): 237-243.
Manure application rate was correlated positively with the persistence of fecal indicators but did not relate to survival of indicators with short survival times.

Haggard, Brian E., Paul B. DeLaune, Douglas R. Smith, and Philip A. Moore, Jr. "Nutrient and 17β-Estradiol Loss in Runoff Water from Poultry Litters." Journal of the American Water Resources Association. Vol. 41, no. 2 (2005): 245-256. In general, poultry litter applications increased nutrient and hormone concentrations in runoff water.

Hutchins, Stephen R., Mark V. White, Felisa M. Hudson, and Dennis D. Fine. "Analysis of Lagoon Samples from Different Concentrated Animal Feeding Operations for Estrogens and Estrogen Conjugates." Environmental Science & Technology. Vol. 41, no. 3 (2007): 738-744. Estrogen conjugates contribute significantly to the overall estrogen load, even in different types of CAFO lagoons.

Koike, S., I.G. Krapac, H.D. Oliver, A.C. Yannarell, J.C. Chee-Sanford, R.I. Aminov, and R.I. Mackie. "Monitoring and Source Tracking of Tetracycline Resistance Genes in Lagoons and Groundwater Adjacent to Swine Production Facilities over a 3-Year Period." Applied and Environmental Microbiology. Vol. 73, no. 15 (2007): 4,813-4,823. University of Illinois, USDA, Illinois State Geological Survey, Rowett Research Institute (UK). Antibiotic resistance genes in groundwater are affected by swine manure and are also part of the indigenous gene pool.

Miller, David H. and Gerald T. Ankley. "Modeling Impacts on Populations: Fathead Minnow (Pimephales promelas) Exposure to the Endocrine Disruptor 17β-Trenbolone as a Case Study." Ecotoxicology and Environmental Safety. Vol. 59, issue 1 (2004): 1-9. The model shows that if fathead minnows are exposed to continuous concentrations of the hormone, there will be a risk of extinction.

Nelson, Nathan O., John E. Parsons, and Robert L. Mikkelsen. "Field-Scale Evaluation of Phosphorus Leaching in Acid Sandy Soils Receiving Swine Waste." Journal of Environmental Quality. Vol. 34, no.
6 (2005): 2,024-2,035. The results show that substantial quantities of phosphorus can be leached through soils with low phosphorus sorption capacities.

Peak, Nicholas, Charles W. Knapp, Richard K. Yang, Margery M. Hanfelt, Marilyn S. Smith, Diana S. Aga, and David W. Graham. "Abundance of Six Tetracycline Resistance Genes in Wastewater Lagoons at Cattle Feedlots with Different Antibiotic Use Strategies." Environmental Microbiology. Vol. 9, no. 1 (2007): 143-151. CAFOs using larger amounts of antibiotics had significantly higher detected resistance gene levels.

Sapkota, Amy R., Frank C. Curriero, Kristen E. Gibson, and Kellogg J. Schwab. "Antibiotic-Resistant Enterococci and Fecal Indicators in Surface Water and Groundwater Impacted by a Concentrated Swine Feeding Operation." Environmental Health Perspectives. Vol. 115, no. 7 (2007): 1,040-1,045. Detected elevated levels of fecal indicators and antibiotic-resistant bacteria in water sources down gradient from a swine facility.

Soto, Ana M., Janine M. Calabro, Nancy V. Prechtl, Alice Y. Yau, Edward F. Orlando, Andreas Daxenberger, Alan S. Kolok, Louis J. Guillette, Jr., Bruno le Bizec, Iris G. Lange, and Carlos Sonnenschein. "Androgenic and Estrogenic Activity in Water Bodies Receiving Cattle Feedlot Effluent in Eastern Nebraska, USA." Environmental Health Perspectives. Vol. 112, no. 3 (2004): 346-352. Feedlot effluents contain sufficient levels of hormonally active agents to warrant further investigation of possible effects on aquatic ecosystem health.

Christian, Thorsten, Rudolf J. Schneider, Harald A. Farber, Dirk Skutlarek, Michael T. Meyer, and Heiner E. Goldbach. "Determination of Antibiotic Residues in Manure, Soil, and Surface Waters." Acta hydrochimica et hydrobiologica. Vol. 31, no. 1 (2003): 36-44. Antibiotics could be detected in each of the surface waters tested.

Thurston-Enriquez, Jeanette A., John E. Gilley, and Bahman Eghball.
"Microbial Quality of Runoff Following Land Application of Cattle Manure and Swine Slurry." Journal of Water and Health. Vol. 3, no. 2 (2005): 157-171. Large microbial loads could be released via heavy precipitation events and could have a significant impact on water bodies.

Toetz, Dale. "Nitrate in Ground and Surface Waters in the Vicinity of a Concentrated Animal Feeding Operation." Archives of Hydrobiology. Vol. 166, no. 1 (2006): 67-77. Drinking water was contaminated, with CAFOs as the suspected source.

U.S. Department of the Interior. U.S. Geological Survey. In cooperation with U.S. Environmental Protection Agency, National Exposure Research Laboratory. Geochemistry and Characteristics of Nitrogen Transport at a Confined Animal Feeding Operation in a Coastal Plain Agricultural Watershed, and Implications for Nutrient Loading in the Neuse River Basin, North Carolina, 1999-2002. Scientific Investigations Report 2004-5283. Reston, Va. (2004). Large amounts of nitrogen moving in the estuary as a result of extreme events may potentially cause algal growth.

United States Geological Survey in cooperation with Virginia Department of Health. Water-Quality Data from Ground- and Surface-Water Sites near Concentrated Animal Feeding Operations (CAFOs) and non-CAFOs in the Shenandoah Valley and Eastern Shore of Virginia, January-February, 2004. Reston, Va. (2005).

United States Geological Survey. Fractionation and Characterization of Organic Matter in Wastewater from a Swine Waste-Retention Basin. Scientific Investigations Report 2004-5217 (2004). The bulk of the organic matter consists of microbial cellular constituents and their degradation products.

Chapin, Amy, Ana Rule, Kristen Gibson, Timothy Buckley, and Kellogg Schwab. "Airborne Multidrug-Resistant Bacteria Isolated from a Concentrated Swine Feeding Operation." Environmental Health Perspectives. Vol. 113, no. 2 (2005): 137-142.
Multidrug-resistant bacterial pathogens were detected in the air of a swine CAFO.

Donham, Kelley J., Joung Ae Lee, Kendall Thu, and Stephen J. Reynolds. "Assessment of Air Quality at Neighbor Residences in the Vicinity of Swine Production Facilities." Journal of Agromedicine. Vol. 11, no. 3-4 (2006): 15-24. Average concentration of hydrogen sulfide exceeded EPA-recommended community standards in all three areas assessed.

Gibbs, Shawn G., Christopher F. Green, Patrick M. Tarwater, Linda C. Mota, Kristina D. Mena, and Pasquale V. Scarpino. "Isolation of Antibiotic-Resistant Bacteria from the Air Plume Downwind of a Swine Confined or Concentrated Animal Feeding Operation." Environmental Health Perspectives. Vol. 114, no. 7 (2006): 1,032-1,037. Bacteria with multiple antibiotic resistances or multidrug resistance were recovered inside the facility and up to 150 m downwind, even after antibiotic use was discontinued.

Harper, Lowry A., Ron R. Sharpe, Tim B. Parkin, Alex De Visscher, Oswald van Cleemput, and F. Michael Byers. "Nitrogen Cycling through Swine Production Systems: Ammonia, Dinitrogen, and Nitrous Oxide Emissions." Journal of Environmental Quality. Vol. 33, no. 4 (2004): 1,189-1,201. USDA, Ghent University (Belgium). In contrast with previous and current estimates of ammonia emissions from CAFOs, this study found smaller ammonia emissions from animal housing, lagoons, and fields.

Hamscher, Gerd, Heike Theresia Pawelzick, Silke Sczesny, Heinz Nau, and Jörg Hartung. "Antibiotics in Dust Originating from a Pig-Fattening Farm: A New Source of Health Hazard for Farmers?" Environmental Health Perspectives. Vol. 111, no. 13 (2003): 1,590-1,594. Five different antibiotics were detected in dust samples from a swine feeding operation.

Hoff, Steven J., Dwaine S. Bundy, Minda A. Nelson, Brian C. Zelle, Larry D. Jacobson, Albert J. Heber, Jinqin Ni, Yuanhui Zhang, Jacek A. Koziel, and David B. Beasley.
“Emissions of Ammonia, Hydrogen Sulfide, and Odor before, during, and after Slurry Removal from a Deep-Pit Swine Finisher.” Journal of the Air & Waste Management Association. Vol. 56, no. 5 (2006): 581-590. Emissions of ammonia, hydrogen sulfide, and odor had large increases during slurry removal. A slurry removal event will result in acute exposure for animals and workers. O’Connor, Rod, Mark O’Connor, Kurt Irgolic, Justin Sabrsula, Hakan Gurleyuk, Robert Brunette, Crystal Howard, Jennifer Garcia, John Brien, June Brien, and Jessica Brien. “Transformations, Air Transport, and Human Impact of Arsenic from Poultry Litter.” Environmental Forensics. Vol. 6, no. 1 (2005): 83-89. Arsenic was found in homes at levels that could represent a significant health risk. Radon, Katja, Brigitta Danuser, Martin Iversen, Eduard Monso, Christoph Weber, Jorg Hartung, Kelley J. Donham, Urban Palmgren, and Dennis Nowak. “Air Contaminants in Different European Farming Environments.” Annals of Agriculture and Environmental Medicine. Vol. 9, no. 1 (2002): 41-48. Ludwig-Maximilians-University (Germany), Swiss Federal Institute of Technology, Aarhus University Hospital (Denmark), Hospital Germans Trias i Pujol (Spain), School of Veterinary Medicine (Germany), University of Iowa, Pegasus Labor GmbH (Germany). The exposure level found in this study might put the farmers at risk of respiratory diseases. Razote, E.B., R.G. Maghirang, B.Z. Predicala, J.P. Murphy, B.W. Auvermann, J.P. Harner III, and W.L. Hargrove. “Laboratory Evaluation of the Dust-Emission Potential of Cattle Feedlot Surfaces.” Transactions of the ASABE. Vol. 49, no. 4 (2006): 1,117-1,124. Robarge, Wayne P., John T. Walker, Ronald B. McCulloch, and George Murray. “Atmospheric Concentrations of Ammonia and Ammonium at an Agricultural Site in the Southeast United States.” Atmospheric Environment. Vol. 36, no. 10 (2002): 1,661-1,674. Elevated ambient ammonia concentrations near an agricultural site.
United States Environmental Protection Agency. National Emission Inventory – Ammonia Emissions from Animal Husbandry Operations, Draft Report. Washington, D.C. (2004). Walker, J.T., W.P. Robarge, Y. Wu, and T.P. Meyers. “Measurement of Bi-Directional Ammonia Fluxes over Soybean Using the Modified Bowen-Ratio Technique.” Agricultural and Forest Meteorology. Vol. 138, no. 1-4 (2006): 54-68. In general, the net deposition flux was lower than expected. Walker, John T., Wayne P. Robarge, Arun Shendrikar, and Hoke Kimball. “Inorganic PM2.5 at a U.S. Agricultural Site.” Environmental Pollution. Vol. 139, no. 2 (2006): 258-271. Model results show that reductions in atmospheric ammonia will have minimal effect on organic PM2.5 during summer and a moderate effect during winter. Walker, J.T., Dave R. Whitall, Wayne P. Robarge, and Hans W. Paerl. “Ambient Ammonia and Ammonium Aerosol across a Region of Variable Ammonia Emission Density.” Atmospheric Environment. Vol. 38, no. 9 (2004): 1,235-1,246. Agricultural ammonia emissions influence local ambient concentrations of ammonia and PM2.5. Wilson, Sacoby M. and Marc L. Serre. “Examination of Atmospheric Ammonia Levels near Hog CAFOs, Homes, and Schools in Eastern North Carolina.” Atmospheric Environment. Vol. 41, issue 23 (2007): 4,977–4,987. Distance to one or more CAFOs is the key variable in controlling atmospheric ammonia at the community level in Eastern N.C. Muller-Suur, C., P.H. Larsson, K. Larsson, and J. Grunewald. “Lymphocyte Activation after Exposure to Swine Dust: A Role of Humoral Mediators and Phagocytic Cells.” European Respiratory Journal. Vol. 19, issue 1 (2002): 104-107. About immune system response. Charavaryamath, Chandrashekhar, Kyathanahalli S. Janardhan, Hugh G. Townsend, Philip Willson, and Baljit Singh. “Multiple Exposures to Swine Barn Air Induce Lung Inflammation and Airway Hyper-Responsiveness.” Respiratory Research. Vol. 6, no. 1 (2005): 50-66. Does not address human impacts.
Eduard, Wijnand, Ernst Omenaas, Per Sigvald Bakke, Jeroen Douwes, and Dick Heederik. “Atopic and Non-atopic Asthma in a Farming and a General Population.” American Journal of Industrial Medicine. Vol. 46, issue 4 (2004): 396-399. National Institute of Occupational Health (Norway), University of Bergen (Norway), University of Wellington (New Zealand). Protective effect of the farm environment on asthma. In addition to the individual named above, Sherry L. McDonald, Assistant Director; Kevin Bray; Yecenia C. Camarillo; Wendy Dye; Paul Hobart; Cathy Hurley; Holly L. Sasso; James W. Turkett; and Greg Wilmoth made key contributions to this report. Also contributing to this report were Elizabeth Beardsley, Ben N. Shouse, and Carol Herrnstadt Shulman.

Concentrated Animal Feeding Operations (CAFO) are large livestock and poultry operations that raise animals in a confined situation. CAFOs can improve the efficiency of animal production, but the large amounts of manure produced can, if not properly managed, degrade air and water quality. The Environmental Protection Agency (EPA) is responsible for regulating CAFOs and requires CAFOs that discharge certain pollutants to obtain a permit. This report discusses the (1) trends in CAFOs over the past 30 years, (2) amounts of waste they generate, (3) findings of key research on CAFOs' health and environmental impacts, (4) EPA's progress in developing CAFO air emissions protocols, and (5) effect of recent court decisions on EPA's regulation of CAFO water pollutants. GAO analyzed U.S. Department of Agriculture (USDA) data from 1982 through 2002 for large farms as a proxy for CAFOs; reviewed studies, EPA documents, laws, and regulations; and obtained the views of federal and state officials. Because no federal agency collects consistent, reliable data on CAFOs, GAO could not determine the trends in these operations over the past 30 years.
However, using USDA data for large farms that raise animals as a proxy for CAFOs, it appears that the number of these operations increased by about 230 percent, going from about 3,600 in 1982 to almost 12,000 in 2002. Also, during this 20-year period the number of animals per farm had increased, although it varied by animal type. Moreover, GAO found that EPA does not have comprehensive, accurate information on the number of permitted CAFOs nationwide. As a result, EPA does not have the information it needs to effectively regulate these CAFOs. EPA is currently working with the states to establish a new national data system. The amount of manure generated by large farms that raise animals depends on the type and number of animals raised, but large operations can produce more than 1.6 million tons of manure a year. Some large farms that raise animals can generate more raw waste than the populations of some U.S. cities produce annually. In addition, according to some agricultural experts, the clustering of large operations in certain geographic areas may result in large amounts of manure that cannot be effectively used as fertilizer on adjacent cropland and could increase the potential of pollutants reaching nearby waters and degrading water quality. Since 2002, at least 68 government-sponsored or peer-reviewed studies have been completed that examined air and water quality issues associated with animal feeding operations and 15 have directly linked air and water pollutants from animal waste to specific health or environmental impacts. EPA has not yet assessed the extent to which these pollutants may be impairing human health and the environment because it lacks key data on the amount of pollutants that are being emitted from animal feeding operations. As a first step in developing air emissions protocols for animal feeding operations, in 2007, a 2-year nationwide air emissions monitoring study, largely funded by industry, was initiated. 
However, as currently structured, the study may not provide the scientific and statistically valid data it was intended to provide and that EPA needs to develop air emissions protocols. Furthermore, EPA has not established a strategy or timetable for developing a more sophisticated process-based model that considers the interaction and implications of all emission sources at an animal feeding operation. Two recent federal court decisions have affected EPA's ability to regulate water pollutants discharged by CAFOs. The 2005 Waterkeeper case required EPA to abandon the approach that it had proposed in 2003 for regulating CAFO water discharges. Similarly, the 2006 Rapanos case has complicated EPA's enforcement of CAFO discharges because EPA believes that it must now gather significantly more evidence to establish which waters are subject to the Clean Water Act's permitting requirements.
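The roughly 230 percent growth figure cited in the summary above can be checked with simple percent-increase arithmetic; the sketch below uses the report's own counts of large farms (about 3,600 in 1982, almost 12,000 in 2002).

```python
# Percent-increase check for the large-farm counts cited in the report.
def percent_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

growth = percent_increase(3_600, 12_000)
print(f"{growth:.0f}%")  # about 230 percent, consistent with the report
```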
Health systems may use a variety of financial incentive programs to encourage improvements in the quality and efficiency of health care delivery. The payment of rewards to physicians, however, creates financial relationships that may implicate, that is, give rise to concern under, federal fraud and abuse laws designed to protect against undue influences on medical judgment. Health systems may offer a variety of financial incentive programs to encourage improvements in quality and efficiency, including those that help align incentives between hospitals and physicians. Health systems can use pay-for-performance programs to reward physicians for adherence to clinical protocols or objective improvement in individual patient care outcomes. They can also use shared savings programs to align physician incentives with those of hospitals by offering physicians a percentage of the hospitals’ cost savings attributable to the physicians’ efforts in controlling the costs and improving or maintaining the quality of patient care. These are often referred to as gainsharing arrangements. Although results from financial incentive programs tried to date have been mixed, some experts believe they have the potential to increase quality and efficiency. While pay-for-performance programs tend to have explicit goals of quality improvement rather than efficiency improvement, these programs can improve quality and efficiency by rewarding physicians for adhering to clinical protocols. For example, these programs may result in savings for Medicare if the programs lead to better patient health outcomes, fewer medical interventions, and a reduction in the provision of services that are not medically necessary. Similarly, shared savings programs that reward physicians for using less expensive hospital supplies may result in savings for Medicare by lowering hospital costs. Specifically, shared savings programs, if implemented on a broad scale, could lower hospital costs sufficiently to reduce Medicare’s hospital payments. The availability of financial incentives, however, may affect a physician’s judgment, introducing a profit motive that may lead to inappropriate referrals or reductions or limitations in services. In this respect, financial incentive programs may implicate federal fraud and abuse laws designed to protect patients and the integrity of the Medicare program. In its January 2012 issue brief on programs tested by CMS, the Congressional Budget Office examined, in part, independent evaluations of four CMS programs where health care providers were given financial incentives to improve the quality and efficiency of care rather than payments based strictly on the volume and intensity of services delivered. The Congressional Budget Office concluded that results of these four programs were mixed. In one program where payments were bundled to cover all hospital and physician services for heart bypass surgeries, Medicare spending was reduced by 10 percent, and there were no apparent adverse effects on patients’ outcomes. The remaining three programs appeared to have produced little or no savings for Medicare. Of these three programs, two slightly improved quality of care based on the measures adopted for the program. The third program had little or no effect on Medicare spending or quality in its first year. Federal fraud and abuse laws designed to protect the integrity of services that are reimbursed under federal health care programs, including Medicare, regulate certain types of conduct, including financial relationships that may influence the delivery of care. Health systems must operate within the framework of federal fraud and abuse laws when designing and implementing financial incentive programs. Table 1, which follows the section on advisory opinion authority, summarizes the federal fraud and abuse laws and enforcement mechanisms.
The Stark law and its implementing regulations prohibit physicians from making referrals for certain “designated health services” paid for by Medicare, including hospital services, to entities with which the physicians (or their immediate family members) have a financial relationship, unless the arrangement satisfies a statutory or regulatory exception. Studies have found that these self-referrals encouraged overutilization and increased health costs. The Stark law also prohibits these entities that perform the designated health services from presenting, or causing to be presented, claims to Medicare or billing any individual, third-party payer, or other entity for these services. The Stark law includes a number of exceptions and authorizes the Secretary of HHS to create regulatory exceptions for financial relationships that do not pose a risk of patient or program abuse. The Stark law was enacted to prevent physicians from referring patients and ordering tests and services that may be unnecessary—and result in overutilization—for the purpose of financial gain. Financial incentive programs implicate the Stark law because they create a financial relationship between the entity paying the incentive and the physician who receives it, which could give the physician an incentive to refer patients to that entity. The Stark law prohibits physicians from making referrals to entities with which they or their immediate family members have a financial relationship, regardless of whether that relationship is intended to result in these referrals. In this regard, the Stark law is a strict liability statute. Those physicians or health systems that violate the Stark law by either making prohibited referrals or billing for the services for which the referral was made may be subject to a number of sanctions. Any amounts received for claims in violation of the Stark law must be refunded. 
Those who know or should know that they are submitting (or causing to be submitted) a claim in violation of the Stark law may be subject to civil monetary penalties of up to $15,000 for each service, an assessment of three times the amount claimed, and exclusion from federal health care programs. CMS is responsible for issuing regulations under the Stark law and collecting payments made in violation of the law. OIG is responsible for enforcing the Stark law’s civil monetary penalties. Civil monetary penalties of up to $100,000 may be imposed on those who enter into arrangements that they know or should know have the principal purpose of assuring referrals that would violate the Stark law if made directly. The anti-kickback statute makes it a criminal offense for anyone to knowingly and willfully solicit, receive, offer, or pay any remuneration to induce or reward referrals of items or services reimbursable under Medicare, subject to statutory exceptions and regulatory safe harbors promulgated by OIG. The law helps to limit the potential for money to influence providers’ health care decisions, and, in this respect, helps to prevent overutilization of services, the provision of unnecessary or substandard services, and the inappropriate steering of patients. A financial incentive program under which a hospital paid physicians who referred patients for admission would implicate the anti-kickback statute. Unlike the Stark law, the anti-kickback statute is intent-based; the action must be knowing and willful. Penalties under the anti-kickback statute include imprisonment for up to 5 years and criminal fines of up to $25,000. In addition, those individuals and entities violating the anti-kickback statute are subject to civil penalties of up to $50,000 per act, an assessment of three times the remuneration, and exclusion from participation in federal health care programs. OIG and DOJ are charged with enforcing the anti-kickback statute.
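As a rough illustration of the civil exposure figures above, the per-service and treble-assessment arithmetic can be sketched as follows. The dollar caps come from the statutory amounts summarized in this report; the service counts and claim amounts in the example are hypothetical, and this is an illustration, not legal advice.

```python
# Statutory maximums as summarized in the report (civil exposure only;
# refund obligations, exclusion, and criminal penalties are separate).
STARK_PER_SERVICE_MAX = 15_000   # up to $15,000 for each service
STARK_ASSESSMENT_MULT = 3        # three times the amount claimed
AKS_PER_ACT_MAX = 50_000         # up to $50,000 per act
AKS_ASSESSMENT_MULT = 3          # three times the remuneration

def stark_max_civil_exposure(num_services: int, amount_claimed: float) -> float:
    """Maximum Stark civil monetary exposure for claims in violation."""
    return num_services * STARK_PER_SERVICE_MAX + STARK_ASSESSMENT_MULT * amount_claimed

def aks_max_civil_exposure(num_acts: int, remuneration: float) -> float:
    """Maximum anti-kickback civil exposure (criminal fines not included)."""
    return num_acts * AKS_PER_ACT_MAX + AKS_ASSESSMENT_MULT * remuneration

# Hypothetical example: 10 tainted services claiming $200,000 in total.
print(stark_max_civil_exposure(10, 200_000))  # 150,000 + 600,000 = 750,000
```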
OIG is responsible for issuing regulatory safe harbors under the anti-kickback statute and, as under the Stark law, has administrative enforcement responsibilities. DOJ prosecutes cases under the anti-kickback statute. In addition to providing for the imposition of civil monetary penalties for certain enumerated activities, such as knowingly presenting a Medicare claim that is part of a pattern of claims for items or services that a person knows are not medically necessary, the CMP law provides penalties for hospitals that knowingly make an indirect or direct payment to a physician as an inducement to reduce or limit services to hospital patients, and for physicians who accept such payments. The statute does not contain exceptions for this prohibition and does not authorize OIG to establish exceptions by regulation. Like the Stark law and the anti-kickback statute, the CMP law reflects congressional concern that incentive payments may create a conflict of interest that may limit the ability of the physician to exercise independent professional judgment in the best interest of the patient. Financial incentive programs that reward physicians with a share of hospital cost-savings realized through a reduction or limitation of items and services implicate the CMP law. In addition, payments from a hospital to a physician designed to reward quality that lead to a reduction or limitation of services furnished to hospital patients also implicate the CMP law. Hospitals or physicians who violate the CMP law are subject to civil penalties of up to $2,000 per patient covered by the payments, and exclusion from participation in federal health care programs. OIG is responsible for enforcing the CMP law. The False Claims Act (FCA) serves as another enforcement mechanism for federal fraud and abuse laws. 
Claims that are submitted in violation of the Stark law or the anti-kickback statute may also be considered false claims and, as a result, create additional liability under the FCA. The FCA prohibits certain actions, including the knowing presentation of a false claim for payment by the federal government. For example, a financial incentive program under which a hospital submitted a claim to Medicare for a service provided by a physician when the physician and hospital had a financial relationship in violation of the Stark law, would implicate the FCA if the requisite intent were present. Those who violate the FCA are liable for a civil penalty of not less than $5,000 and not more than $10,000, as adjusted by inflation, plus three times the amount of damages the government sustains, though the court may reduce damages. 31 U.S.C. § 3729(a)(1)-(2). Violators are also liable for the cost of the action. 31 U.S.C. § 3729(a)(3). Private individuals may also bring qui tam actions on the government’s behalf alleging the submission of false claims, and these “whistleblowers” can receive between 15 and 30 percent of a monetary settlement or recovery plus expenses and attorneys’ fees and costs. In response to requests for specific guidance from providers on whether an existing or proposed financial arrangement, including a financial incentive program, violates the fraud and abuse laws, CMS and OIG have the statutory authority to issue advisory opinions. CMS is required to issue advisory opinions on the Stark law, and OIG is required to issue advisory opinions on the CMP law and the anti-kickback statute, among other matters. Advisory opinions are issued only in response to a request regarding an existing or proposed arrangement to which the requester is a party. Advisory opinions are binding on the Secretary of HHS and the individual or entity requesting the opinion; no other parties can rely on an advisory opinion.
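The FCA exposure described earlier in this section, a per-claim civil penalty of $5,000 to $10,000 (before inflation adjustment) plus treble damages, with a whistleblower share of 15 to 30 percent of any recovery, can be sketched as arithmetic. The claim count and damages in the example are hypothetical.

```python
# Hypothetical FCA exposure sketch using the figures cited in the report.
def fca_exposure(num_claims: int, per_claim_penalty: float, damages: float) -> float:
    """Per-claim penalty plus treble damages (court may reduce damages)."""
    assert 5_000 <= per_claim_penalty <= 10_000, "statutory range, pre-inflation"
    return num_claims * per_claim_penalty + 3 * damages

def whistleblower_share(recovery: float, share: float) -> float:
    """Qui tam relator share, 15-30 percent of the settlement or recovery."""
    assert 0.15 <= share <= 0.30, "share outside the 15-30 percent range"
    return recovery * share

total = fca_exposure(num_claims=100, per_claim_penalty=5_000, damages=1_000_000)
print(total)                             # 100*5,000 + 3*1,000,000 = 3,500,000
print(whistleblower_share(total, 0.15))  # 15 percent share: 525,000.0
```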
The time between when CMS and OIG receive an advisory opinion request and when the advisory opinion is released can depend on, for example, the information contained in the request and the amount of time needed for the agencies to obtain additional information from the requester. Requesters must submit certified written requests that include information specified in regulations. If the initial request for an advisory opinion does not contain all the information the agencies need, the agencies may request whatever additional information is necessary to respond to the request. When requesting an advisory opinion, requesters must agree to pay all costs the agencies incur in responding to the request. Each agency then has 10 days to notify the requester whether the request has been formally accepted or declined or whether additional information is needed. Once a request has been accepted, CMS has 90 days and OIG has 60 days to respond, with certain exceptions. Certain financial incentive programs are permitted within the framework of federal fraud and abuse laws through various Stark law and anti-kickback statute exceptions and safe harbors, respectively, or because they do not implicate one or more of the laws in the first instance. OIG has interpreted the CMP law to prohibit hospitals from rewarding the reduction or limitation of services, but permits certain financial incentive programs through its advisory opinion process. However, stakeholders we spoke with reported that the laws, regulations, and agency guidance have created challenges for financial incentive program design and implementation, and some health systems have terminated or refrained from implementing these programs. Neither OIG nor DOJ took any enforcement actions against financial incentive programs in fiscal years 2005 through 2010. CMS and OIG have acknowledged new exceptions and safe harbors may be necessary to facilitate financial incentive programs.
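The advisory opinion timeline above (10 days to accept or decline, then 90 days for CMS or 60 days for OIG to respond) can be sketched as simple date arithmetic. This assumes calendar days from the trigger date; the regulations' exceptions and tolling rules are ignored, and the dates in the example are made up.

```python
# Sketch of the advisory opinion deadlines described in the report,
# assuming calendar days and ignoring regulatory exceptions.
from datetime import date, timedelta

RESPONSE_DAYS = {"CMS": 90, "OIG": 60}

def notify_deadline(received: date) -> date:
    """Deadline to notify the requester of acceptance or declination."""
    return received + timedelta(days=10)

def response_deadline(accepted: date, agency: str) -> date:
    """Deadline for the agency to respond after accepting a request."""
    return accepted + timedelta(days=RESPONSE_DAYS[agency])

accepted = date(2012, 3, 1)                # hypothetical acceptance date
print(response_deadline(accepted, "OIG"))  # 60 days later: 2012-04-30
```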
CMS has acknowledged that existing Stark law exceptions may not be sufficiently flexible to encourage a wider array of nonabusive and beneficial incentive programs that both promote quality and achieve cost savings. CMS can create additional exceptions as long as the exception does not pose a risk of program or patient abuse. According to CMS officials, this “no risk” requirement is high and limits their ability to create new regulatory exceptions to the Stark law. In 2008 CMS attempted to use its authority to propose a new exception covering financial incentive programs. However, the “no risk” requirement necessitated a narrow exception with many structural safeguards in light of the risk that financial incentive programs could be used to disguise payments for referrals or adversely affect patient care. In its proposed rule, CMS noted that the design of the proposed exception created a challenge in providing broad flexibility for innovative, effective programs while at the same time protecting the Medicare program and patients from abuses. The agency solicited comments, and many of the comments it received criticized the number and complexity of safeguards needed to achieve the “no risk” standard. To date, the agency has taken no further action to finalize this regulatory exception, and CMS officials told us the agency has no plans to do so in the near future. Similarly, OIG officials told us that they recognize that industry innovation may be significant enough to warrant new anti-kickback safe harbors, and the agency annually solicits input from providers on potential safe harbors, as required by statute. Financial incentive programs could be structured so that they do not implicate one or more of these laws. For example, HHS stated that a program limited to commercial patients might not implicate any of the laws. Officials at one health system told us, for example, that they implemented a pay-for-performance program structured to fit within a Stark law exception.
Employed physicians are rewarded for meeting certain clinical outcome quality measures, such as diabetes glucose measures and pediatric immunizations, as well as patient satisfaction measures. The program only includes the hospital’s employed physicians, who constitute less than 10 percent of the physicians who provide services at the hospital. As a result, this financial incentive program does not align incentives between the health system and independent physicians who have privileges at the hospital. To incentivize quality improvement on a broader scale, hospital officials told us they were able to use another Stark law exception to implement a separate financial incentive program to include independent physicians. Specifically, because the health system had a health plan component, the health system was able to use the physician incentive plan exception in creating a financial incentive program for independent physicians to reward them for meeting a separate set of quality measures—the Healthcare Effectiveness Data and Information Set (HEDIS). The physician incentive plan exception permits financial incentive plans that are administered and paid through health plans under certain conditions. Hospitals or health systems without a health plan component would have to design a financial incentive program to fit into other exceptions to include independent physicians. Additionally, operating multiple financial incentive programs covering different populations of physicians may create potential inefficiencies through redundancy or conflicting program objectives. In addition, most of the legal experts we spoke with told us that it is difficult for health systems to navigate the Stark law, and one legal expert told us that as a result health systems have terminated existing financial incentive programs or refrained from starting new programs.
Some legal experts also told us that the requirements for complying with the Stark exceptions are difficult to apply when crafting financial incentive programs. In particular, they told us it is challenging to establish whether incentive payments meet the Stark fair market value exception, which in part requires that compensation be consistent with fair market value of services provided. One legal expert we spoke with noted that the Stark law’s fair market value exception potentially applies to payments from hospitals to physicians. For salary, the fair market value exception can be satisfied by using published surveys of wages to determine the fair market value of services provided. However, according to this legal expert, the exception becomes more difficult to apply when trying to determine the fair market value in connection with incentive payments, separate from compensation, for meeting performance goals. Specifically, some legal experts told us that the exception is unclear about how to measure the fair market value of services when those services involve meeting a clinically based outcome measure for a financial incentive program to improve quality. Additionally, it would be difficult to calculate the value of services not provided as a result of the physician providing higher quality care leading to better health outcomes. Some legal experts also told us that many of the Stark exceptions on which they rely require that compensation, including incentive payments from hospitals to physicians, not reflect the volume or value of referrals made by the physician. To satisfy this requirement, financial incentive programs may be structured so that incentive payments are distributed to all participating physicians without being directly related to any individual physician’s compliance with quality improvement criteria. Therefore, all participating physicians would receive the same payment without necessarily contributing the same level of effort.
As a result, according to some of the legal experts we spoke with, an underperforming physician would not have an incentive to change his or her practices to improve the quality of care. (See, e.g., 42 U.S.C. §§ 1395nn(e)(2)(ii), 1395nn(e)(3)(A)(v), 1395nn(e)(5)(B).) Financial incentive programs limited to commercial patients may also implicate federal fraud and abuse laws. Some legal experts and health systems we spoke with told us it is difficult to separate commercial patients from Medicare patients for the purposes of financial incentive programs. Financial incentive programs limited to commercial patient populations may “spill over” to Medicare patients. For example, a financial incentive program that rewards quality improvement for commercial patient outcomes may influence how a participating physician treats Medicare patients. To protect themselves from Stark law and anti-kickback statute violations, health systems may structure their programs to fit into an exception or safe harbor in case a Medicare patient is inadvertently included in the program. For example, officials from a hospital system in a major urban area in the Midwest told us the hospital entered into a financial incentive program to share savings with a commercial payer for its commercial patient population only. These officials told us their program only includes employed physicians to further protect the providers from Stark law or anti-kickback statute violations if a Medicare patient is inadvertently included in the program. Financial incentive programs limited to commercial patients also might include Medicare patients in other ways. For example, a commercial insurer that used a hospital’s achievement of quality benchmarks could include the hospital’s Medicare patients in determining whether the benchmarks are met.
In 2008, OIG issued a favorable advisory opinion in response to a request from a hospital seeking to implement a financial incentive program to reward physicians for meeting quality targets for commercial patients. The requester-hospital was participating in a pay-for-performance program with a private insurer, under which the hospital would be rewarded with a bonus payment for achieving quality targets based on health outcomes of all patients, including Medicare patients. The hospital stated that it needed to implement a financial incentive program with its physicians in order to achieve those quality targets and would reward physicians with a share of the bonus payment received from the private insurer. OIG determined that the program implicated the anti-kickback statute because the program relied on all hospital patient data, which included data for Medicare patients, instead of using only commercial patient data, to determine incentive payments for physicians. In its advisory opinion on the matter, however, OIG elected not to impose sanctions for this program. OIG officials told us that they did not take any Stark law or anti-kickback statute enforcement actions on the basis of providers’ implementation of pay-for-performance programs or gainsharing arrangements from fiscal years 2005 through 2010. Additionally, DOJ officials were unable to identify any DOJ FCA settlements involving the Stark law or anti-kickback statute that were based on the implementation of such programs during the same time period. However, some legal experts we spoke with told us that although there have not been any FCA cases or settlements, the threat of being the first case has created a chilling effect for providers. Some legal experts told us that as a result, their clients were conservative when implementing such programs.
In addition to the Stark law and anti-kickback statute, hospitals must comply with the CMP law, which OIG interpreted in a 1999 Special Advisory Bulletin (SAB) as prohibiting payments from hospitals to physicians to induce a reduction or limitation in Medicare services for hospital patients, even if the services are not medically necessary. A violation of the CMP law may result if the hospital knows that the payment may influence the physician to reduce or limit services, even if the payment is not tied to a specific patient or to an actual diminution in care. Any hospital financial incentive program that encourages physicians through payments, indirectly or directly, to reduce or limit clinical services violates the CMP law. Unlike the Stark law and anti-kickback statute, the CMP law does not have any statutory exceptions, nor does it give OIG the authority to create regulatory exceptions. However, OIG has issued advisory opinions effectively permitting certain financial incentive programs that would otherwise violate the CMP law. OIG considers the CMP law a reflection of congressional concern that payments from hospitals to physicians may result in stinting on care. In its SAB, OIG stated that the CMP law is intentionally broad, and noted in the SAB that the plain language of the statute does not limit its application to those services that are “medically necessary.” According to OIG officials, historically the CMP law developed as a patient quality of care law, not just a restriction on financial relationships. In addition, the SAB indicated that OIG’s interpretation of the CMP law was based, in part, on Congress’s inclusion of “medically necessary” in the law for MCOs. According to OIG officials, OIG interpreted the enactment of a separate law for MCOs to reflect the difference between MCOs and hospitals. They stated that MCOs, unlike hospitals, can more readily identify the patients participating in the network.
OIG further reasoned that patients who enroll in an MCO understand that their physicians will have an economic incentive with respect to managing their care, and in return, patients share in any savings through increased benefits, such as reduced copayments and the addition of outpatient prescription drug coverage. By contrast, in OIG’s view, patients in traditional Medicare incur substantial additional financial obligations in exchange for access to physicians of their choice. According to OIG, hospitals may align incentives with physicians to achieve cost savings through means that do not violate the CMP law. For example, depending on the circumstances, an arrangement where the hospital pays the physicians a fixed fee that is fair market value for specific services rendered would compensate the physicians for their effort and not for a reduction or limitation in services. Achieving savings through actions that do not adversely affect the quality of patient care may require substantial effort on the part of the physicians. Depending on the circumstances, if the financial incentive program is based on the physician’s efforts rather than a percentage of cost savings, the program may not violate the CMP law. According to OIG officials, even if the program leads to a reduction or limitation of services, as long as the payment is not for the purpose of reducing services, the program would not violate the CMP law. For example, a hospital could pay a physician to complete his or her rounds by a specific time, which may result in patients being evaluated for discharge earlier. The payment is not tied to a reduction or limitation of services, but if patients are not hospitalized longer than necessary, this arrangement makes it possible for the hospital to be efficient and reduce costs. 
One legal expert and an industry group stakeholder we spoke with consider OIG’s interpretation of the CMP law overly broad—prohibiting payment from hospitals to physicians to induce the reduction or limitation of any service, regardless of medical necessity. In February 2009, an industry group stakeholder wrote to OIG contending that the agency should interpret the CMP law in the context of Medicare’s requirements that only medically necessary services are covered by the program. Since Medicare only covers medically necessary services, and the CMP law prohibits reduction or limitation of Medicare services, according to this stakeholder, the CMP law should be interpreted as prohibiting a reduction or limitation of medically necessary services. Some legal experts we spoke with and two industry group stakeholders consider the CMP law a major hurdle to the development and implementation of financial incentive programs that allow the hospital to reward physicians for lowering hospital costs and improving quality by reducing medically unnecessary services. Similarly, an industry group stakeholder, in a September 2010 statement to OIG, claimed that the CMP law constrains the development of financial incentive programs that would align hospital and physician incentives to provide more cost-effective care by, for example, encouraging more careful choice among available generic and brand name drugs or use of outpatient rather than inpatient services. This stakeholder noted that physicians are concerned that participation in such gainsharing arrangements exposes them to liability under the CMP law. Another industry group stakeholder, in a May 2008 statement, asserted that the CMP law has dissuaded providers from pursuing financial incentive programs using specific practice protocols, even those based on clinical evidence and recognized as best practices, because of provider concern that OIG might find that the program provided an incentive to reduce or limit services.
Some legal experts told us that their health system clients have implemented financial incentive programs to reward quality, and they also include efficiency measures that could reduce or limit services but do not tie incentive payments to these measures to avoid implicating the CMP law. Although physicians are not rewarded for meeting these efficiency measures, their performance in meeting these benchmarks may be monitored and information may be shared with the physician as feedback, possibly providing a nonfinancial incentive to improve efficiency. For example, one legal expert described an arrangement between a hospital and its independent physicians to reward quality. The original goal of the program was to reduce the length of stay for patients. In addition to quality measures such as adhering to clinical protocols and meeting patient satisfaction benchmarks, the hospital wanted to include efficiency measures, such as standards for inpatient admission that could have limited admissions, but the physicians’ attorney was concerned that the program would violate the CMP law. Specifically, the attorney was concerned that including standards for inpatient admissions could lead to a reduction of services if, for example, a patient who did not meet these standards was denied admission to the hospital even if admission was not necessary. In response to these concerns, the hospital tied incentive payments only to quality measures. Although the program retained the efficiency measures, such as medically inappropriate days, these measures were tied to widely used clinical standards, no payment was tied to them, and they were used only to collect information on physician performance. In its 1999 SAB, OIG interpreted the CMP law to prohibit gainsharing arrangements in response to hospitals’ implementation of “black box” gainsharing arrangements in the 1990s.
In OIG’s view, those gainsharing arrangements, in which physicians were paid for overall cost savings without the hospitals determining the specific actions the physicians took to generate the savings, posed a high risk of abuse. According to OIG, the black box gainsharing arrangements provided little accountability, insufficient safeguards against improper referral payments, and lacked objective performance measures to ensure that quality of care was not adversely affected. In various documents addressing the matter, OIG has noted its concern with the potential effect gainsharing has on the quality of care provided to Medicare patients. Specifically, OIG’s concerns include stinting on patient care, “cherry picking” healthy patients and steering sicker and more costly patients to hospitals that do not offer such arrangements, payments in exchange for patient referrals, and unfair competition among hospitals offering cost-sharing programs to foster physician loyalty and to attract more referrals. OIG has recognized, however, that certain gainsharing arrangements may reduce costs and improve quality without compromising care or rewarding referrals. Specifically, OIG has recognized that certain gainsharing arrangements, while potentially violating the CMP law and the anti-kickback statute, present a minimal risk of fraud and abuse that these laws were intended to address. On this basis, OIG has indicated that it would not subject specific arrangements approved in advisory opinions to sanctions. Through its advisory opinion process, OIG has evaluated certain gainsharing arrangements that could implicate the CMP law and anti-kickback statute. Since 2001, OIG has issued 14 advisory opinions on specific gainsharing arrangements. In these opinions, OIG concluded that the arrangements presented a low risk of abuse and that they would not, therefore, be subject to sanction.
While OIG advisory opinions provide important guidance to providers about what may or may not be sanctioned by OIG, the opinions only address the anti-kickback statute and the CMP law. Because CMS, not OIG, has responsibility for interpreting the Stark law, OIG gainsharing opinions do not address the legality of these arrangements under the Stark law. CMS has not received any requests to issue advisory opinions on gainsharing arrangements, and therefore has not done so. In evaluating the risks posed by these gainsharing arrangements, OIG looked for measures that promote accountability, provide adequate quality controls, and protect against payments for referrals. The cost-saving measures included in the approved gainsharing arrangements can generally be categorized as product standardization measures, product substitution, opening packaged items only when needed, or limiting the use of certain supplies or devices. The approved arrangements included features that, when taken together, OIG determined provided sufficient safeguards to reduce the risk of program and patient abuse, so that OIG would not seek sanctions against the health system for violation of the CMP law.
These safeguards include:
- specific cost-saving actions and resulting savings that are clearly and separately identified;
- credible medical support that implementation of the arrangement would not adversely affect patient care;
- payments that are based on all procedures and do not reflect the differences among individual patients’ insurance coverage;
- protection against inappropriate reductions in services by utilizing objective historical and clinical measures to establish baseline thresholds below which no savings accrue to the physicians;
- protections in the product standardization portion of the arrangement to further protect against inappropriate reductions in services by ensuring that individual physicians will still have available the same selection of devices after implementation of the arrangement as before;
- written disclosure provided to patients whose care may be affected by the arrangement and an opportunity for patients to review the cost savings recommendations prior to admission to the hospital;
- financial incentives that are reasonably limited in duration and amount; and
- profits that are distributed to the physicians on a per capita basis, mitigating any incentive for an individual physician to generate disproportionate cost savings.

According to OIG, improperly designed or implemented arrangements could be vehicles to disguise payments for referrals. OIG found that the specific gainsharing arrangements evaluated in the advisory opinions could violate the anti-kickback statute, but the agency stated it would not impose sanctions for those arrangements because they included safeguards that reduced the likelihood that the arrangement would be used to attract referring physicians or to increase referrals from existing physicians. Due to the circumstances of the arrangements, as well as the included safeguards, OIG determined that the arrangements presented a low risk of fraud or abuse under the anti-kickback statute.
Although the advisory opinions have focused on specific service lines, such as cardiac and orthopedic surgery, OIG officials stated that they are willing to evaluate gainsharing arrangements for other service lines. However, to date, OIG has not been asked to do so. In February 2009, one industry group stakeholder asked OIG in writing to withdraw the agency’s SAB that interpreted the CMP law as prohibiting gainsharing. The industry group asserted that the agency’s subsequent advisory opinions permitting implementation of certain gainsharing arrangements represent an “implicit acknowledgment that the experiences and context that gave rise to the 1999 Bulletin have changed significantly.” Specifically, according to this stakeholder, tools, such as the proliferation of quality measures, are now available to prevent financial incentives from causing harm to patients. However, according to OIG officials, although the health care delivery environment has changed since the CMP law was enacted, the payment systems that led to the enactment of the CMP law are still in use. Legal experts and stakeholders told us that multiple challenges are associated with implementing gainsharing arrangements since OIG issued its SAB, despite the availability of OIG’s advisory opinion process. Some legal experts told us they were reluctant to use the advisory opinion process because it is expensive and time-consuming. Some experts noted that, in their experience, legal expenses incurred in obtaining an advisory opinion ranged from $15,000 to over $50,000 depending on the complexity of the arrangement, in addition to other costs associated with developing arrangements. The financial incentive program expert we spoke with reported that it took over a year of review before OIG issued its first advisory opinion approving a novel gainsharing program. 
Some industry group stakeholders said that because the advisory opinions are only applicable to the requesting health system, other health systems cannot rely on the advisory opinions for assurance that OIG will not enforce the CMP law, even though OIG officials told us the agency did not take any enforcement actions against financial incentive programs for fiscal years 2005 through 2010. Health systems implementing gainsharing arrangements have structured their arrangements to be identical to those already approved, thereby lowering but not eliminating the overall risk that the arrangement would result in sanction for violating the CMP law. For example, we spoke with officials from a health system in the Northeast that is implementing a gainsharing arrangement with its orthopedics division. According to these officials, the health system is relying exclusively on the elements of previous OIG gainsharing advisory opinions to define the parameters of its gainsharing arrangement. Officials told us that they will not be pursuing areas for savings that OIG has not previously approved. However, even when implementing a gainsharing arrangement that has already been approved, legal experts told us there are challenges. Some legal experts told us that gainsharing arrangements permissible under OIG’s advisory opinions are narrow, and the approved gainsharing arrangements focus on certain procedural areas and include specific measures, such as limiting the use of certain surgical supplies and substitution of less costly items for those items currently used by the physicians. In addition, financial incentives to physicians must be distributed equally per capita regardless of the level of effort on the part of the physician. HHS has permitted implementation of certain financial incentive programs that otherwise might not be permitted under federal fraud and abuse laws, but it has required safeguards to protect program and patient integrity. 
CMS has conducted these programs through authorized demonstration projects, the Medicare Shared Savings Program, and the Innovation Center. These demonstration projects and programs are designed for specific types of providers and health systems, and some health systems may not be willing or eligible to participate. CMS has conducted demonstration projects to test financial incentive programs that include safeguards to protect program and patient integrity. For example, CMS, as authorized by the Deficit Reduction Act of 2005, designed the Medicare Hospital Gainsharing Demonstration to determine whether gainsharing arrangements could align incentives between hospitals and physicians to improve the quality and efficiency of care as well as hospital operation and financial performance. The demonstration project involved arrangements between hospitals and physicians under which the hospitals made gainsharing payments to physicians that were a share of the savings incurred directly as a result of collaborative efforts to improve overall quality and efficiency. CMS officials told us that this demonstration incorporated safeguards to protect program and patient integrity. Specifically, these safeguards included the requirement that providers meet quality thresholds by linking incentive payments to quality measures; that the financial incentive payment be limited to 25 percent of the amount normally paid for similar cases; and that payments not be based on the volume or value of referrals. CMS monitored physician referral and admission patterns throughout the demonstration to ensure that care provided to patients was not compromised. Although CMS has not completed its evaluation of this demonstration, officials told us they had not observed participants engaging in fraudulent behavior or become aware of harmful effects on patients. 
According to CMS officials, CMS has incorporated safeguards from previous demonstrations and MCOs in its rule for the Medicare Shared Savings Program, which allows ACOs to participate in a shared savings arrangement with the Medicare program. The Medicare Shared Savings Program is designed to pay providers on a fee-for-service basis, and will, at least in theory, help align incentives by sharing potential savings with providers that agree to meet quality and efficiency standards. According to CMS, the program incorporates the following broad categories of safeguards: quality measures; legal structure and governance requirements; patient-centeredness; monitoring; disclosure and transparency requirements; and program integrity screens. These safeguards are intended to protect patient and program integrity by ensuring that patient needs and experiences inform the delivery of care and ACO governance. An ACO’s continued participation in the Medicare Shared Savings Program is contingent on its performance. CMS has the authority to terminate an ACO’s participation in the program based on the agency’s findings. CMS and OIG have issued an interim final rule with comment period that establishes waivers of the fraud and abuse laws for the Medicare Shared Savings Program, including, among others, a shared savings distribution waiver. This waiver applies to distribution of shared savings from the ACO and within the ACO to ACO participants or ACO providers or suppliers. It also applies to the distribution of shared savings to providers outside the ACO but only for activities that are reasonably related to the purposes of the Medicare Shared Savings Program. 
In both cases, among other requirements, CMS and OIG require that the ACO does not limit or reduce medically necessary services. The waiver covers the distribution of savings accrued during the period in which the ACO is participating in the Medicare Shared Savings Program, even if those savings are distributed after this period. According to CMS and OIG, the waiver for the distribution of shared savings within the ACO is premised, in part, on recognition that an award of shared savings necessarily reflects the collective achievement by the ACO and its constituent parts of the quality, efficiency, and cost reduction goals of the Medicare Shared Savings Program. These goals are consistent with interests protected by the fraud and abuse laws. See Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications, 76 Fed. Reg. 29,249 (May 20, 2011). CMS stated it will monitor the models it tests through multiple mechanisms, including routinely analyzing data on service utilization, measuring beneficiary experience of care through surveys, and assessing beneficiary complaints. In the Pioneer ACO Model, CMS stated it will determine whether there are systematic differences in health status or other characteristics between patients who remain aligned with a given ACO over the life of the Pioneer ACO Model, and those who do not. ACOs that participate in the Pioneer ACO Model will also conduct surveys of their aligned beneficiaries on an annual basis, and according to CMS, the agency may investigate the practices of ACOs that generate beneficiary complaints. CMS stated it will also publicly report the performance of ACOs on quality metrics, including patient experience ratings, on its website. CMS and OIG recognize that properly structured financial incentive programs have the potential to improve quality and reduce costs but that improperly structured programs can disguise payments for referrals or adversely affect patient care.
The federal fraud and abuse laws discussed in this report apply variously to financial relationships among hospitals, physicians, and health plans, among other entities. Because financial incentive programs are a type of financial relationship, health systems must take these laws into account when structuring them. Health systems can implement certain types of financial incentive programs through, for example, various Stark law exceptions, anti-kickback safe harbors, or the agencies’ advisory opinion processes, although hospitals may not reward the limitation or reduction of services—even those services that are not medically necessary—without first obtaining OIG approval. Although health systems can implement certain types of financial incentive programs that may result in better patient health outcomes and lower health care costs, the challenges of implementing these programs within the current legal framework may, for some health systems, outweigh the potential benefits of doing so. As the stakeholders we spoke with reported, there are significant challenges to designing and implementing financial incentive programs through the available options. There are no exceptions and safe harbors specifically for financial incentive programs, and the Stark law’s “no risk” requirement for new exceptions makes it difficult for CMS to craft an exception that allows for innovative, effective programs while ensuring that the Medicare program and patients face no risk from abuses. As such, the constraints of existing exceptions and safe harbors make it difficult to design and implement a comprehensive program for all participating physicians and patient populations.
Furthermore, for some health systems, OIG’s interpretation of the CMP law constrains the development of financial incentive programs that would align hospital and physician incentives to provide more cost-effective care, and hospitals may be reluctant to pursue an advisory opinion because of the time, expense, and uncertainty involved. As a result, health systems are more likely to implement only those programs that mirror already approved programs or none at all. CMS’s various demonstrations, the Medicare Shared Savings Program, and programs implemented by the Innovation Center provide other opportunities for some health systems to implement these programs without the associated challenges of conforming to some of the federal fraud and abuse laws. The demonstrations, however, are time-limited and not all health systems are eligible or willing to participate. Under the Medicare Shared Savings Program, which is a permanent program, CMS and OIG will waive fraud and abuse laws for financial incentive programs under certain circumstances, but there may be limits on health systems’ ability to participate. Our work suggests that stakeholders’ concerns may hinder implementation of financial incentive programs to improve quality and efficiency on a broad scale. Different stakeholders—government agencies and health care providers—will likely continue to have differing perspectives about the optimal balance between innovative approaches to improve quality and lower costs and retaining appropriate patient and program safeguards. HHS provided written comments on a draft of this report, which are reprinted in appendix I. HHS and DOJ provided technical comments which we incorporated as appropriate. In its written comments, HHS sought to clarify the Department’s position on CMS’s use of its authorities to permit certain financial incentive programs—using regulatory exceptions and waivers—that the Department did not believe we had clearly described in the draft. 
Specifically, we had attributed the narrowness of the proposed 2008 Stark law exception to agency concern that financial incentive programs could be used to disguise payments for referrals or adversely affect patient care, as the agency had noted in the proposed rule. HHS clarified that the SSA requirement that Stark law exceptions pose “no risk of patient or program abuse” is a high standard that prevents the agency from balancing flexibility with beneficiary protection in creating exceptions. HHS commented that the narrowness of the proposed 2008 Stark law exception was dictated by this strict legal standard. HHS also commented that CMS has much greater authority in balancing flexibility with beneficiary protection under its waiver authority, and crafted much broader waivers when authorized to do so by the statutory authorities of the Medicare Shared Savings Program and Innovation Center. We modified the draft to reflect the agency’s position on this issue. In addition, HHS commented that our draft focused on the shared savings-only waiver, rather than the full scope of waivers that CMS and OIG determined were necessary for the success of the program. We highlighted the shared savings distribution waiver as an example of a waiver of the fraud and abuse laws that ACOs can use when distributing savings to providers and suppliers, and included a description of the additional waivers in a footnote, which we determined was sufficient detail for this report. HHS also commented that our discussion of the proposed 2008 Stark law exception does not include a discussion of the Medicare Shared Savings Program or Innovation Center waivers, which cover substantially the same gainsharing arrangements addressed in the proposed exception. 
We added a footnote addressing this issue but maintain that organizations that do not have programs under either the Medicare Shared Savings Program or Innovation Center are still required to comply with the Stark Law and its existing exceptions, which our stakeholders noted was challenging. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of the report. At that time we will send copies of the report to the Secretary of Health and Human Services and the U.S. Attorney General. This report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Christine Brudevold, Assistant Director; Helen Desaulniers, Assistant General Counsel; Jasleen Modi; Elizabeth T. Morrison; Sarah Resavy; Lillian Shields; Hemi Tewarson; and Jennifer Whitworth made key contributions to this report. Medicare Physician Feedback Program: CMS Faces Challenges with Methodology and Distribution of Physician Reports. GAO-11-720. Washington, D.C.: August 12, 2011. Value in Health Care: Key Information for Policymakers to Assess Efforts to Improve Quality While Reducing Costs. GAO-11-445. Washington, D.C.: July 26, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Medicare: Private Sector Initiatives to Bundle Hospital and Physician Payments for an Episode of Care. GAO-11-126R. Washington, D.C.: January 31, 2011. Medicare: Per Capita Method Can Be Used to Profile Physicians and Provide Feedback on Resource Use. GAO-09-802. Washington, D.C.: September 25, 2009. 
Medicare Physician Payment: Care Coordination Programs Used in Demonstration Show Promise, but Wider Use of Payment Approach May Be Limited. GAO-08-65. Washington, D.C.: February 15, 2008. Medicare: Focus on Physician Practice Patterns Can Lead to Greater Program Efficiency. GAO-07-307. Washington, D.C.: April 30, 2007. Medicare: Advisory Opinions as a Means of Clarifying Program Requirements. GAO-05-129. Washington, D.C.: December 8, 2004. Medicare: Referrals to Physician-Owned Imaging Facilities Warrant HCFA’s Scrutiny. GAO/HEHS-95-2. Washington, D.C.: October 20, 1994. Medicare: Physician Incentive Payments by Prepaid Health Plans Could Lower Quality of Care. GAO/HRD-89-29. Washington, D.C.: December 12, 1988. Medicare: Physician Incentive Payments by Hospitals Could Lead to Abuse. GAO/HRD-86-103. Washington, D.C.: July 22, 1986.

GAO has long expressed concern that increases in Medicare spending are unsustainable and do not necessarily enhance health care quality. Traditional Medicare provider payment systems reward the volume of services instead of the quality or efficiency of care by paying physicians for each service provided. Some health systems, which can be hospitals, physicians, health plans, or a combination, use financial incentive programs to reward physicians for improving quality and efficiency with the goal of better outcomes for patients and savings for hospitals and payers. Federal laws that protect patients and the integrity of federal programs, including Medicare, limit health systems’ ability to implement financial incentive programs. These fraud and abuse laws include the physician self-referral law, or Stark law; the anti-kickback statute; and the Civil Monetary Penalties (CMP) law. The Centers for Medicare & Medicaid Services (CMS) and the Office of Inspector General (OIG) within the Department of Health and Human Services (HHS), and the Department of Justice oversee and enforce these laws.
GAO examined how federal fraud and abuse laws affect the implementation of financial incentive programs, stakeholders’ perspectives on their ability to implement these programs, and alternative approaches through which HHS has approved implementation of these programs. GAO analyzed relevant laws and agency guidance and documentation; and interviewed agency officials, legal experts, and provider stakeholders. Certain financial incentive programs are permitted within the framework of federal fraud and abuse laws, but stakeholders GAO spoke with reported that the laws, regulations, and agency guidance have created challenges for program design and implementation. The Stark law and anti-kickback statute, which restrict financial relationships among providers, have statutory and regulatory exceptions and safe harbors, respectively, that permit financial incentive programs that meet specific criteria. However, there are no exceptions or safe harbors specifically for financial incentive programs intended to improve quality and efficiency, and legal experts reported that the constraints of existing exceptions and safe harbors make it difficult to design and implement a comprehensive program for all participating physicians and patient populations. The CMP law prohibits hospitals from paying physicians to reduce or limit services, and OIG has interpreted the law to apply to the reduction or limitation of any services, whether or not those services are medically necessary. The CMP law does not include statutory exceptions to this prohibition, and OIG does not have the authority to create exceptions through regulation. Through its advisory opinion process, OIG, however, has indicated that it would not impose sanctions for specific financial incentive programs that otherwise violated the CMP law but presented a low risk of fraud and abuse. Legal experts stated that innovative arrangements are difficult to structure and that the advisory opinion process is burdensome.
Through alternative approaches, HHS has approved implementation of otherwise prohibited financial incentive programs that incorporate safeguards, under its statutory authority to conduct demonstrations and other initiatives. Specifically, CMS has conducted demonstration projects to test financial incentive programs that reward quality and efficiency. These demonstration projects included safeguards, such as linking payments to quality measures, to protect program and patient integrity. CMS has incorporated safeguards into the Medicare Shared Savings Program, which allows eligible providers to participate as accountable care organizations to share savings with the Medicare program. As specifically authorized for the Medicare Shared Savings Program, CMS and OIG will waive fraud and abuse laws for, among other things, the distribution of shared savings in the Medicare Shared Savings Program, subject to certain requirements. The Center for Medicare and Medicaid Innovation within CMS is also implementing programs to test financial incentives. GAO’s work suggests that stakeholders’ concerns may hinder implementation of financial incentive programs to improve quality and efficiency on a broad scale. Stakeholders—government agencies and health care providers—likely will continue to have different perspectives about the optimal balance between innovative approaches to improve quality and lower costs and retaining appropriate patient safeguards. HHS reviewed a draft of this report and in its written comments, clarified its position on CMS’s authorities to create exceptions and issue waivers to permit certain financial incentive programs, noting that its authority to issue waivers is broader than its authority to create Stark exceptions. We modified the draft to reflect the Department’s position.
Contracts of federal executive agencies that use appropriated funds are administered in accordance with laws, FAR, agency-specific FAR supplements, the Cost Accounting Standards (CAS), and the terms of the contract. HHS’ FAR supplement, the Health and Human Services Acquisition Regulations (HHSAR), contains additional requirements not found in the FAR, such as disallowing payments to contractors for independent research and development costs. The purpose of CAS is to help achieve uniformity and consistency in contractors’ cost accounting practices and provide rules for estimating, accumulating, and reporting costs under government contracts and subcontracts. For example, CAS requires certain contractors to prepare a disclosure statement that describes their accounting practices and requires that similar costs be treated in the same manner. Contractor compliance with CAS is monitored by a contractor’s cognizant federal agency. The cognizant federal agency is usually the agency with the largest dollar amount of negotiated contracts, including options, with the contractor. To help ensure continuity and ease of administration, FAR recommends that once an agency assumes cognizant federal agency responsibilities for a contractor, it generally retains cognizant status for at least 5 years. If, at the end of the 5-year period, another agency has the largest dollar amount of negotiated contracts including options, the two agencies coordinate and determine which one will assume the responsibilities. In addition to monitoring CAS compliance, the cognizant federal agency is responsible for determining if the contractor’s billing and accounting systems are adequate to record and bill costs in accordance with FAR. Based on an audit of information provided by the contractor, the cognizant federal agency also establishes the provisional indirect cost rates that contractors use to estimate indirect costs on their invoices.
The cognizant federal agency also establishes final indirect cost rates based on an audit of actual costs of the contractor during the year. The final indirect cost rates are used to adjust contractor billings (based on provisional indirect cost rates) for actual costs and may result in an additional cost or savings to the government. The final indirect cost rates established by the cognizant federal agency are utilized by agencies dealing with the contractor. Because other agencies rely on this cost information and oversight, it is particularly important that the cognizant federal agency fulfills its responsibilities. MMA significantly changed Medicare law covering CMS’s contracting for Medicare claims administration services. CMS refers to these changes, which are intended to improve service to beneficiaries and health care providers, as Medicare contracting reform. The implementation of contracting reform, which CMS is required to complete by October 2011, will fundamentally change Medicare claims administration contracting practices. Specifically, MMA requires CMS to use competitive procedures to select Medicare Administrative Contractors (formerly referred to as claims administration contractors) and to follow FAR except where specific MMA provisions differ. Prior to MMA, CMS was generally exempt from these requirements for its claims administration contractors. According to data provided by CMS’s Office of Acquisition and Grants Management (OAGM), during fiscal year 2006 CMS awarded contracts valued at about $3.8 billion. Of that amount, about half represented Medicare claims administration contracts that were not previously subject to FAR. The other half was already covered by FAR and is the category of contract primarily covered by this report. The contract life cycle includes many acquisition and administrative activities. 
Prior to award, an agency identifies a need; develops a requirements package; determines the method of contracting; solicits and evaluates bids or proposals; and ultimately awards a contract. After contract award, the agency performs contract administration and contract closeout. Contract administration involves the agency monitoring the contractor’s progress and processing payments to the contractor. The contract closeout process involves verification that the goods or services were provided and that administrative matters are completed. Also during contract closeout, a contract audit of costs billed to the government may be performed and the agency processes the final invoice with an adjustment for any over- or underpayments. Agencies may choose among different contract types to acquire goods and services. This choice is the principal means that agencies have for allocating risk between the government and the contractor. Contract types can be grouped into three broad categories: fixed price contracts, cost reimbursement contracts, and time and materials (T&M) contracts. As discussed below, these three types of contracts place different levels of risk on the government, which the government generally manages through oversight. For fixed price contracts, the government agrees to pay a set price for goods or services regardless of the actual cost to the contractor. A fixed price contract is ordinarily in the government’s interest when a sound basis for pricing exists as the contractor assumes the risk for cost overruns. Under cost reimbursement contracts, the government agrees to pay those costs of the contractor that are allowable, reasonable, and allocable to the contract. The government assumes most of the cost risk because the contractor is only required to provide its best effort to meet contract objectives within the estimated cost. 
If the contractor cannot meet the objectives within the estimated cost, the government must decide whether to provide additional funds to complete the effort, withhold additional funds, or terminate the contract. The FAR requires agencies to mitigate risks through adequate government surveillance (oversight) during the performance of the contract. In addition, the contractor must have adequate accounting systems to record and bill costs. For T&M contracts, the government agrees to pay fixed per-hour labor rates and to reimburse other costs directly related to the contract, such as materials, equipment, or travel, based on cost. As with cost reimbursement contracts, the government assumes the cost risk because the contractor is only required to make a good faith effort to meet the government’s needs within a ceiling price. In addition, since these contracts provide no positive profit incentive for the contractor to control costs or use labor efficiently, the government must conduct appropriate surveillance of contractor performance to ensure efficient methods and effective cost controls are being used. At CMS, OAGM manages contracting activities and is responsible for, among other things, (1) developing policy and procedures for use by acquisition staff; (2) coordinating and conducting acquisition training; and (3) providing cost/price analyses and evaluations required for the review, negotiation, award, administration, and closeout of contracts. Multiple key players work together to monitor different aspects of contractor performance and execute preaward and postaward contract oversight. All but one of the players described below are centralized in OAGM. Project officers are assigned from CMS program offices. Contracting officers are responsible for ensuring performance of all necessary actions for effective contracting, overseeing contractor compliance with the terms of the contract, and safeguarding the interests of the government in its contractual relationships.
The contracting officer is authorized to enter into, modify, and terminate contracts. Contracting specialists represent and assist the contracting officers in dealings with the contractor, but are generally not authorized to commit or bind the government. Additionally, the contracting specialist assists with the invoice review process. The cost/price team serves as an in-house consultant to others involved in the contracting process at CMS. By request, the team, which consists of four contract auditors, provides support for contract administration, including reviewing cost proposals, consulting on the allowability of costs billed on invoices, and assisting during contract closeout. Project officers serve as the contracting officer’s technical representative designated to monitor the contractor’s progress, including the surveillance and assessment of performance and compliance with project objectives. The project officer also reviews invoices and conducts periodic analyses of contractor performance and cost data. Within HHS, cognizant federal agency oversight responsibilities are divided among different agencies and offices. In 2002, HHS designated the National Institutes of Health (NIH) as responsible for establishing provisional and final indirect cost rates when requested by other HHS agencies to perform such duties. Other responsibilities, such as monitoring a contractor’s compliance with CAS, belonged to the individual HHS agency or office, such as CMS, that primarily works with the contractor. Because certain cognizant federal agency responsibilities at HHS were assigned to CMS, we refer to CMS as the cognizant federal agency. At CMS, the cost/price team was assigned these other cognizant federal agency responsibilities. CMS could also pay another agency to assist it with the necessary oversight.
For example, within the Department of Defense (DOD), the Defense Contract Audit Agency (DCAA) performs contract audits, including those required to fulfill DOD’s responsibilities as a cognizant federal agency. When requested and for a fee, DCAA will perform contract audits for other agencies. Congress appropriated to CMS $1 billion to fund start-up administrative costs to implement MMA provisions. CMS received $975 million, and Congress transferred the remaining $25 million to the HHS Office of the Inspector General (OIG) for oversight of the Part D program, including detecting and preventing fraud and abuse and the design and maintenance of a drug pricing database. CMS’s $975 million appropriation was available for obligation through September 2006. According to CMS financial data, CMS obligated $974.6 million and, from January 2004 through December 2006, expended over $908 million, of which about $735 million or 81 percent was paid to contractors and vendors for a variety of services. Payments were also made for services provided by other federal and state agencies, for CMS employee-related expenses, and for purchase card transactions. Figure 1 summarizes the amounts CMS paid to various recipients. CMS paid $735.4 million to over 250 different contractors and vendors. Of this amount, CMS paid about $521.2 million to 16 major contractors, $26.7 million to several Medicare contractors serving as fiscal intermediaries and carriers that administer Medicare benefits on behalf of CMS, and an additional $187 million to over 200 other contractors and vendors. Our assessment of CMS’s contracting practices and related internal controls was based primarily on specific controls over the contracts funded with MMA money for the 16 major contractors listed in table 1. 
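The spending shares above can be cross-checked with a quick calculation. This is only a sanity check using the rounded dollar amounts quoted in this report, not the underlying CMS financial data:

```python
# Rounded figures from the report, in millions of dollars.
total_expended = 908.0        # "expended over $908 million" (Jan. 2004 - Dec. 2006)
paid_to_contractors = 735.4   # total paid to contractors and vendors

# Share of MMA expenditures that went to contractors and vendors.
share = paid_to_contractors / total_expended * 100
print(f"contractor share: {share:.0f}%")  # ~81%, as reported

# Components of the $735.4 million; the $187 million figure for the more
# than 200 other contractors and vendors is approximate in the report.
components = [521.2, 26.7, 187.0]  # 16 major contractors, Medicare contractors, others
assert abs(sum(components) - paid_to_contractors) < 1.0
```

The components sum to $734.9 million, within rounding of the $735.4 million total, consistent with the text's use of "about" for the individual amounts.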
Based on our analysis of contracts and invoices paid with MMA funds, figure 2 summarizes the types of activities provided by contractors and vendors such as information technology, the 1-800-MEDICARE help line, outreach/education, program support, and program integrity. Information technology: CMS paid $244.0 million for a variety of information technology services including new hardware and software, updates to existing systems, and the development of new systems. For example, CMS used MMA funds to modify its existing contract with CGI Federal (CGI) to update the system that handles Medicare claims appeals so that the system could also handle prescription drug claims. CMS also used MMA funds to modify its contract with Computer Sciences Corporation for the redesign of the beneficiary enrollment and payment system so that the system could also handle prescription drug beneficiaries. CMS also contracted with Iowa Foundation for Medical Care (IFMC) to develop a system to facilitate studies of chronic condition care, as specifically required by MMA. 1-800-MEDICARE help line: CMS paid $234.4 million for the operation of the 1-800-MEDICARE help line, a CMS-administered help line used to answer beneficiaries’ questions about Medicare eligibility, enrollment, and benefits. Because the help line’s call volume significantly increased with the anticipation of the new prescription drug benefit, CMS used MMA funds to expand help line operations and fund a portion of help line costs. CMS contracted with both NCS Pearson (Pearson) and Palmetto GBA (Palmetto) for help line operations. Outreach/education: CMS paid $98.9 million for a variety of outreach and education activities, including $67.3 million to inform beneficiaries and their caregivers about the changes to Medicare benefits and $31.6 million to meet the information and education needs of Medicare providers. 
For example, CMS paid Ketchum, a public relations and marketing firm, $47.3 million to provide outreach and education to the public. Ketchum assisted with a number of initiatives, including a nationwide bus tour, which traveled to targeted cities across America to promote key messages regarding Medicare prescription drug coverage. To further the television advertising campaign, Ketchum facilitated a number of media buys (the buying of advertising space) for commercials to inform the public about the new prescription drug benefit. CMS paid $31.6 million to Medicare contractors serving as fiscal intermediaries and carriers that administer Medicare benefits on behalf of CMS. These contractors, such as Blue Cross Blue Shield, assisted with provider customer service as required by MMA to meet the information and education needs of providers. Program support: CMS paid $61.4 million for program support activities to assist with the implementation of the changes to the Medicare program. For example, CMS contracted with Booz Allen Hamilton (BAH) to perform an analysis of the prescription drug industry, review MMA legislative requirements, and develop application requirements for the prescription drug plans. CMS also contracted with BAH to support the development of the statements of work for the 1- 800-MEDICARE help line contracts, including assisting CMS with monitoring and oversight of the contracts. Program integrity: CMS paid $14.3 million for program integrity (antifraud and abuse) activities. For example, CMS paid one contractor $810,000 to assist CMS as one of the Medicare Drug Integrity Contractors. These contractors assist CMS in antifraud and abuse efforts related to the prescription drug benefits. Other examples of program integrity activities include oversight of the prescription drug card and coordination of benefit payments to prevent mistaken payment of Medicare claims. 
In addition to the $735.4 million that CMS paid to contractors and vendors, based upon information in CMS’s disbursement data and descriptions in interagency agreements and on invoices, we determined that CMS also made payments to other federal agencies, for employee-related costs, to state agencies, and for purchase card transactions. Payments to federal agencies: CMS paid $105.0 million to other federal agencies. These payments included $27.5 million to the U.S. Postal Service for mailing services; $26.2 million to the Government Printing Office for printing services; $5.8 million to the Office of Personnel Management for various services, including the development of training courses; and about $19 million to other HHS divisions for human resources, legal, and other services. CMS also paid about $24 million to the General Services Administration (GSA) for services including telephone and network services, building renovations, and renovating a leased facility to include a new training center and additional office space. Payments for CMS employee-related costs: CMS paid $42.1 million for employee-related costs, including $38.2 million for payroll costs and $3.9 million for travel costs. The payroll costs covered about 500 new employees hired in response to MMA and did not include payroll costs for existing CMS employees working on MMA. While these new employees were hired to work in divisions throughout CMS and in various regions of the country, the largest group of employees, 174, was hired to work in CMS’s Center for Beneficiary Choices, which is responsible for operations related to the prescription drug plans. Payments to state agencies: CMS paid $23.8 million to state agencies as grants under the State Health Insurance Assistance Program. Under the program (which operates in all 50 states, the District of Columbia, the Virgin Islands, Puerto Rico, and Guam) the agencies provide advisory services to Medicare-eligible individuals and their caregivers. 
CMS relied on these state agencies to play a significant role in providing counseling and education services on the changes to Medicare, including the new prescription drug benefit. Payments using purchase cards: CMS paid $2.0 million using purchase cards to acquire office supplies, outreach materials, and information technology equipment. An example of outreach materials was $148,391 that CMS paid for 25,000 paperweights to be distributed at MMA outreach events, such as during the nationwide bus tour. CMS also made a number of audio and video equipment purchases for its television studio. Purchase cards were also used to pay for training such as training for MMA new hires, computer training, and preretirement training. The CMS operating environment created vulnerabilities in the contracting process and increased the risk of waste and improper payments. Over the past several years, resources allocated to contract oversight at CMS have not kept pace with the dramatic increase in contract awards. Additionally, CMS did not allocate adequate funding for contract audits and other contractor oversight activities essential to effectively fulfilling its critical cognizant federal agency responsibilities. Further, risks in CMS’s contracting practices made CMS vulnerable to waste. For example, CMS did not always benefit from the effects of competition when awarding contracts. In addition, CMS frequently used a contract type—cost reimbursement—under which the government assumes most of the cost risk. In some cases, this contract type was used by CMS contrary to FAR requirements. In addition, CMS’s approval of certain subcontractor agreements may have increased the costs to obtain services. CMS often applied flawed procedures to review and approve invoices. 
The flawed procedures were caused, in part, by pervasive internal control deficiencies, such as a lack of policies and procedures providing sufficient guidance for reviewing invoices and requiring adequate supporting documentation to enable such reviews. Additionally, CMS did not sufficiently train its key staff in appropriate invoice review techniques, including identifying risks to the government based on contract type. Further, CMS’s payment process, called negative certification, provided no incentive for staff to review invoices, as payments would be made without a certification of review. Finally, CMS did not close out contracts within time frames set by FAR. With only one OAGM contracting officer tasked with closing contracts, CMS has accumulated approximately 1,300 contracts with a total contract value of about $3 billion needing closeout as of September 30, 2007. Over the past several years, CMS resources allocated to contract oversight have not kept pace with CMS’s increase in contract awards. Additionally, CMS did not allocate sufficient funding for contract audits and other critical contractor oversight activities to fulfill its cognizant federal agency responsibilities. These contractor oversight responsibilities include establishing indirect cost rates with the contractor and verifying that the contractor has the necessary systems and processes in place to accurately bill the government. Moreover, risks in certain contracting practices related to noncompetitive contracts, cost reimbursement contracts, and subcontractor agreements made CMS vulnerable to waste. When an organization places sufficient emphasis on accountability or dedicates sufficient management attention to systemic problems, it reduces risk and potential vulnerabilities in operating activities.
An organization’s control environment, that is, management’s overall approach toward oversight and accountability including a supportive attitude towards internal control, provides discipline and structure that influences the way the agency conducts its business. As stated in GAO’s standards for internal control, a strong control environment is the foundation for all other elements of internal control. From fiscal year 1997 to 2006, as shown in figure 3, CMS contracting has dramatically increased; however, contract oversight resources have remained fairly constant. Specifically, contract awards have increased from about $1.9 billion in 1997 to about $3.8 billion in 2006, an increase of 103 percent, while oversight resources increased from 79 full time equivalents (FTE) in 1997 to 88 in 2006, an increase of about 11 percent. This trend presents a major challenge to contracting award and administration personnel who must deal with a significantly increased workload without additional support and resources. As the cognizant federal agency, CMS was responsible for ensuring that certain critical contractor oversight was performed, including establishing provisional and final indirect cost rates, assessing the adequacy of accounting systems, and monitoring compliance with CAS. CMS did not have sufficient procedures in place to ensure its cognizant federal agency responsibilities were fulfilled, to readily know the contractors it was responsible for as the cognizant federal agency, or to readily know which contractors were subject to CAS, which would require additional oversight to be performed. We requested a listing of contractors for which CMS was the cognizant federal agency to determine whether the oversight activities were performed for the contractors in our review. 
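The figure 3 growth comparison can be recomputed from the quoted values. Note that the dollar amounts in the text are rounded to the nearest $0.1 billion, so the recomputed award growth comes out near 100 percent rather than the report's 103 percent, which presumably reflects unrounded awards data:

```python
# Values quoted in the text for fiscal years 1997 and 2006.
awards_1997, awards_2006 = 1.9, 3.8   # contract awards, $ billions (rounded)
fte_1997, fte_2006 = 79, 88           # contract oversight full-time equivalents

def pct_growth(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(f"award growth: ~{pct_growth(awards_1997, awards_2006):.0f}%")  # ~100% on rounded data
print(f"FTE growth:   ~{pct_growth(fte_1997, fte_2006):.0f}%")        # ~11%, as reported
```

Either way, the order-of-magnitude gap holds: contract awards roughly doubled while oversight staffing grew by about a tenth.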
However, because of missing and conflicting data in the information provided by CMS, we independently examined the contract files and spoke with contractors, NIH, DCAA, and CMS officials to determine that at the end of fiscal year 2006, CMS was the cognizant federal agency for 8 of the 16 contractors in our review. The contracts in our review for these 8 contractors had a total value of nearly $1 billion as of August 2007. As shown in table 2, we found that critical cognizant federal agency duties either were not performed or were performed only partially or insufficiently. Table 2 also shows that CMS did not fully ensure that its cognizant federal agency duties were completely performed for any of the 8 contractors. We found that the listings CMS provided of contractors, including those for which it was the cognizant federal agency, were neither complete nor accurate. CMS provided us with two listings, one prepared in 2005 and another prepared in 2007. The 2005 listing included data fields to record the applicable cognizant federal agency and the status of the cognizant federal agency responsibilities listed in table 2. However, this listing was missing key information for several contractors. For example, there was no information regarding the cognizant federal agency for Ketchum or the status of the cognizant federal agency responsibilities. The 2007 listing included a data field to record the applicable cognizant federal agency, but did not have data fields to record the status of cognizant federal agency responsibilities. In addition, the listings did not clearly or consistently identify whether CMS was the cognizant federal agency. For example, in the 2005 listing, CMS was identified as the cognizant federal agency for IFMC; however, IFMC was not included in the 2007 listing.
Subsequently, we verified with CMS officials that CMS was still the cognizant federal agency for IFMC but it was inadvertently excluded from the 2007 listing. The CAS states that agencies shall establish internal policies and procedures to govern how to monitor contractors’ CAS compliance, with a particular emphasis on interagency coordination activities. CMS did not have agency-specific policies and procedures in place to help ensure that its cognizant federal agency responsibilities were properly performed, including the monitoring of contractors’ CAS compliance. Of the eight contractors in our review, for which CMS was the cognizant federal agency, seven were subject to CAS at the end of fiscal year 2006. Generally, CMS requested DCAA to perform audit work for some of its cognizant federal agency duties. Further, for HHS, NIH was the agency assigned responsibility for auditing provisional and final indirect rates. However, NIH would not know this work is needed, unless CMS makes a request. In January 2007, one contractor sent a letter to CMS indicating that while CMS had performed some of the cognizant federal agency functions “on an ad hoc basis over the past year,” the contractor wanted “to have a more formal relationship in place.” The contractor noted that until its indirect cost rates are audited and finalized, it will be “unable to submit final closeout invoices on cost reimbursable work.” Because other agencies rely on the work performed by cognizant federal agencies in their own contracting activities, CMS’s failure to ensure its cognizant federal agency responsibilities were fulfilled not only increased risks to CMS, but also to other federal agencies that use the same contractors. 
For example, we noted that according to one contractor’s audited financial statements, as of December 31, 2005, the contractor reported a liability of about $3.8 million for billing the government more than its actual costs, including about $2.8 million associated with CMS contracts and $1.0 million related to a DOD contract. At the time of our review, CMS, as the contractor’s cognizant federal agency, had not established its final indirect cost rates for years after 2004, which would be necessary for CMS and DOD to collect the overbilled amounts. CMS officials and cost/price team members attributed their limited ability to request contract audits—those required by FAR to fulfill cognizant federal agency responsibilities and for the contract closeout process—to the lack of sufficient allocation of funds for these efforts. For example, OAGM provided us with documentation that it requested from CMS management about $1.2 million for fiscal year 2005 and about $3.5 million for fiscal year 2006 to pay for proposal evaluations, accounting system reviews, and disclosure statement reviews to help CMS comply with FAR requirements. Despite these requests, OAGM was provided $30,000 in fiscal year 2005 and $18,320 in fiscal year 2006. Moreover, no funds were provided for this purpose in fiscal year 2007. Consistent with this, the cost/price team indicated that contract audits often “fall by the way-side” since its resources are limited. Not funding contract audits may limit CMS’s ability to close out contracts, as well as to detect and recover improper payments. Further, based on our review of payments to contractors, the contractors that we identified as having more questionable payments were contractors for which CMS was the cognizant federal agency. Contracting and procurement has been identified as an area that poses significant challenges across the federal government.
Our work and that of agency inspectors general has found systemic weaknesses in key areas of acquisition that put agencies at risk for waste and mismanagement. At CMS we found risks resulting from CMS’s failure to allocate sufficient resources for effective contract and contractor oversight, and we found that CMS engaged in certain contracting practices that made the agency more vulnerable to waste. For example, CMS did not always take advantage of the benefits of competition and frequently used a contract type—cost reimbursement—that by nature poses more risk to the government because the government assumes most of the cost risk. In addition, CMS approved some subcontractor agreements that may have unnecessarily increased the costs of obtaining those services. We also noted that, when awarding contracts, contracting officers did not always follow advice from others such as the cost/price team and HHS Office of General Counsel that could have mitigated some of these risks. CMS is generally required to obtain competition for the goods and services it procures. The FAR provides procedures for making price determinations and emphasizes the use of full and open competition in the acquisition process. Because a competitive environment generally provides more assurance of reasonable prices than a noncompetitive one, CMS is exposed to contracting vulnerabilities and potential waste due to practices that limit competition. About 45 percent of the contracts included in our review (representing about $499.1 million in total contract value) were awarded without the benefit of competition. According to CMS, noncompetitive procedures were used on the contracts in our review because (1) there was an unusual or compelling urgency for the work, (2) the award was made under the Small Business Administration (SBA) 8(a) criteria, or (3) the contracted activities were considered to be a logical follow-on to prior work.
While these are permissible reasons to limit competition, in the examples of the noncompetitive contracts described below, CMS’s contracting practices may not have sufficiently protected the government’s interest in obtaining the best value, in terms of fair and reasonable prices. The FAR allows for noncompetitive procedures when there is an unusual and compelling urgency that the government would be seriously injured unless competition is limited. When this exemption is used, an agency prepares a written justification and requests offers from as many potential sources as is practicable. Prior to a noncompetitive award to Maximus ultimately valued at about $6.5 million, the HHS Office of General Counsel reviewed CMS’s justification for other than full and open competition and had concerns with the legal sufficiency of the justification. The Office found that CMS did not demonstrate how it had met the FAR requirement to obtain offers from as many sources as possible or how the agency would be seriously injured if the exemption is not used. Additionally, according to the Office of General Counsel, the urgent and compelling justification did not support procurements in excess of a “minimum amount of time,” and suggested limiting the contract to a 5-month term and recompeting the contract during that time. Despite the advice of the Office of General Counsel, 2 days later CMS awarded the contract to Maximus for a 9-month period, never recompeted the contract, and eventually extended the period of performance another 17 months for a total of 26 months. For multiple awards to Z-Tech, CMS justified the sole-source noncompetitive awards using SBA’s 8(a) exceptions to competition subject to contract value thresholds. To use these exceptions, generally an agency obtains a written authorization from SBA, which places a limit on the dollar value of the contract. For one Z-Tech contract, CMS obtained authorization to award a contract for an amount up to $3.6 million. 
SBA also indicated that no other increases would be authorized under this contract and that further increases should be competed under a new contract. Nevertheless, CMS exceeded the SBA-authorized amounts and made awards to Z-Tech totaling about $4.4 million. Further, we found an agency internal document in a contract file that expressed concern that contract awards to Z-Tech may have been divided to avoid the dollar threshold that would require competition for 8(a) procurements. The FAR allows for limiting competition on the issuance of task orders under multiple award contracts if doing so is in the interest of economy and efficiency because it is a logical follow-on to an earlier task order that had been subject to competition. However, the frequent use of the logical follow-on exemption to competition may hinder an agency’s ability to obtain the best value for the taxpayer. About 24 percent of the contracts and task orders in our review, with a total value of nearly $390 million, were issued with no competition as a logical follow-on to a prior task order. Two of these logical follow-on task orders had total values of $234.6 million and $67.8 million. One role of the contracting officer is to select the contract type that is in the best interest of the government, places reasonable risk on the contractor, and provides the contractor with the greatest incentive for efficient and economical performance. Cost reimbursement contracts are suitable for use only when uncertainties involved in contract performance do not permit costs to be estimated with sufficient accuracy to use any type of fixed-price contract. We found that about 78 percent of the contracts we reviewed were cost reimbursement contracts. These cost reimbursement contracts had a total contract value of $1.2 billion. Some CMS officials told us that CMS was a “cost-type shop,” meaning that at CMS they prefer cost reimbursement contracts. 
When cost reimbursement contracts are used, the FAR requires additional procedures to mitigate the increased risk, such as adequate government surveillance. However, as discussed later in this report, CMS did not implement the oversight required for cost reimbursement contracts. In addition, before awarding a cost reimbursement contract, the contracting officer is required by the FAR to verify that the contractor has an adequate accounting system for determining costs applicable to the contract, which helps provide the government assurance that the contractor has systems in place to accurately and consistently record and bill costs in accordance with the FAR. During our review of CMS’s contract files, we found that contracting officers did not always proactively ensure the adequacy of contractors’ accounting systems prior to award of cost reimbursement contracts. We also noted instances in which CMS knowingly awarded cost reimbursement contracts to a contractor with a deficient accounting system, contrary to the FAR requirement. Specifically, the CMS cost/price team noted numerous significant deficiencies in how Palmetto accounted for costs and determined that Palmetto’s accounting system could not adequately account for its direct labor and indirect costs. The cost/price team notified the contracting specialist of the accounting system deficiencies and also stated that “corrections to system cannot be completed by the time this contract is awarded.” Despite this determination by the cost/price team, the contracting officer awarded two cost reimbursement contracts included in our review to Palmetto with a total contract value of $157.3 million. Further, the contracting officer awarded a third contract valued at $3.3 million to Palmetto without verifying whether Palmetto’s accounting system deficiencies had been resolved.
CMS also encouraged a contractor to use a cost reimbursement contract, even though the cost/price team raised concerns regarding the contractor’s proposal of certain costs as direct costs and the contractor’s ability to accumulate and record direct and indirect costs. Despite these concerns, CMS did not ask DCAA whether an accounting system audit had been performed until after the contract was awarded. CMS eventually requested an accounting system audit about a year and a half after contract award. Further, the contractor expressed concerns regarding the cost reimbursement contract type requested by CMS because it did not have prior experience with that contract type. CMS documented in the contract file that “after much deliberation, the contractor realized it was in best interest to accept a [cost reimbursement] contract.” In some instances, contractors’ inadequate accounting systems inhibited our ability to audit costs billed to the government because the contractors were unable to substantiate the costs billed. While it is not inappropriate for a prime contractor to use subcontractors to achieve the contract’s objectives, CMS’s approval of some subcontractor agreements may have increased the cost of obtaining the services through additional indirect costs and fees. For the contracts we reviewed, several of the prime contractors subcontracted for significant volumes of work. For example, on one task order between February 2004 and February 2005, Ketchum billed about $34.7 million, of which about $33.8 million, or 97 percent, was for subcontractor costs. Furthermore, about $32.3 million of these costs were related to a single subcontractor. During this same period, Ketchum billed only $59,509 for direct labor (which would include Ketchum’s oversight of the subcontractors) yet received about $694,000 in fees, more than 10 times the direct labor it provided under the contract.
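The pass-through economics described above can be checked with simple arithmetic; the sketch below (variable names are ours) uses the figures reported for the Ketchum task order:

```python
# Figures reported for the Ketchum task order (Feb. 2004 - Feb. 2005);
# variable names are illustrative.
total_billed = 34_700_000       # total billed by Ketchum
subcontract_costs = 33_800_000  # costs passed through from subcontractors
direct_labor = 59_509           # Ketchum's own direct labor
fees = 694_000                  # fees received by Ketchum

subcontract_share = subcontract_costs / total_billed  # roughly 0.97
fee_to_labor_ratio = fees / direct_labor              # more than 10

print(f"subcontract share: {subcontract_share:.0%}")
print(f"fees vs. direct labor: {fee_to_labor_ratio:.1f}x")
```

The fee-to-labor ratio is what makes the arrangement costly to the government: the fees paid to the prime far exceeded the prime's own labor effort on the order.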
The contracts for the operation of the 1-800-MEDICARE help line are another example of cost increases caused by subcontractor agreements. CMS hired two contractors to operate the help line—Pearson and Palmetto. While each contractor had its own contract with CMS that required them to provide similar services, Pearson and Palmetto subsequently subcontracted with each other, again for the same services. Consequently, the costs to operate the help line were increased through additional indirect costs and fees. Specifically, CMS paid Palmetto an additional $3.6 million (for indirect costs and fees applied to the Pearson services included with Palmetto’s invoices) that may not have been paid absent the subcontract agreement, such as if Pearson provided the services under its own prime contract. In addition, CMS paid Pearson an additional $630,000 in fees that may not have been paid absent the subcontract agreement. In addition to increased risks associated with CMS’s operating environment and certain contracting practices, pervasive internal control deficiencies in its invoice review and approval process increased the risk of improper payments. These deficiencies were caused in part by inadequate policies and procedures for invoice review and insufficient training of key personnel. CMS also did not perform timely contract closeout procedures, including contract audits to determine the allowability of billed amounts. GAO’s standards for internal control state that control activities are the policies, procedures, and mechanisms that address risk and are an integral part of an organization’s stewardship of government resources. Effective controls are even more important given CMS’s risks and vulnerabilities in the contracting process caused by its operating environment. 
Effective policies and procedures for reviewing and approving contractor invoices comprise numerous control activities and help to ensure that goods and services were actually received and that amounts billed represent allowable costs. At CMS, the project officer’s role is to review invoices for technical compliance and the accuracy of quantities billed, whereas the contracting specialist’s role is to determine whether the amounts billed comply with contract terms, such as indirect cost rates or ceiling amounts. We found that CMS often used flawed procedures to review and approve contractor invoices. These flawed procedures were caused, in part, by a lack of specific guidance and procedures for contracting officials to follow as well as by insufficient training. Inadequate policies and procedures over invoice review: CMS’s policies and procedures did not provide adequate detail on how to review invoice cost elements. For example, CMS’s acquisition policy for invoice payment procedures simply states that “the project officer shall certify whether or not the invoice is approved for payment” and “the contracting specialist will review the invoice and (the project officer’s certification).” The policy did not give specific instructions or guidance on how to review an invoice or on which invoice elements should receive the most scrutiny given the nature of the services provided or the contract type. Lack of requirements for invoice detail: CMS did not require contracting officers to ensure that contractors provide a sufficient level of detail supporting their invoices to allow responsible CMS personnel to review key elements. As a result, CMS often did not require contractors to provide adequate detail in invoices to support review of billed costs, such as labor charges or travel.
For example, some contractors included only lump sum amounts showing the number of hours worked and the associated dollar amount for labor costs but did not provide a list of hours worked by employee or the respective labor rates. Without this information, it was not possible for CMS to verify whether the amounts billed corresponded to employees who actually worked on the project. One contractor stated that CMS requested only lump sum amounts for travel with no detailed information or travel receipts. Without this information, CMS could not verify that travel costs were related to the contract or were in accordance with FAR requirements. Insufficient training: CMS did not sufficiently train staff on how to adequately review invoices, such as how to identify risks to the government based on contract type and how to verify labor rates or hours worked. As a result, project officers and contracting specialists were not always aware of their invoice review responsibilities. Some project officers told us that they had received training only “on the job.” Further, several staff we interviewed referred to the Project Officer Handbook as a source for guidance on the project officer’s responsibilities. We reviewed this handbook and found that it did not provide any practical guidance on how to review invoices and focused more on the acquisition process (i.e., developing statements of work and preparing acquisition planning documents). In addition, two contracting officers said they attended a 2-hour training session sponsored by CMS’s Office of Financial Management (OFM) and that it was helpful in providing guidance on how to review invoices. We also reviewed this training material and found that it did not sufficiently cover invoice review procedures. The training materials included one slide indicating that it was the project officers’ responsibility to review invoices, but they did not provide specific examples of invoice review procedures.
An OFM official told us that the training was intended to provide detailed guidance on budgeting and appropriation procedures, not invoice review. Lack of incentive to review invoices: CMS uses a payment process, known as negative certification, whereby OFM pays contractor and vendor invoices without knowing whether the invoices were reviewed and certified. Negative certification is used, in part, to help the agency meet Prompt Payment Act requirements. However, this process is the default for all invoice payments regardless of factors that may increase risk to the agency, including contract type or prior billing problems with the contractor. By contrast, DOD allows contractors to participate in direct billing, a process similar to negative certification, only if they meet certain criteria, such as having adequate accounting systems, billing rates established based upon recent reviews, and timely submissions of cost information as required by the FAR. CMS’s negative certification process provides little incentive for personnel to perform timely reviews of invoices, or for reviews to take place at all. In our review of contract files, we found that certificates of review by the project officer were not always included in the contract files, and when the certificates were included, they generally did not include evidence to document the review, such as tickmarks or notes, and they were not always signed. Without sufficient policies and procedures, training, and incentives to review invoices, key staff often used flawed procedures. Contracting officers, specialists, and project officers told us they reviewed invoice costs, such as labor rates for cost reimbursement contracts, against amounts proposed by the contractor prior to award. However, this practice has little value for cost reimbursement contracts because the FAR calls for the payment of actual allowable costs, rather than costs proposed prior to performance of the contract.
Contracting specialists and project officers also told us they reviewed invoices by comparing current invoices to those of prior months and to burn rates (the rate at which CMS is expending dollars obligated to the contract). This procedure provides no assurance that the amounts billed are allowable. Additionally, several project officers told us that they compared invoices to monthly reports prepared by the contractors. This procedure has limited value because it does not involve verifying amounts billed against source documents, such as time sheets, payroll registers, or vendor invoices. Also, when we reviewed the monthly reports, we noted that the reports could not always be reconciled to the invoices, which would hinder the project officer’s ability to use the monthly reports in determining the validity of the billed amounts. As described later in this report, we found payments for potentially unallowable costs that could have been identified had proper invoice review procedures been in place. Further, contracting and project officers did not call for additional oversight procedures when they approved complex subcontractor arrangements, such as when a contractor provides the same services both as a prime contractor and as a subcontractor to another contractor. When these types of relationships exist, improper payments or double-billings may go undetected if a contractor bills the same services on both its prime contract invoices (which are reviewed by the government) and its subcontract invoices (which are reviewed by the other prime contractor). Further, some officials indicated that they relied on contract audits rather than invoice review procedures to catch improper payments. One contracting officer stated that it was not the contracting officer’s or specialist’s responsibility to review invoices for fraudulent billings, such as double-billings, because such billings would only be found during a closeout audit.
While an audit during the closeout process may provide a detective control to identify improper payments after they were made, timely invoice review procedures provide the necessary preventive controls to help ensure that improper payments are not made and would allow CMS to take corrective actions, if necessary. For example, it would be more effective to review the accuracy of labor billings while the contractor is still performing services rather than after the fact during the closeout process, which may be several years later. CMS did not perform its contract closeout procedures in accordance with FAR time frames and, until recently, did not have contract closeout policies. The FAR requires agencies to close out a contract after the work is physically completed (i.e., goods or services are provided). The closeout process is an important internal control, in part, because it is generally the last opportunity for the government to detect and recover any improper payments. The closeout process includes verifying that administrative matters are completed, adjusting provisional indirect cost rates to actual final indirect cost rates, performing a contract audit of costs billed to the government, and making final payments. The complexity and length of the process can vary with the extent of oversight performed by the agency and the contract type. The FAR generally calls for fixed-price contracts to be closed within 6 months; contracts requiring the settlement of indirect cost rates, such as cost reimbursement contracts, to be closed within 36 months; and all other contracts to be closed within 20 months. These time frames begin in the month in which the contracting official receives evidence of physical completion of the contract. According to information provided by OAGM management, as of September 30, 2007, CMS’s contract closeout backlog was approximately 1,300 contracts with a total contract value of approximately $3 billion.
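The FAR closeout time frames just described can be expressed as a simple lookup; the sketch below (the function name and category keys are ours) computes the month by which a contract should be closed, counted from the month in which evidence of physical completion is received:

```python
from datetime import date

# FAR closeout time frames as described in the report: 6 months for
# fixed-price contracts, 36 months for contracts requiring settlement of
# indirect cost rates (e.g., cost reimbursement), and 20 months for all
# others. Category keys are illustrative.
CLOSEOUT_MONTHS = {
    "fixed-price": 6,
    "indirect-rate-settlement": 36,
    "other": 20,
}

def closeout_deadline(completion: date, contract_type: str) -> date:
    """Return the first day of the month by which closeout is due,
    counted from the month of physical completion."""
    months = completion.year * 12 + (completion.month - 1) + CLOSEOUT_MONTHS[contract_type]
    return date(months // 12, months % 12 + 1, 1)

# A cost reimbursement contract physically completed in September 2007
# would be due for closeout by September 2010.
print(closeout_deadline(date(2007, 9, 30), "indirect-rate-settlement"))
```

Applying such a calculation to a backlog list would make overdue closeouts, like the 407 noted in CMS's backlog report, straightforward to flag.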
The backlog report indicated that 407 contract closeouts were overdue according to FAR timing requirements. Currently, CMS has only one contracting officer responsible for the closeout process. Several of the contracts on the backlog list completed contract performance as far back as 1999. CMS established agency-specific contract closeout policies in February 2007. One CMS official stated that prior to the closeout policies, some contracting officials and specialists often passed on contract files to the closeout staff before compiling all required documentation. Because of this, the sole staff member responsible for CMS’s contract closeout procedures has to spend time tracking down required documents rather than performing actual closeout procedures. A key element of the closeout process is the contract audit of costs billed to the government. This audit is used to verify that the contractor’s billed costs were allowable, reasonable, and allocable, which is critical for a cost reimbursement contract. This audit is even more important at CMS because of CMS’s dependence on cost reimbursement contracts and the reliance placed on the contract audits instead of invoice review procedures. As previously mentioned, CMS has not allocated sufficient resources to ensure contract audits take place. As a result, CMS has limited its ability to detect and recover improper payments from contractors. Because of the risks in CMS’s contracting practices and pervasive internal control deficiencies, CMS was highly vulnerable to waste and improper payments. Due to this increased risk, we selected contractor transactions to test and found nearly $90 million of payments to contractors that we questioned because the payments were potentially improper, unsubstantiated, or wasteful. Potentially improper payments include payments for costs that did not comply with the terms of the contract or applicable regulation. 
Unsubstantiated payments are related to costs that were not adequately supported. Wasteful payments are those for which risks in CMS’s contracting practices may have resulted in CMS not obtaining the best value. In some cases, a portion of the questionable payment most likely relates to allowable costs, but due to the facts and circumstances involved, we were unable to determine whether or to what extent the costs were allowable, reasonable, and allocable. As a result, some portion of the total amount of questionable payments we identified ultimately may be determined by CMS to be allowable and therefore not recoverable from the contractor. Table 3 summarizes the questionable payments we identified. Appendix I provides a summary by contractor of the questionable payments we identified. Because CMS sometimes used other funding sources in addition to MMA to pay invoices for one contract, we were not always able to identify specific costs that were paid with MMA funds. As a result, the scope of our review extended beyond payments made with MMA funds for some contracts and the amount of questionable payments we identified may not have been paid solely with MMA funds. Given CMS’s poor control environment and the fact that our work was not designed to identify all questionable payments made by CMS or to estimate their extent, CMS may have made other questionable payments. Appendix II provides details on the amounts by contractor that we reviewed and the amounts paid with MMA funds. Contracts contain the terms and provisions that set the parameters for allowable costs and the necessary documentation required to support the contractor’s billings. For example, contracts may set ceiling limits on the amount of indirect costs a contractor may bill or the amount a contractor may bill for subcontractor costs. 
Additionally, contracts incorporate numerous FAR provisions that the contracting officer determines to be applicable to the contract; these provisions may, for example, require the contractor to follow CAS or restrict the contractor’s travel costs. The contractor is required to bill the government in accordance with the terms of the contract and, as part of its invoice review and approval process, the government is responsible for ensuring that billings comply with those terms. We identified numerous questionable payments totaling about $24.5 million that represent potentially improper payments for contractor costs that did not comply with the terms of the contract or applicable regulation. Labor categories outside the terms of the contract – $1.7 million: CMS paid CGI, BAH, International Business Machines (IBM), and IFMC for labor categories that were not specifically listed in the terms of the task orders. For example, CGI’s task order specified “should the contractor wish to utilize additional GSA IT labor categories…prior CMS approval must be obtained.” CGI did not seek, and CMS did not give, approval for the use of four labor categories, totaling about $1.3 million. Also, CMS paid BAH about $208,000 for labor categories that were not specifically listed in the terms of the task order. BAH told us that in its proposal for a modification to the task order, it proposed using the additional labor categories. However, according to the task order, the modification, and other CMS internal contract documents, no additional labor categories were added to the contract. During our review, we also identified payments to IBM and IFMC of about $231,000 and $3,000, respectively, for labor categories that were not specifically listed in the terms of the task orders. In these four instances, CMS made questionable payments of over $1.7 million.
Indirect cost rates exceeded contract ceiling rates – $17.6 million: CMS paid Palmetto, TrailBlazer, and Maximus for indirect costs that exceeded amounts allowed under indirect cost rate ceilings established in the respective contracts. The contract between CMS and Palmetto included acceptable indirect cost rates, based upon the indirect costs proposed by Palmetto, and applicable ceiling rates. Overhead was not included in the contract as an accepted indirect cost. Nevertheless, Palmetto billed, and CMS paid, at least $16.2 million of overhead costs. CMS told us that the contract was not modified to include overhead and that “for the government to continue business with in good faith... had to work with Palmetto as it transitioned to becoming CAS and FAR compliant.” Palmetto notified CMS that an overhead rate was added to its billing structure, yet CMS did not modify the contract to include the overhead rate. In addition, TrailBlazer billed nearly twice as much as the contract allowed for overhead. During 2006, CMS paid TrailBlazer $1.4 million for G&A and overhead costs greater than the amount allowed by rate ceilings in the contract between CMS and TrailBlazer. TrailBlazer told us that the indirect cost rate ceilings incorporated into its contract at the time of award were based on its accounting system that, at the time, was not compliant with CAS. Subsequently, in January 2006, when TrailBlazer changed its accounting system to be CAS compliant, the rate ceilings were no longer reflective of its billing structure. In June 2007, TrailBlazer submitted to CMS, its cognizant federal agency, a cost report supporting an increase to its indirect cost rates for 2006. However, CMS did not issue a modification to amend the contract and increase the indirect cost rate ceilings. CMS also paid Maximus $16,000 in excess of its G&A rate ceiling. In these three instances, CMS made questionable payments of over $17.6 million. 
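The ceiling-rate overbillings above could have been caught at invoice review with a simple cap calculation; the sketch below (the function name and example figures are hypothetical, not the actual contract rates) shows the check:

```python
def questioned_indirect(cost_base: float, billed_rate: float, ceiling_rate: float) -> float:
    """Return the portion of billed indirect costs above the contract's
    ceiling rate (zero when the billed rate is within the ceiling)."""
    billed = cost_base * billed_rate
    allowable = cost_base * min(billed_rate, ceiling_rate)
    return billed - allowable

# Hypothetical example: $1,000,000 cost base, 20% billed G&A, 15% ceiling
# rate, so $50,000 of the billed indirect costs would be questioned.
print(round(questioned_indirect(1_000_000, 0.20, 0.15), 2))
```

Absent a contract modification raising the ceiling, anything above the capped amount is questionable, which is the situation described for Palmetto, TrailBlazer, and Maximus.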
Subcontractor costs exceeded approved amount – $489,000: CMS paid CGI about $489,000 for subcontractor costs above the not-to-exceed amount established when CMS approved CGI’s use of subcontractors. Improper use of contract type – $4.5 million: In February 2005, CMS issued a sole-source, T&M task order to IBM under a commercial Army contract to procure commercial services. Because the FAR prohibited the use of other than fixed-price contracts to procure commercial services at the time the task order was awarded, we questioned the payments to IBM under this task order totaling approximately $4.5 million. Travel costs exceeding limits – $11,000: CMS paid ViPS and CGI for travel costs that exceeded FAR limits incorporated in their contracts. The FAR prohibits contractors from billing for other-than-coach transportation or above set limits for hotels, meals and incidentals, and mileage reimbursement. In several instances, ViPS billed the government $299 or more a night, in one case as high as $799 a night, excluding taxes, for hotel stays in Manhattan. During the applicable period, the federal hotel per diem limit for Manhattan was at most $200 a night. Additionally, the contractor billed the government for business class train travel and amounts that exceeded the meals and incidentals per diem. Each of the 14 ViPS travel vouchers we tested included costs that exceeded allowed amounts. In total, we identified questionable payments of nearly $10,000 for ViPS travel. CMS also reimbursed CGI about $1,000 for travel costs in excess of allowed per diem limits. Inappropriate calculation of labor – $9,000: CMS paid Ketchum for labor costs that exceeded Ketchum’s actual costs for those services on a cost reimbursement contract. Ketchum did not adjust its hourly labor rates to bill for actual labor costs when exempt salaried employees (employees not eligible for overtime compensation) worked more than the standard hours in a pay period.
By not adjusting (decreasing) the hourly labor rate to reflect the number of hours actually worked when an employee worked more than the standard hours, Ketchum charged the government more than its actual cost (the employee’s salary). For example, if an exempt employee earns $4,000 for working a 40-hour week, the employee’s hourly rate would be $100 ($4,000/40 hours). If that employee worked 50 hours in a week, the employee would still earn $4,000, and the hourly rate would be adjusted to $80 ($4,000/50 hours). In this scenario, if the hourly rate were not adjusted, the contractor would have billed $5,000 ($100 * 50 hours) when its actual costs were only $4,000. Based on the labor transactions we selected for review, totaling about $214,000, we estimated that CMS made about $9,000 of questionable payments as a result of Ketchum not adjusting its hourly labor rates. Labor costs inappropriately billed – $20,000: CMS paid nearly $20,000 to IFMC for vacation and sick leave that IFMC billed directly to the government. The FAR defines a direct cost as a cost that benefits a single cost objective (e.g., a contract) and an indirect cost as a cost that benefits more than one cost objective. Costs such as employees’ fringe benefits, vacation and sick leave, and other headquarters costs are common indirect costs. IFMC billed vacation and sick leave directly to whichever contract an employee was working on at the time the leave was taken. By billing vacation and sick leave as direct costs, IFMC may have billed more than CMS’s portion of the costs to CMS. For example, if an employee worked on one contract for 11 months and on a new contract in the twelfth month, and also took leave in the twelfth month, only the contract the employee worked on in the twelfth month would bear the entire cost of the leave. Had IFMC included its costs associated with vacation and sick leave in its indirect cost rates, these costs would have been proportionally allocated to all of IFMC’s contracts.
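The allocation difference described above can be sketched numerically; the figures below are hypothetical, not IFMC's actual amounts:

```python
# Hypothetical example: $4,000 of leave taken in the twelfth month by an
# employee who spent 11 months on contract A and 1 month on contract B.
leave_cost = 4_000.0
direct_labor = {"contract_A": 110_000.0,  # 11 months of direct labor
                "contract_B": 10_000.0}   # twelfth month only

# Direct billing (IFMC's practice): the contract being worked at the
# time the leave was taken bears the entire cost.
direct_billing = {"contract_A": 0.0, "contract_B": leave_cost}

# Indirect-rate allocation: the cost is spread across all contracts in
# proportion to their share of direct labor.
total_labor = sum(direct_labor.values())
allocated = {c: leave_cost * labor / total_labor
             for c, labor in direct_labor.items()}
# Under allocation, contract B bears only its proportional share
# (about $333) rather than the full $4,000.
```

The gap between the two treatments is the amount a single contract, potentially a CMS contract, overpays when leave is billed directly.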
Therefore, some of the nearly $20,000 of questionable payments would likely be offset by an increase in the indirect cost rates; however, we could not determine what that amount would be. In total, IFMC billed CMS about $4.3 million for direct labor from June 2005 through January 2006. Because we reviewed only $152,000 of labor charges, the total labor billed by IFMC may include additional costs associated with vacation and sick leave. Labor rates in excess of contract terms – $31,000: CMS paid CGI for one labor category at rates higher than the rates allowed in its T&M contract, resulting in additional costs of about $31,000. According to CGI, it intends to issue a credit to CMS for the overbilling. Duplicate billing – $95,000: CMS paid about $95,000 for equipment that CGI billed twice. CGI discovered the double billing as a result of our audit and subsequently issued a credit to CMS. Under a cost reimbursement contract, in which a contractor bills the government for allowable costs incurred to achieve the contract objectives, the FAR requires the contractor to maintain adequate accounting systems and other documentation to support the amounts the contractor bills. For example, the FAR requires contractors to maintain documentation such as time sheets, pay information, and vendor invoices. Additionally, the FAR stipulates that supporting documentation must be maintained for 3 years after the final payment. We identified about $62.7 million of questionable payments for unsubstantiated contractor costs that were not adequately supported. For each of the questionable payments described below, a portion of the questionable payment most likely relates to allowable costs, but due to the different facts and circumstances involved, we were unable to determine whether or to what extent the costs were allowable, reasonable, and allocable.
As a result, some portion of the total amount of questionable payments we identified ultimately may be determined by CMS to be allowable and therefore not recoverable from the contractor. Unsupported contractor costs – $50.8 million: CMS paid $40.6 million to Palmetto for costs that were not adequately supported and $10.2 million to Pearson for subcontractor costs related to Palmetto that were also not adequately supported. CMS’s cost/price team’s review of Palmetto’s proposal identified numerous concerns about Palmetto’s ability to record and bill costs. Specifically, the cost/price team noted that Palmetto’s accounting practices were not compliant with several CAS requirements, its labor system did not distinguish between direct labor and vacation time, and its accounting system did not use indirect cost rates. The cost/price team also indicated that Palmetto was working on addressing these issues, but that it would probably be a lengthy process because of the numerous deficiencies. Despite the concerns about Palmetto’s ability to record and bill costs, CMS awarded Palmetto three cost reimbursement contracts, contrary to the FAR requirement that the contractor must have an adequate accounting system for recording and billing costs. In this instance, CMS’s decision to award cost reimbursement contracts to a contractor with accounting system deficiencies and CMS’s failure to establish Palmetto’s indirect cost rates inhibited our ability to audit the costs billed to CMS. In response to our request for transaction-level detailed reports of costs billed to CMS, Palmetto officials told us that its accounting systems could not generate a report that summarized the costs billed to CMS and that invoices were created manually by allocating costs (direct and indirect) from its cost centers. In addition, we were told that prior to June 2005, Palmetto did not require its salaried employees to use time sheets. 
Even though Palmetto told us its salaried employees were not required to use time sheets, Palmetto was able to provide many time sheets to support labor costs it billed. To gain an understanding of the type of information available that Palmetto could provide to support its other direct costs billed, we asked Palmetto to support the costs billed on four invoices. In response, Palmetto provided travel vouchers, subcontractor invoices, and numerous cost center reports and spreadsheets. The travel vouchers and subcontractor invoices supported the amounts billed to CMS. The cost center information represented costs that were directly allocated to the CMS contract. However, Palmetto did not support how it determined the percentages it used to allocate the costs to the CMS contract. Further, when we analyzed the cost center information, we noted several unusual transactions, including depreciation for office and cafeteria furniture, computer equipment, and basketball goals; building and lawn maintenance; and janitorial, security, and recycling services. Because these costs could reasonably benefit more than one cost objective or contract, these types of costs are generally included in a contractor’s indirect cost rates rather than billed directly to a contract. Essentially, to audit these costs, all of Palmetto’s operations—not just the costs allocated to the three CMS contracts included in our review—would need to be audited to determine whether the costs were allowable. This type of contractor oversight is normally performed by the cognizant federal agency, which for Palmetto is CMS. 
Because of the uncertainties associated with Palmetto's other direct costs (which, based on the cost center reports, appear to include significant allocations of indirect costs), we concluded that we were unable to audit the other direct costs (excluding travel and subcontractor costs) totaling $6.1 million billed to CMS prior to June 2005, when Palmetto changed its accounting system to be compliant with CAS. In addition, we could not verify the allowability and reasonableness of $34.5 million of indirect costs billed to CMS on the three Palmetto contracts covering 2004 through 2006. On a cost reimbursement contract, indirect costs can be a substantial portion of the total contract cost. The FAR requires that, within 6 months after the close of a year, contractors with cost reimbursement contracts submit a report of their final costs to their cognizant federal agency. On October 2, 2006, Palmetto submitted to its CMS contracting officer a report of its 2005 final costs. However, CMS may not have realized that Palmetto submitted this report because, in a letter from CMS's cost/price team to Palmetto dated June 4, 2007, CMS notified Palmetto that its 2004 and 2005 final cost reports were delinquent under the FAR. Further, as of October 2007, Palmetto had not provided CMS its final cost report for 2006, which is also delinquent under the FAR. Because Palmetto's final cost reports for 2004, 2005, and 2006 have not been audited by CMS, its cognizant federal agency, Palmetto's final indirect cost rates have not been established; nor have provisional indirect cost rates been established. Therefore, we did not have support to verify the allowability and reasonableness of the indirect costs that were billed. Moreover, as discussed above, it appeared that indirect costs from Palmetto's cost centers were directly allocated to the CMS contract. 
As a result, there is considerable risk that CMS may have been billed twice for Palmetto’s indirect costs—once as an allocated direct cost and again as an indirect cost. The issues described above related to Palmetto’s other direct costs and indirect costs also affected the amounts CMS paid to Pearson for Palmetto as a subcontractor. As a result, additional payments totaling $10.2 million were unsupported. Because of these numerous concerns described above and lack of documentation to verify amounts billed, CMS made questionable payments totaling $50.8 million ($6.1 million, $34.5 million, and $10.2 million), which represents the direct and indirect costs that were not adequately supported during our review. Unsupported contractor costs – $9.7 million: CMS paid about $9.7 million to TrailBlazer for costs that TrailBlazer did not adequately support related to a cost reimbursement contract ($4.8 million) and a portion of its Medicare contract ($4.9 million) paid with MMA funds. After numerous requests spanning over 7 months, TrailBlazer did not provide us with adequate documentation supporting the amounts billed to CMS for these contracts. For the cost reimbursement contract, the $4.8 million that TrailBlazer did not adequately support included $2.4 million of labor costs, $654,000 of other direct costs, and $1.8 million of indirect costs. For the labor costs, TrailBlazer told us that only its parent company could provide transaction information, which was never provided. Instead, TrailBlazer provided several reports summarizing labor and other direct costs; however, we could not use these reports because they did not reconcile to the amounts billed to CMS and often included only summary level information. For the indirect costs, generally these costs are supported with provisional or final indirect cost rates that have been audited by a contractor’s cognizant federal agency. 
However, as of October 2007, CMS, TrailBlazer's cognizant federal agency, had not ensured that TrailBlazer's indirect cost rates were audited. TrailBlazer submitted a cost report of its indirect costs for 2006 to CMS in June 2007. For the $4.9 million related to the Medicare contract, TrailBlazer provided a one-page document that summarized the total amount by type of cost, such as salaries, equipment, and fringe benefits. This was not sufficient for us to review the costs. Unsupported indirect costs – $1.2 million: CMS paid at least $1.2 million to Ketchum for indirect costs that were not adequately supported with recently audited provisional or final indirect cost rate information. From May 2004 through October 2006, CMS paid Ketchum for indirect costs based on indirect cost information from 1999. Because the FAR calls for indirect cost rates to be based on recent information and established annually, rates based on information from 1999 did not adequately support costs billed in 2004 through 2006. Further, in our review of the contract file, we noted documentation from 2004 that alerted CMS to potential issues with Ketchum's indirect cost rates—namely, that the rates were too high. In September 2006, Ketchum submitted cost reports for its 2001 through 2005 actual indirect costs. According to Ketchum officials, CMS, as the cognizant federal agency, has recently initiated an audit of this indirect cost rate information to establish final rates for these years. Unsupported labor costs – $383,000: Based on the task orders in our review, we estimated that $383,000 of BearingPoint's billings for labor and fringe benefits costs were not adequately supported. BearingPoint was unable to provide us with support for certain key elements of the labor and fringe benefits costs it billed on the five task orders in our review. Unsupported transactions – $463,000: During our audit, contractors could not adequately support several miscellaneous transactions totaling $463,000. 
Palmetto billed CMS for about $79,000 of labor and about $323,000 of Kelly Services costs, which it did not support with documentation such as time sheets or vendor invoices. Therefore, we were unable to verify the amounts billed. IFMC billed CMS for about $49,000 of other direct costs, such as referral bonuses and placement fees, that IFMC did not adequately support. In some cases, IFMC provided invoices for the costs but did not provide support that would enable us to verify that these costs solely benefited and were directly allocable to the CMS contract. BearingPoint billed CMS for about $5,000 of other direct costs, which it did not support with vendor invoices. Therefore, we could not verify the amounts billed. CGI billed CMS for about $5,000 of other direct costs, which it did not support with vendor invoices. Therefore, we could not verify the amounts billed. Maximus billed CMS for about $2,000 of other direct costs, which it did not support with documentation that would allow us to verify that these costs were directly allocable to the CMS contract. Unsupported contractor costs – $60,000: CMS paid BAH more than $60,000 for intercompany labor costs, billed on a cost reimbursement contract, for which BAH did not adequately support the rates billed to CMS. For example, on one task order, the intercompany hourly rates, on average, were nearly 14 times higher than the average hourly rate for other BAH employees and almost 6 times higher than the rate for the next highest-paid BAH employee. We noted that in a proposal review, CMS's cost/price team raised a concern that BAH's proposed intercompany hourly rates were "excessive and unreasonable" and requested that BAH provide support for the proposed rates. Even though BAH refused to provide the support to CMS, CMS awarded the contract. We noted that some of the rates BAH charged for intercompany labor exceeded the proposed rates that were questioned by CMS by, on average, 65 percent. 
BAH did not provide us support for the rates, but stated that the rates were commercial billing rates priced based on the private sector market. Unsupported labor costs – $90,000: CMS paid Ketchum for labor costs that Ketchum could not demonstrate were appropriately allocated to the CMS contract. For cost reimbursement contracts, contractors generally calculate an employee's hourly labor rate by dividing the employee's annual salary by 2,080 hours (the standard number of work hours in a year). Ketchum instead calculated standard hourly labor rates based on 1,880 hours, which increased the hourly rates to account for employees' leave time. However, this calculation method assigned costs for leave time regardless of whether the leave was actually taken, which is when the cost is actually incurred. Generally, contractors include the costs of leave time in indirect cost rates, which allocate costs proportionally to all contracts; when the indirect cost rates are finalized, billed costs are adjusted based on actual costs. Because Ketchum incorporated expected leave time in its hourly labor rates, its billings to CMS would not be adjusted based on its actual costs. Since we were not able to verify that the cost of the leave was appropriately allocated to the CMS contracts, we estimated that CMS made almost $90,000 of questionable payments as a result of Ketchum using 1,880 hours instead of 2,080 to calculate hourly rates. A portion of the $90,000 would likely be offset by an increase in indirect costs if Ketchum had allocated its leave time to its indirect cost rates. During our review, we identified certain contracting practices that increased the risk that CMS did not obtain the best value, thus leading to potential waste. Therefore, we question whether certain contract costs were an efficient use of government resources or might have been avoided. Waste involves the taxpayers in the aggregate not receiving reasonable value for money. 
Importantly, waste involves a transgression that is less than fraud and abuse. Most waste does not involve a violation of law or regulation but rather relates to mismanagement or inadequate oversight. We identified $6.6 million of questionable payments for which CMS may not have received the best value. Because waste is generally caused by mismanagement or inadequate oversight, the total amount of questionable payments we identified may not be recoverable from the contractor. Excess subcontractor costs – $1.4 million: CMS missed opportunities to save about $1.4 million associated with costs Z-Tech, IBM, and CGI billed for subcontractors under T&M contracts. According to DCAA, the “T&M payments clause,” generally included in T&M contracts, required that contractors bill the government for subcontractor labor hours at cost. GSA took the position that prime contractors should bill for subcontracted labor at the prime contractor’s own labor rates (regardless of the contractor’s cost). DCAA stated that such a practice places the government at a greater risk of paying costs higher than what prime contractors actually pay without receiving any additional benefits. Further, DCAA noted that the practice incentivizes contractors to maximize profits by subcontracting more work and forces the government to expend additional resources to monitor the subcontracted labor. We noted three instances where CMS allowed prime contractors to bill subcontractor labor hours at their own labor rates rather than the lower actual cost. For example, IBM paid about $1.1 million for its subcontractor labor but billed CMS about $2.0 million, representing an increase of about $900,000 or over 80 percent. Likewise, CGI billed CMS about $420,000, or about 60 percent, more than the amount CGI paid for subcontractor labor and Z-Tech billed CMS about $91,000, or nearly 35 percent, more than the amount Z-Tech paid for subcontractor labor. 
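The subcontractor markups described above follow directly from the difference between what each prime contractor paid and what it billed CMS. A minimal sketch of the arithmetic (the IBM amounts come from the report; the CGI and Z-Tech "paid" amounts are back-calculated from the reported excesses and percentages, so they are approximations):

```python
# Subcontractor labor: amount the prime paid vs. amount billed to CMS.
# IBM figures are from the report; CGI and Z-Tech "paid" amounts are
# inferred from the reported excess and percentage, so approximate.
billings = {
    "IBM":    {"paid": 1_100_000, "billed": 2_000_000},
    "CGI":    {"paid":   700_000, "billed": 1_120_000},
    "Z-Tech": {"paid":   260_000, "billed":   351_000},
}

for prime, b in billings.items():
    excess = b["billed"] - b["paid"]
    pct = excess / b["paid"] * 100
    print(f"{prime}: excess ${excess:,} ({pct:.0f}% above cost)")

# Total excess across the three primes -- the roughly $1.4 million
# missed savings cited in the report.
total_excess = sum(b["billed"] - b["paid"] for b in billings.values())
print(f"Total excess: ${total_excess:,}")
```

Under these figures, the per-contractor excesses ($900,000, $420,000, and $91,000) sum to about $1.4 million, matching the missed savings the report attributes to billing subcontracted labor at the primes' own rates rather than at cost.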
According to Z-Tech and CGI, both notified CMS in their contract proposals of their plans to bill subcontractor labor hours at their own labor rates (rather than at actual cost), and CMS accepted the proposals. Further, because CMS inappropriately issued IBM's T&M contract from a commercial contract, as previously discussed, the commercial contract did not contain the T&M payments clause. Because CMS did not proactively limit the contractors' billings for subcontractor services to cost, CMS missed an opportunity to save, in total, about $1.4 million. Additional costs billed by prime contractors – $4.2 million: CMS paid Palmetto and Pearson additional costs, due to subcontracting arrangements, that may have been avoided. As previously mentioned, Palmetto and Pearson each had a prime contract with CMS and subcontracted with each other for similar services. For Palmetto's prime cost reimbursement contract with CMS, Palmetto applied indirect costs and fees to the amounts it billed CMS for the subcontracted work provided by Pearson, which already included Pearson's indirect costs and fees. As a result, two layers of indirect costs and fees were applied to the same services. If CMS had not permitted this subcontracting relationship, the additional layer of indirect costs and fees applicable to Palmetto's billings, totaling $3.6 million, may have been avoided. Likewise, CMS paid Pearson an additional $630,000 in fees that may not have been paid absent the subcontract agreement. Unallowable costs included in indirect cost rates – $953,000: Prior to September 2005, CMS did not require CGI to exclude independent research and development (IR&D) costs from its indirect cost rates. The HHSAR states that IR&D costs are unallowable; however, according to CGI, CMS did not incorporate the HHSAR clause into CGI's contract. CGI agreed to prospectively revise its indirect cost rates to exclude IR&D once it was made aware of the clause. 
For fiscal year 2005, CMS paid CGI about $953,000 for IR&D costs that were included in CGI’s indirect cost rates. We were unable to calculate the financial impact prior to fiscal year 2005 because CGI did not separately quantify the IR&D component of its indirect rates prior to this point. If CMS failed to include this HHSAR clause in other contracts with CGI or other contractors, this could result in additional waste. CMS management has not allocated sufficient resources, both staff and funding, to keep pace with recent increases in contract awards and adequately perform contract and contractor oversight. This poor operating environment created vulnerabilities in the contracting process. CMS’s preaward contracting practices were driven by expediency rather than obtaining the best value and minimizing the risk to the government. Likewise, CMS was not proactive in fulfilling its cognizant federal agency responsibilities, which not only increased its own risk but the risk of other agencies that use the same contractors. Further, significant deficiencies in internal controls over contractor payments, such as inadequate policies, procedures, and training to guide its invoice review process, increased the agency’s risk of improper payments. By not timely performing contract closeout audits, CMS may have missed opportunities to detect and recover improper payments. Without immediate corrective actions and appropriate high-level management accountability to fix systemic issues, CMS will continue to be highly vulnerable to waste and improper payments. Moreover, if these issues are not promptly corrected, the Medicare claims administration contracting reform called for in MMA will result in billions of additional dollars of contracting activities being subject to these same deficient contracting practices and internal controls, and exacerbate the potential waste and improper payments. 
We are making the following nine recommendations to the Administrator of CMS to improve internal control and accountability in the contracting process and related payments to contractors. We recommend that the Administrator take the following actions: Develop policies and criteria for preaward contracting activities including (1) appropriate use of competition exemptions such as logical follow-on agreements, unusual and compelling urgency, and SBA's 8(a) program; (2) analysis to justify the contract type selected, as well as, if applicable, verification of the adequacy of the contractor's accounting system prior to the award of a cost reimbursement contract; and (3) consideration of the extent to which work will be subcontracted. Develop policies and procedures to help ensure that cognizant federal agency responsibilities are performed, including (1) monitoring CAS compliance, (2) a mechanism to track contractors for which CMS is the cognizant federal agency, and (3) coordination efforts with other agencies. Develop agency-specific policies and procedures for the review of contractor invoices so that key players are aware of their roles and responsibilities, including (1) specific guidance on how to review key invoice elements; (2) methods to document review procedures performed; and (3) consideration of circumstances that may increase risk, such as contract type or complex subcontractor agreements. Prepare guidelines for contracting officers on what constitutes sufficient detail to support amounts billed on contractor invoices to facilitate the review process. Establish criteria for the use of negative certification in the payment of a contractor's invoices that consider potential risk factors, such as contract type, the adequacy of the contractor's accounting and billing systems, and prior history with the contractor. Provide training on the invoice review policies and procedures to key personnel responsible for executing the invoice review process. 
Create a centralized tracking mechanism that records the training taken by personnel assigned to contract oversight activities. Develop a plan to reduce the backlog of contracts awaiting closeout. Review the questionable payments identified in this report to determine whether CMS should seek reimbursement from contractors. In written comments on a draft of this report (reprinted in their entirety in appendix III), CMS stated that it would take action on each of our recommendations and described steps taken and others planned to address our recommendations. At the same time, CMS disagreed with some of our findings. Where appropriate, we incorporated changes to our report to provide additional clarification. In its comments, CMS stated that the contract actions we reviewed were not representative of CMS’s normal contracting procedures and stated that the unique circumstances of the implementation of MMA, including the unusually short implementation period, required it to complete an unusually large number of contract actions on the basis of other than full and open competition. We acknowledge that the time frames for implementing MMA added schedule pressures for CMS. At the same time, the compressed time frames and the resulting contracting practices added risk to the contracting process. Many of the findings in our report are a result of the increased risk together with inadequate compensating controls to mitigate risk. Further, in its comments, CMS disagreed with our finding that it made nearly $90 million in questionable payments. CMS also stated its belief that it was appropriate for contracting officers to approve invoices for payment based on the information provided with the invoices, and that the payments were interim payments that would be audited at a later date. CMS also stated that the questionable payments we identified were based on our review of the contractors’ books and records rather than the invoice amounts. 
CMS stated that it is premature to conclude that questionable payments exist because it has not conducted a detailed audit of the invoices for the contracts in question. We disagree. We found amounts that were clearly questionable. Our report also clearly states that in some cases, due to the facts and circumstances involved, we were unable to determine whether or to what extent the costs we questioned were allowable, reasonable, and allocable. As a result, some portion of the total amount of questionable payments we identified ultimately may be determined by CMS to be allowable. However, we also state that given CMS’s poor control environment and the fact that our work was not designed to identify all questionable payments made by CMS or to estimate their extent, other questionable payments may have been made. Further, CMS did not always ensure that contractors provided adequate detail supporting the invoices to allow responsible CMS personnel to sufficiently review and approve invoices. Regarding contract audits, CMS had not demonstrated a willingness to allocate the necessary funding; thus audits have not taken place in a timely manner. In addition, while we agree that an audit of contract costs can provide a detective control to help determine whether contractor costs were proper, CMS’s reliance on an after-the-fact audit is not an acceptable substitute for the real-time monitoring and oversight of contractor costs—preventative controls—that we recommend in this report. Effective internal control calls for a sound, ongoing invoice review process as the first line of defense in preventing unallowable costs and improper payments. Finally, many of the questionable payments we identified were based on our review of invoices and documentation received by CMS at the time of payment and did not require additional detail from the contractors’ books and records. 
For example, our findings regarding indirect costs, labor categories, and unallowable travel costs could have been identified by CMS with an adequate review of the invoices and information it received from the contractors. In response to our recommendations to improve controls over its contracting process and related payments, CMS stated in its comments that it has taken or will take the following actions: (1) continue to evaluate and update its policies and procedures; (2) review its policies and criteria for the use of cost reimbursement contracts and the need for approved accounting systems; (3) review and update policies and procedures as appropriate and provide training regarding subcontracting; (4) develop appropriate procedures to support HHS in its cognizant federal agency responsibilities; (5) update its invoice review and payment policies and procedures as appropriate; (6) develop comprehensive training on the invoice review and approval process; (7) require the use of a governmentwide system to track the training taken by personnel assigned to contract oversight; (8) continue to reduce its backlog of contracts awaiting closeout; and (9) obtain contract audits related to our identified questionable payments and seek reimbursement for any costs found to be unallowable. In addition, our responses to a number of specific CMS comments are annotated and included at the end of appendix III. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare and Medicaid Services, and interested congressional committees. Copies will also be available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9471 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix IV. As shown in table 4, we identified numerous questionable payments totaling nearly $90 million that represent potentially improper, unsubstantiated, or wasteful payments. In some cases, due to the facts and circumstances involved, we were unable to determine whether or to what extent the costs were allowable, reasonable, and allocable. As a result, some portion of the total amount of questionable payments we identified ultimately may be determined by the Centers for Medicare and Medicaid Services (CMS) to be allowable and therefore not recoverable from the contractor. Given CMS's poor control environment and the fact that our work was not designed to identify all questionable payments made by CMS or to estimate their extent, other questionable payments may have been made. Because CMS sometimes used other funding sources in addition to Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) funds to pay invoices for one contract, we were not always able to identify specific costs that were paid with MMA funds. As a result, the scope of our review extended beyond payments made with MMA funds for some contracts and the questionable payments we identified may not have been paid solely with MMA funds. To determine how the Centers for Medicare and Medicaid Services (CMS) used the $1 billion Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) appropriation, we obtained obligation and disbursement transactions from CMS's financial systems for the period January 2004 through December 2006 that CMS charged against the MMA appropriation. We scanned these data files for obvious omissions or errors in key data fields. 
To verify the completeness of the files, we reconciled the total obligated amount to the MMA appropriation and reconciled the liquidated obligation amount (a field within the obligation data file) to the disbursement data totals. To determine the recipients of the MMA appropriation, we categorized disbursement data by payee category (contractors, government agencies, state government agencies, etc.) based upon the vendor name in the file. Because CMS recorded about $536 million of its disbursements to one budget object code, "other services," we were unable to use CMS's budget object codes to determine the services provided by contractors and vendors. Therefore, to categorize expenditures to contractors and vendors by activity (information technology, 1-800-MEDICARE help line, etc.), we reviewed the project titles in CMS's contracts database for all contracts with total disbursements greater than $1 million; if the contract title was unclear, we reviewed the statement of work in the contract file. We also categorized some additional contracts based on our detailed review of selected contractors. To identify additional details on the services obtained with MMA funds, we (1) analyzed contract files including statements of work, (2) analyzed interagency agreements, (3) discussed employee-related costs with CMS officials, (4) discussed payments to state agencies with CMS officials overseeing the State Health Insurance Assistance Program as well as certain state agency officials, and (5) analyzed purchase card transaction statements and supporting receipts and discussed these purchases with applicable CMS officials. 
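The completeness verification described above amounts to two reconciliations: total obligations against the appropriation, and liquidated obligations against total disbursements. A minimal sketch of those checks, using hypothetical figures rather than CMS data:

```python
# Illustrative completeness checks (all figures hypothetical, not CMS data).
MMA_APPROPRIATION = 1_000_000_000  # the $1 billion MMA appropriation

# Obligation records: (amount obligated, amount liquidated)
obligations = [
    (600_000_000, 550_000_000),
    (400_000_000, 380_000_000),
]
disbursements = [550_000_000, 380_000_000]

total_obligated = sum(obligated for obligated, _ in obligations)
total_liquidated = sum(liquidated for _, liquidated in obligations)

# Reconciliation 1: total obligations should tie to the appropriation.
obligations_reconcile = total_obligated == MMA_APPROPRIATION
# Reconciliation 2: liquidated obligations should tie to total disbursements.
disbursements_reconcile = total_liquidated == sum(disbursements)
```

In practice such checks are run over the full transaction files; any difference flags missing or duplicated records for follow-up rather than proving the data correct.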
To determine whether CMS’s contracting practices and related internal controls are adequate to avoid waste and to prevent or detect improper payments, we interviewed CMS officials including contracting officers, contracting specialists, project officers, cost/price team members, financial management officials, and Office of Acquisition and Grants Management (OAGM) management about oversight responsibilities; analyzed contract files and invoices; and assessed the sufficiency of CMS policies, procedures, and training. As criteria, we used our Standards for Internal Control in the Federal Government and the Federal Acquisition Regulation (FAR). We focused our internal control work on the contractors that received the most MMA funding, based on the CMS disbursement data. We also selected contractors with other risk factors such as billing or accounting system problems for review. Our approach resulted in the selection of 16 contractors. For these 16 contractors, we then selected contracts to use for our work based on contracts that were funded with at least $1.5 million of the MMA appropriation. As a result, we nonstatistically selected 16 contractors and 67 contracts with a total contract value of $1.6 billion. One contract selected was a Medicare contract. Because Medicare contracts were not subject to FAR, we did not include this contract in our internal control review. Therefore we evaluated CMS contracting practices and related internal controls for 66 contracts. Additionally, we obtained from CMS information related to oversight resources from fiscal year 1997 through 2006, the closeout backlog, and its cognizant federal agency duties. We discussed cognizant federal agency oversight activities with and obtained documentation such as indirect cost rate agreements or audit reports from the National Institutes of Health and the Defense Contract Audit Agency. 
To determine whether payments to contractors were properly supported as a valid use of government funds, we started with the same 67 contracts we had nonstatistically selected. We further refined the list of 67 contracts based on individual contract values and other risk factors such as contract type to arrive at a selection of 47 contracts for which we reviewed CMS payments to contractors. Because CMS sometimes used other funding sources in addition to MMA to pay invoices for one contract, we were not always able to identify specific costs that were paid with MMA funds. As a result, the scope of our review extended beyond payments made with MMA funds. This nonstatistical selection methodology resulted in a selection of CMS payments to contractors totaling $595.4 million, of which $355.5 million was paid with MMA funds. The following table summarizes the number of contracts and amounts of CMS payments to contractors included in our review, as well as the amount paid with MMA funds. For the 47 contracts, we performed forensic auditing techniques, data mining, and document analyses to select contractor costs billed to CMS to test. Because we selected individual or groups of transactions for detailed testing to determine whether costs were allowable, the amount of contract payments we tested was lower than the amount of payments included in our review shown in table 5. Following is a description of the types of procedures we used to test transactions. Labor costs: We obtained from contractors their databases of hours charged to CMS that included detailed information such as employee name, hours worked per pay period, and pay rate information. Using this information, we selected labor transactions for testing based on quantitative factors such as (1) number of hours worked, (2) dollar amount billed, (3) labor rates, or (4) anomalies in the data. 
For these nonstatistical selections, we compared the information to supporting documentation obtained from the contractor, including time sheets and payroll registers and discussed billed amounts with contractor officials. Subcontractor, travel, and other direct costs: When contractor invoices did not provide sufficient information, we obtained additional information from the contractor, such as databases of transaction-level detail, to select specific transactions based on criteria such as amount billed, vendor names, and potential duplicate payments. We compared our nonstatistical selections to applicable supporting documentation such as vendor invoices, travel vouchers and receipts, and subcontract agreements provided by the contractor. Indirect costs: We verified the appropriateness of indirect costs billed by recalculating the amounts and comparing the rates billed to provisional and final indirect cost rates and contract ceilings. Analytical procedures: We performed a variety of analytical procedures including recalculating invoice line items for mathematical accuracy and reviewing invoice amounts for trends and anomalies. We questioned payments for costs that were potentially improper by assessing whether the costs did not comply with the terms of the contract or applicable regulation (FAR, the Health and Human Services Acquisition Regulation, and Federal Travel Regulation) or that were unsubstantiated because the contractor did not provide adequate support for us to determine whether the costs were allowable. In addition, we questioned payments for which we had concerns that risks in CMS’s contracting practices may have resulted in waste. When calculating our questionable payment amounts, where applicable for costs not compliant with contract terms and regulations we added the respective indirect costs that the contractor charged on the item in question. 
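As described above, when a direct cost was questioned, the indirect costs the contractor charged on that item were added to arrive at the questioned amount. A minimal sketch of that calculation (the $10,000 charge and 25 percent rate are hypothetical):

```python
def questioned_payment(direct_cost: float, indirect_rate: float) -> float:
    """Questioned amount = noncompliant direct cost plus the indirect
    costs the contractor charged on that item (hypothetical sketch)."""
    return direct_cost * (1 + indirect_rate)

# e.g., a $10,000 unallowable travel charge burdened at a hypothetical
# 25 percent indirect cost rate
amount = questioned_payment(10_000, 0.25)
```

This is why a disallowed direct cost of $10,000 can translate into a larger questioned payment: the government also paid the overhead applied to it.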
For some of the questionable payments we identified, a portion of the cost is most likely appropriate; however, because of certain facts and circumstances involved, we were unable to determine whether or to what extent the costs were allowable, reasonable, and allocable. Therefore, we questioned the entire amount associated with the uncertainties. Because CMS sometimes used other funding sources in addition to MMA to pay invoices, the scope of our review extended beyond the payments made with MMA funds. Therefore, questionable payment amounts do not relate exclusively to MMA funds. While we identified some payments as questionable, our work was not designed to identify all questionable payments or to estimate their extent. We provided CMS a draft of this report for review and comment. CMS provided written comments, which are reprinted in appendix III of this report. We also discussed with CMS contractors any findings that related to them. We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our audit work in Washington, D.C., and Baltimore, Maryland, from March 2006 through September 2007. 1. See “Agency Comments and Our Evaluation” section. 2. The contracting authority CMS referred to (Section 1857(c)(5)) applies specifically to Medicare Advantage contracts (formerly referred to as Medicare+Choice contracts) and prescription drug plan contracts and does not apply to the types of contracts included in our review. 3. As stated in our report, CMS paid $735.4 million of its MMA funds for start-up administrative costs to contractors and vendors.
Our review included 67 contracts with a total contract value of $1.6 billion, of which $508.4 million was paid with MMA funds. Our sample covered about 69 percent of the MMA funds paid to contractors and vendors. 4. CMS compared the percentages of noncompetitively awarded and logical follow-on task orders that were included in our review to statistics it calculated for its 2007 contracting actions. The percentages related to our review are not comparable to the statistics CMS presented primarily because the percentages were calculated differently. Our percentages were based solely on the number of contracts in our review and included several years. Our calculation showed that 45 percent of contracts in our review were awarded without the benefit of competition. CMS used fiscal year 2007 contracts, which were outside the scope of our review, to arrive at a total of $255 million awarded on a noncompetitive basis for that fiscal year. Furthermore, CMS calculated the percentage of noncompetitive awards for fiscal year 2007 by comparing the number of noncompetitive contracts to the total number of contract actions. Contract actions likely include contract modifications, and one contract could have several modifications. For example, one of the large information technology contracts in our review had over one hundred modifications (contract actions). 5. CMS stated that it had to use cost reimbursement contracts because MMA was an entirely new initiative. We present the statistics about cost reimbursement contracts to add perspective due to the increased risk associated with these types of contracts. 6. As stated in our report, CMS awarded cost reimbursement contracts to Palmetto despite CMS’s own cost/price team’s determination that the contractor had numerous accounting system deficiencies. The chart CMS referred to is our summary of CMS’s fulfillment of its cognizant federal agency responsibilities. 
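The denominator issue in comment 4 can be shown with arithmetic: the same count of noncompetitive awards yields very different percentages depending on whether it is divided by the number of contracts reviewed or by the number of contract actions, which include modifications. In the sketch below, the 67-contract review base and the 45 percent result come from this report; the count of 30 noncompetitive awards is chosen so the contract-based percentage matches that reported figure, and the 600 contract actions are an invented figure for illustration.

```python
# Illustration of how the choice of denominator changes a
# noncompetitive-award percentage. The contract-action total (600) and
# the award count (30) are hypothetical.

noncompetitive_awards = 30
contracts_in_review = 67   # GAO's base: contracts in the review
contract_actions = 600     # a CMS-style base: all contract actions,
                           # including modifications

pct_by_contract = 100 * noncompetitive_awards / contracts_in_review
pct_by_action = 100 * noncompetitive_awards / contract_actions

print(f"{pct_by_contract:.0f}% of contracts reviewed")  # 45%
print(f"{pct_by_action:.0f}% of contract actions")      # 5%
```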
The chart illustrates instances in which CMS did not sufficiently assess the adequacy of the contractor’s accounting system. The chart is not intended to present a conclusion about the adequacy of the contractors’ accounting systems. 7. Because certain cognizant federal agency oversight responsibilities at HHS were assigned to CMS, as discussed in our report, we believe it is CMS’s obligation to ensure that those responsibilities are performed. In addition, we added wording to our report to clarify that we refer to CMS as the cognizant federal agency in this report because HHS delegated cognizant federal agency responsibilities to CMS. 8. We modified our report to clarify that we reviewed CMS’s Acquisition Policy – 16 Subject: Invoice Payment Procedures, August 2005. 9. CMS issued the demand letter to Maximus as a result of our preliminary audit findings. Staff members who made key contributions to this report include: Marcia Carlsen (Assistant Director), Richard Cambosos, Timothy DiNapoli, Abe Dymond, Janice Friedeborn, Leslie Jones, Jason Kelly, Steven Koons, John Lopez, Meg Mills, Kara Patton, Ronald Schwenn, Omar Torres, Ruth Walk, and Doris Yanger.

The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) established a voluntary outpatient prescription drug benefit, which is administered by the Centers for Medicare and Medicaid Services (CMS). CMS relies extensively on contractors to help it carry out its basic mission. Congress appropriated to CMS $1 billion for start-up administrative costs to implement provisions of MMA. Because CMS had discretion on how to use the appropriation, Congress asked GAO to determine (1) how CMS used the $1 billion MMA appropriation, (2) whether CMS's contracting practices and related internal controls were adequate to avoid waste and to prevent or detect improper payments, and (3) whether payments to contractors were properly supported as a valid use of government funds.
To address objectives two and three above, our review extended beyond contract amounts paid with MMA funds. CMS expended over 90 percent of the MMA appropriation by the end of December 2006. The majority, about $735 million, was paid to contractors and vendors for a variety of services. For example, because the volume of calls to the 1-800-MEDICARE help line significantly increased with the new outpatient prescription drug benefit, two contractors were paid about $234 million to support the help line. CMS also made payments to other federal agencies for services such as printing and mailing; to state agencies to fund public education efforts; for CMS employee payroll and travel costs; and for purchase card transactions to acquire office supplies, equipment, and outreach materials. CMS management has not allocated sufficient resources, both staff and funding, to keep pace with recent increases in contract awards and adequately perform contract and contractor oversight. This operating environment created vulnerabilities in the contracting process. Specifically, CMS did not adequately fulfill critical contractor oversight responsibilities, such as working with contractors to establish indirect cost rates. Further, certain contracting practices, such as the frequent use of cost reimbursement contracts, increased risks to CMS. After contract award, pervasive internal control deficiencies increased the risk of improper payments. Because CMS did not have clear invoice review guidance, invoice review procedures were often flawed or did not take place. CMS also had not taken steps to ensure contracts were closed within required deadlines and had a backlog of approximately 1,300 contracts as of September 30, 2007. GAO identified numerous questionable payments totaling nearly $90 million.
These payments were for costs not compliant with contract terms, which were potentially improper; costs for which we could not obtain adequate support to determine whether the costs were allowable; and potential waste caused by risks in CMS's contracting practices. Importantly, in some cases, because we were not able to determine whether or to what extent the costs were allowable, some of the questioned amounts may relate to allowable costs that are not recoverable. The table below summarizes the questionable payments GAO identified.
The Peace Corps was created in 1961 to help countries meet their needs for trained manpower. In addition, it was meant to provide a new expression of U.S. character and foreign policy—an idealistic sense of purpose and a means of countering the expansion of communism throughout the world. It was anticipated that through contact at the grassroots level, Peace Corps volunteers would help promote a better understanding of the American people, who in turn would better understand cultures of other peoples. The end of the Cold War presented the Peace Corps with an historic opportunity: For the first time, the countries of the former Eastern bloc became open to Western economic and technical assistance. In July 1989, the President announced that Peace Corps volunteers would teach English in Hungary. Shortly thereafter, new programs were started in Poland and Czechoslovakia, then successively throughout Central and Eastern Europe. In December 1991, the Secretary of State announced that he would like to see at least 250 volunteers placed in the states of the former Soviet Union by the end of 1992. From 1989 through 1993 the Peace Corps established 18 new country programs throughout Central and Eastern Europe and the former Soviet Union. During this period of expansion into Europe and Central Asia, the Peace Corps also opened or reopened 20 new programs in Africa, Asia, and Latin America. Together, these new programs raised the total number of countries served by Peace Corps to 93—an increase of 43 percent since 1989. These new country programs represent the largest increase of new programs since the first years of the agency’s existence. Peace Corps programs in the former Eastern bloc countries were selected in consultation with the host governments and in concert with the Department of State, which is responsible for coordinating U.S. assistance to the region. 
The Peace Corps concentrated its development assistance in the region in three program areas: the Teaching English as a Foreign Language (TEFL) program; a small business development program, which provided technical assistance in such areas as privatization, marketing, management, and business education; and a program in the environmental sector to promote environmental awareness and education. During fiscal year 1993, an average of 665 volunteers served in former Eastern bloc countries. The volunteers serving in this region are, on average, older and more experienced than the average Peace Corps volunteer. The average age of all Peace Corps volunteers is 32. The average age of volunteers serving in the region is 37, with small business development volunteers averaging 40 years of age. In addition, many of the business volunteers hold advanced degrees and have significant work experience. The Peace Corps policy and procedures manuals describe numerous, often interdependent steps for opening new overseas posts. These manuals are generally comprehensive and sound and, if followed, should result in effective programs. The Peace Corps’ Policy Manual, for example, identifies the following as necessary steps: consulting with host country officials, assessing a country’s needs, and determining which Peace Corps programs can best address those needs; negotiating agreements with host country officials regarding the Peace Corps and host country services and support to be provided; recruiting and selecting country staff; establishing administrative support services, including obtaining office space and medical and banking services; identifying and developing volunteer work sites; identifying and recruiting volunteers in the necessary numbers and with the requisite skills to implement country program plans; and designing and conducting in-country technical, language, health and safety, and cross-cultural training programs to prepare volunteers for their assignments. 
The Peace Corps’ newly developed Programming and Training System (PATS) manual provides additional guidance for starting new volunteer projects. This manual defines project criteria, field staffs’ efforts with host country and other foreign assistance agencies to identify and define the scope of work for individual projects, and volunteers’ training, placement, and support. Peace Corps guidance also provides information on the sequencing of various steps in the program development process. For example, consultations with host country officials on a country’s needs should precede an assessment of those needs; country agreements should be completed and signed before the country office is established and opened and staff arrive; and project identification and development and volunteer training programs should be in place before the volunteers’ arrival. In its attempt to quickly begin programs in Central and Eastern Europe and the former Soviet Union, the Peace Corps often did not follow its established guidance when starting its programs. Many of the steps necessary to introduce effective programs were rushed, done superficially, or not done at all. Consequently, many of the new programs we examined were poorly designed and faced a host of other problems, including the lack of qualified staff, the assignment of volunteers to inappropriate or underdeveloped projects, insufficient volunteer training, and volunteer support systems that did not work. These problems frustrated many volunteers who had joined the Peace Corps to contribute to the region’s development and contributed to a relatively high resignation rate among the volunteers. The Peace Corps relied on consultants or staff who lacked adequate cultural or language knowledge to develop sector plans. These personnel were often under pressure to work quickly and did not have time to learn about local conditions or cultivate a common understanding with host country officials. 
For example, the Peace Corps assigned a staff person on temporary duty from the Philippines to design its environment program in Poland, even though the person did not know the language and had no previous experience in the region. As a result, the program’s design did not address Poland’s environmental goals or have much impact. In another case, a consultant for the Peace Corps designed Russia’s Far East small business program without traveling to the region to assess its business situation. Once drafted, sector plans were not systematically reviewed by senior Peace Corps management officials on a timely basis. Peace Corps personnel who normally provide technical support to country programs told us that they were usually left out of the review process. When reviews did take place they were often cursory or were done after volunteers were already in the country. For example, Russia’s small business project plans were not reviewed by the Peace Corps’ technical support officials until several months after volunteers were at their sites. These critiques identified a number of gaps in the planning process, such as the failure to identify assignments before volunteers were placed at sites. This later turned out to be a critical problem. Finally, the Peace Corps’ senior management did not formally approve country program plans prior to their implementation. We were told that the Peace Corps’ regional directors are ultimately responsible for ensuring the adequacy of country program plans in their regions and are given significant authority and autonomy to ensure that their programs are effectively managed. However, they often did not carry out this responsibility, and management oversight of the country programs we visited appeared to be minimal. For example, the Peace Corps sent small business development volunteers to Uzbekistan despite the fact it had not developed a business program. 
In Poland, the small business program designed and implemented in 1991 was not approved until 1994. Peace Corps management officials told us that some newer programs required greater management support from Washington than others, and in February 1993, Washington staff began playing a direct role in managing certain problem programs in the former Soviet Union. They said actions taken included delaying the entry of volunteers into some programs to give staff more time to prepare; making staff changes; and instituting initiatives to strengthen training, programming, and staff support. Peace Corps policy manuals require that programs be sufficiently staffed in order to properly plan volunteer assignments and support volunteers at their sites. However, the Peace Corps did not always provide adequate numbers of staff to open new posts and did not assign sufficient staff to countries once the programs were underway. Compounding the problems caused by inadequate staffing was the short lead time the Peace Corps had to prepare for the arrival of the large number of volunteers assigned to the region. The Peace Corps’ recruitment of staff for these new country entries was reactive. The Peace Corps’ recruiting efforts largely consisted of sending announcements of vacancies to a few publications and foreign affairs associations. The Peace Corps also relied on former volunteers and staff from other countries to fill its staff positions. The quality of the staff was uneven in Central and Eastern Europe and the former Soviet Union countries. The Peace Corps often assigned staff that had prior Peace Corps experience but did not have necessary language skills. Also, some staff and consultants lacked the necessary cultural knowledge and technical skills. The Peace Corps’ staff training was also inadequate. Many of the staff we interviewed said they did not receive any training until after they started their assignments. 
In addition, staff we spoke with said that what training they received after they started their assignments was too general in nature and failed to prepare them for the particular challenges of their posts. Many of the staff we met told us they had little knowledge of the local language and culture before they arrived, which they said significantly hindered their effectiveness. Peace Corps staffing data indicates a pattern of shortages and turnover throughout the region, as illustrated in the following examples: Country directors resigned or were terminated within the first year in 3 of 4 countries we visited and in 9 of 18 country programs in the region. The Bulgaria program had four country directors and one acting director in a 20-month span. Three of the four countries we visited did not have an Associate Peace Corps Director (APCD) for their small business programs until after the volunteers arrived in country. In Poland, the small business APCD arrived 18 months after the first business volunteers arrived. In Bulgaria and Uzbekistan, there was no small business APCD for 6 months or more while volunteers were in the field. In Poland, the first APCD for the TEFL program was responsible for developing assignments for 60 volunteers, when the normal staff ratio is one APCD for approximately 30 volunteers. At the time of our fieldwork, many country programs in the region had other staff vacancies, including positions in Russia and Uzbekistan that had been vacant for over a year. The Peace Corps gave several reasons for having insufficient staff. First, the Peace Corps had already reached its overall staff ceiling established by the Office of Management and Budget. Second, in some instances the State Department restricted the number of U.S. personnel allowed into a country. For example, the Peace Corps was restricted to managing its three Baltic programs—Estonia, Latvia, and Lithuania—from a central office in Latvia. 
Third, the Peace Corps had difficulty attracting qualified candidates to fill a number of its staff positions. Peace Corps officials attributed the high staff turnover to two factors. First, the Peace Corps did not have enough lead time to recruit, prepare, and place staff in the field before the volunteers arrived. Once staff arrived, they had to accomplish too many tasks in a short amount of time, which led to frustration and burnout. Second, some staff were not a good match for their assignments and lacked the necessary skills and temperaments for the job. The Peace Corps did not provide adequate assignment programming and other support for volunteers in the countries we visited. In many cases, volunteer sites were not visited, assignments were ill-defined, and host-country sponsors were not identified. Host country officials were often uncertain what the Peace Corps’ goals and philosophy were, what volunteers had to offer, and what the Peace Corps expected of host country officials. In addition, sponsors did not provide what they committed to provide, such as housing, office space, and counterparts, because they lacked a clear understanding of their roles and had no written agreements. These problems eventually led many volunteers to change their assignments or leave the Peace Corps early. Designing adequate assignments for its volunteers has been a long-standing Peace Corps problem. In 1990, we reported that, worldwide, many volunteers had no positions or were underemployed, were forced to develop their own assignments, or did not receive host government support—problems we first reported in 1979. We recommended in 1990 that the Peace Corps establish procedures to improve the planning and development of volunteer assignments and projects. In response to our recommendation, the Peace Corps developed the PATS manual to improve its programming efforts. 
The manual states that all sites are to be visited and surveyed and that the roles and expectations of the local people should be clarified 3 to 6 months before volunteers arrive for training. The scope of this review did not include a worldwide evaluation to determine whether the Peace Corps’ actions corrected the assignment problems in other areas; however, site identification and development problems persisted in each of the four countries we visited. In Poland, over one-half of the first small business volunteers were moved to new assignments because of insufficient staff work on site placements and project design. These assignment problems have persisted, as some small business volunteers of subsequent groups have had difficulties finding meaningful positions. Volunteers assigned to work in the environmental sector were largely unemployed because the Peace Corps had not developed project plans that were accepted by Polish officials. Volunteers assigned to teach English in secondary schools told us that their schools had large numbers of skilled English teachers and that it was hard to justify their continued presence in the schools. In Bulgaria, half of the first class of small business volunteers left early due to frustrations over their assignments. The centerpiece of the Peace Corps’ small business program was to be the creation of regional resource centers where volunteers would provide information and advice to local businesses. However, the centers lacked local sponsorship and an independent funding source. This left the volunteers unsupported and forced them into fund-raising activities. According to the volunteers, office equipment and supplies needed to set up the centers did not arrive until some volunteers were already halfway into their 2-year assignment. Over 25 percent of the first TEFL volunteers had to be reassigned because sponsors had failed to provide them adequate housing or teaching positions. 
Although generally positive about their experience, many of the TEFL volunteers we spoke with questioned their placements, since their schools had large numbers of capable English language teachers. The Peace Corps also experienced some of the same difficulties in Russia that we saw in other countries. The main problem in Russia was a lack of local government officials’ understanding of and commitment to the program and the Peace Corps’ inability to provide volunteers with business equipment and other support. These factors, coupled with frustrations over undefined assignments and lack of housing, contributed to the departure of 30 percent of the volunteers within the first year. Local officials expected the Peace Corps to staff and equip sophisticated business centers, speak Russian proficiently, and attract joint ventures. When these expectations did not materialize, their support for the volunteers declined. Nonetheless, according to the Peace Corps, local officials continue to request more volunteers. Of the four countries’ programs we reviewed, Uzbekistan’s program experienced the most difficulties. Half of the volunteers left the program within their first year of service, and of the volunteers that remained, over half had their sites changed due to harassment by the local population, the lack of viable assignments, or the failure of sponsors to follow through with commitments to provide housing. Many volunteers were sent to sites that were not visited by Peace Corps staff. The Peace Corps failed to design a business program, and the business volunteers were thus forced to develop their own assignments. The TEFL volunteers were sent to their sites in March—near the end of the school year—and had to wait until September to start their teaching assignments. The TEFL volunteers’ situation was made worse when the preservice training instructor quit and was not replaced. 
Some of the volunteers told us they were struggling because they lacked the necessary technical training and experience to be in a classroom. Many of the volunteers told us they had made little impact because much of their time was spent finding a meaningful assignment or adequate housing. The size of a program also affected assignment programming. The Peace Corps’ rule of thumb for programs is that each APCD should manage about 30 volunteers. The Poland and Hungary programs started with a ratio of one APCD to 60 volunteers. Overall, the programs in Central Europe and the former Soviet Union averaged over 35 volunteers in their first year. The Peace Corps’ procedures call for the training of volunteers so that they can effectively carry out their assignments. The Peace Corps is expected to provide information to volunteers before their departure and intensive preservice training after they arrive in the country. This training is supposed to help volunteers serve and work effectively and has four components: language, technical, cross-cultural, and personal health and safety. Language training is to provide volunteers with reasonable proficiency to function effectively in their assignments. The technical training strategy is to teach job skills within a cultural context in conjunction with language and social customs. Many volunteers said that their language training did not prepare them for their assignments. The languages of the region are difficult to learn, so the Peace Corps officials said they focused on improving language training in the region. Nonetheless, most business and environment volunteers we interviewed said that their language skills were not sufficient to perform their jobs and the language training lacked job-related terminology. As a result, to perform their work, many of them were relying on interpreters. 
Some of the volunteers we spoke with in Uzbekistan were trained to speak Russian and Uzbek but were assigned to cities where the Tajik language is predominant. TEFL volunteers fared better because they were expected to speak English and did not have to rely on their language skills to function in their assignments. A common theme struck by the small business volunteers we spoke with throughout the region was that their technical training had little relevance to their assignments. The Peace Corps trainers taught basic U.S. business practices, which were of little use to many volunteers who already had degrees in business, accounting, and law and years of practical business experience. These volunteers said they needed to know how to adapt their expertise to local situations, but their trainers had no knowledge or appreciation of local conditions. Some of the TEFL volunteers we spoke with told us that their technical training did not prepare them for their teaching assignments. The volunteers we spoke with told us the cross-cultural training they received generally prepared them for living and working in a new culture. However, the volunteers in rural and small urban areas in Uzbekistan told us that they were totally unprepared for the physical and verbal harassment westerners, especially women, received. Many women volunteers in rural and small urban areas in Uzbekistan were targets of physical and verbal assaults, including beatings, fondling, and rock throwing. As a result several volunteers left early. The remaining women volunteers were relocated to larger, safer cities. The Peace Corps had trouble providing support to volunteers once they were at their sites. The main causes for the lack of support to volunteers were the shortage and turnover of staff and the lack of adequate resources. The unsettled staffing situation pressed Peace Corps missions to operate in a crisis-response mode. 
This crisis mode did not permit adequate time for dealing with volunteer issues in the field. Volunteers we spoke with told us it was generally up to them to solve any problems related to their assignments or living situations. Despite the programming problems and the lack of preparation and support, many volunteers told us that they were often able to find meaningful work on their own initiative and generally believed they were making some positive impact. Also, according to several U.S. assistance and private voluntary organization officials, volunteers are a low-cost means to provide assistance to the region, and host country officials appreciate Peace Corps support. Various officials said the region needs the long-term technical assistance the Peace Corps provides. Officials of other assistance agencies told us that Peace Corps volunteers generally worked well with them. Since the Peace Corps has volunteers at the grassroots level, the U.S. Agency for International Development, the U.S. Information Agency, the U.S. and Foreign Commercial Service, and U.S.-funded private voluntary organizations, among others, often relied on volunteers to provide advice and identify suitable development projects, exchange students, and business ventures. Top Peace Corps officials acknowledged that the agency had difficulties introducing programs in Central and Eastern Europe and the former Soviet Union and told us they are taking steps to address them. They said they are taking precautionary measures to ensure better planning and preparation for future programs and actions to address problems in existing programs. According to Peace Corps officials, the schedule for introducing programs into the region was overly ambitious, both in terms of time to adequately develop the programs and the funding and staff resources to support them. They said that future programs would be more thoroughly planned before their introduction and better supported when introduced. 
They also said additional emphasis would be placed on developing individual volunteer assignments and volunteer support programs. In conjunction with this increased emphasis, the Peace Corps’ office of Europe, Central Asia, and Mediterranean (ECAM) operations, which is responsible for managing country programs in the former Eastern bloc, recently clarified its planning, review, and approval processes and made them policy. ECAM also plans to request input from technical advisors when designing new volunteer projects and will develop program plans prior to sending volunteers to a country. The Department of State provided the Peace Corps fiscal year 1994 supplemental funding, which was being used to stabilize new country programs in the region. The funds were used to contract for additional consultants to help strengthen ongoing programs, among other things. The funds will also be used to place more staff in programs in the region. In addition, a recently completed Peace Corps evaluation recommended improvements in staff hiring and support practices, and a special recruitment effort was underway at the time of our review to increase the pool of small business staff candidates. The Peace Corps was also revising and testing its overseas staff development training curriculum and expanding staff training in the field. Officials said the revised curriculum would be fully developed and operational by April 1995. In addition, ECAM has hired additional staff to increase the time volunteers devote to language training, is developing additional language materials, and is making technical training more specific to the country. The Peace Corps’ entry into the former Eastern bloc did not appear to adversely affect staffing and financial resources for programs in the African, Asian and Pacific, and Inter-American regions. During fiscal years 1990-94, the Peace Corps received incremental budget increases to facilitate the start-up of new programs. 
In addition, in fiscal year 1994, the Department of State transferred $12.5 million to the Peace Corps to develop and stabilize its new programs in the former Soviet Union. For fiscal year 1995, the Peace Corps has requested $11.6 million from the State Department for these programs. Table 1 shows the Peace Corps funding for fiscal years 1989-95. According to the Conference Report on the fiscal year 1995 appropriations act, the Congress expects that the State Department will transfer funds to the Peace Corps to cover the full cost of its fiscal year 1995 operations in the newly independent states of the former Soviet Union. During expansion into the 18 countries in Central and Eastern Europe and the former Soviet Union, the Peace Corps also started 20 additional programs in the rest of the world and closed or suspended 10 programs, for a net increase of 28 country programs. From fiscal year 1989 through 1993, Peace Corps direct-hire staff increased by 10 percent, from 1,071 to 1,183. Our review of staffing allocations indicates that the African and Inter-American regions received staffing increases of 4 and 9 percent, respectively, during this period, and staffing in the Asian and Pacific region decreased by 10 percent. According to the Peace Corps, if the new Europe, Central Asia, and Mediterranean region and its new posts were excluded, the number of fully active posts would have increased from 52 to 66, a 27-percent increase, and the direct hire staff equivalent would have increased by 18, a 3.4-percent increase. The Peace Corps’ Washington staff levels remained relatively constant during this period. As the number of Peace Corps programs increased during the period, the average number of volunteers serving in countries worldwide decreased. From 1989 through 1993 the total number of volunteers increased from 5,185 to 5,351 (approximately 3 percent). Thus, with the net addition of 28 new programs, the Peace Corps added 166 volunteers. 
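The growth figures above can be verified with simple percent-change arithmetic. The sketch below uses only numbers stated in the report; small differences from the reported percentages reflect rounding:

```python
def pct_change(old, new):
    """Percent change from old to new, rounded to one decimal place."""
    return round(100 * (new - old) / old, 1)

# Direct-hire staff, fiscal years 1989-93 (reported as a 10-percent increase)
staff_growth = pct_change(1071, 1183)      # ~10.5 percent

# Volunteers worldwide, 1989-93 (reported as approximately 3 percent)
volunteers_added = 5351 - 5185             # net addition of 166 volunteers
volunteer_growth = pct_change(5185, 5351)  # ~3.2 percent
```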
During this period, the average ratio of volunteers to country programs decreased from 80 to 57. (Twelve of the new programs did not begin until fiscal year 1993. Because the Peace Corps’ policy is generally to phase in the agreed-upon contingent of volunteers over a 2-year period, 11 of the 12 programs had only half their volunteer contingents in place in 1993.) (See table 2.) Peace Corps officials attributed the reduction in the average number of volunteers per country to factors other than the initiation of programs in the former Eastern bloc. For example, programmatic assessments made prior to 1990 had already suggested reductions of over 250 volunteers in Central America and the Caribbean (Belize, Costa Rica, Guatemala, Honduras, Jamaica, and Haiti). Also, 600 positions became available with the closure of three large programs (Liberia, the Philippines, and Zaire) and the temporary suspension of 11 other programs for safety and security reasons. Notwithstanding the Peace Corps’ earlier development of the PATS manual and its current initiatives, we recommend that the Director of the Peace Corps ensure that the written procedures are followed so that (1) program plans are well-developed, (2) volunteers have received adequate preservice training, and (3) viable assignments are in place before volunteers arrive. In commenting on a draft of this report, the Peace Corps stated that its programs in Central and Eastern Europe and the states of the former Soviet Union have been a difficult challenge. The agency indicated that some problems were attributable to unique circumstances in this region, but acknowledged that it had brought some problems on itself. The agency’s comments, which are reprinted in their entirety in appendix I, discuss the steps the Peace Corps has taken recently in an effort to improve programming, training, and staffing in the region. 
We conducted our review at the Peace Corps’ headquarters in Washington, D.C., and in Poland, Bulgaria, Russia, and Uzbekistan. To assess the Peace Corps’ new country entry processes, coordination, and volunteer assignment and support issues, we reviewed current and historical records and interviewed numerous Peace Corps officials, including former officials who were primarily responsible for opening new programs in the region. We reviewed Peace Corps manuals and policy documents and analyzed budget, staffing, and volunteer data. We also met with officials from various U.S. agencies responsible for coordinating assistance to the region, including the Department of State, the Agency for International Development, the U.S. Information Agency, and the Office of Management and Budget. We selected the Poland, Bulgaria, Russia, and Uzbekistan programs on the bases of their differing sizes, dates of introduction, and geographical and cultural diversity, and because of the countries’ differing stages of development. Poland was one of the first programs in the region, and the largest. Bulgaria was a smaller program, introduced after Poland. Russia was the largest program in the former Soviet Union. Uzbekistan was a later entry, and representative of entries into central Asia. The four countries were selected in consultation with the Peace Corps. In each of the four countries we visited, we obtained pertinent documents and interviewed Peace Corps staff, U.S. embassy officials, and representatives of private voluntary organizations that worked with volunteers. In each country, we interviewed a large number of Peace Corps volunteers at their sites. We also visited several volunteers’ project sites and interviewed the host-country people with whom the volunteers lived and worked. 
To determine whether the Peace Corps’ expansion into the former Eastern bloc came at the expense of other regions’ programs, we examined budget and staffing data and spoke with senior Peace Corps officials responsible for managing those programs. However, we did not conduct work in the other regions. We conducted our review between September 1993 and July 1994 in accordance with generally accepted government auditing standards. We plan no further distribution of this report until 30 days after its issue date, unless you publicly announce its contents earlier. At that time, we will send copies to the Director of the Peace Corps, the Secretary of State, the Administrator of the Agency for International Development, and the Director of the Office of Management and Budget. Copies will also be made available to other interested parties upon request. If you or your staffs have any questions about this report, please call me on (202) 512-4128. Major contributors to this report were David R. Martin, Patrick A. Dickriede, Edward D. Kennedy, and Peter J. Bylsma. 
| Pursuant to a congressional request, GAO reviewed the Peace Corps' processes and procedures for starting programs in Central and Eastern Europe and the states of the former Soviet Union, focusing on: (1) the adequacy of the Peace Corps' planning and staffing procedures; (2) whether the Peace Corps provided volunteers with adequate assignments, training, and other support; and (3) whether the expansion into former Eastern bloc countries came at the expense of other regional programs. GAO found that: (1) although the Peace Corps has comprehensive, sound written procedures for planning and implementing new programs and preparing volunteers, the Peace Corps did not follow normal procedures in its haste to start programs in former Eastern bloc countries; (2) serious difficulties due to poor design and inadequate volunteer guidance, training, and support limited the new programs' effectiveness in these countries and led to high volunteer turnover; (3) despite these problems, many volunteers believed that they had a positive impact on the people they served; (4) it is too soon to tell if the Peace Corps' actions to correct problems in the Eastern bloc programs will be effective; and (5) other regions' funding and staffing have not been affected by the new programs. |
As part of our audit of the fiscal years 2007 and 2006 CFS, we evaluated the federal government’s financial reporting procedures and related internal control, and we followed up on the status of corrective actions taken by Treasury and OMB to address open recommendations relating to the processes used to prepare the CFS that were in our previous reports. In our audit report on the fiscal year 2007 CFS, which is included in the fiscal year 2007 Financial Report of the United States Government, we discussed the material weaknesses related to the federal government’s processes used to prepare the CFS. These material weaknesses contributed to our disclaimer of opinion on the accrual basis consolidated financial statements and also contributed to our adverse opinion on internal control. We performed sufficient audit procedures to provide the disclaimer of opinion on the accrual basis consolidated financial statements in accordance with U.S. generally accepted government auditing standards. This report provides the details of the material weaknesses we identified in performing our fiscal year 2007 audit procedures related to the processes used to prepare the CFS and our recommendations to correct these weaknesses, as well as the status of corrective actions taken by Treasury and OMB to address recommendations in our previous reports. We requested comments on a draft of this report from the Director of OMB and the Secretary of the Treasury or their designees. OMB provided oral comments, which are described in the Agency Comments section of this report. Treasury’s comments are reprinted in appendix II and are also described in the Agency Comments section. Over the past several years, Treasury has developed and documented numerous standard operating procedures (SOP) for preparing the CFS, which have substantially addressed GAO’s recommendation for Treasury to develop and document policies and procedures for preparing the CFS. 
However, one of Treasury’s SOPs entitled “Standard Operating Procedures for Preparing the Financial Report of the U.S. Government” is incomplete. For example, certain steps Treasury performs to prepare the CFS are not documented in this SOP and, for the key practices that are documented, the SOP is unclear as to who is responsible for performing the procedures. In connection with its role as preparer of the CFS, Treasury management is responsible for developing and documenting detailed policies, procedures, and practices for preparing the CFS and ensuring that internal control is built into and is an integral part of the related process. GAO’s Standards for Internal Control in the Federal Government calls for clear documentation of policies and procedures. Without adequately documented policies and procedures, standards and practices may not be consistently followed or followed at all. This potential for inconsistency increases the risk that errors in the compilation process could go undetected and could result in an incomplete and inaccurate summarization of data within the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to enhance and fully document all practices referred to in the SOP entitled “Standard Operating Procedures for Preparing the Financial Report of the U.S. Government” to better ensure that practices are proper, complete, and can be consistently applied by staff members. For many years, we have reported that Treasury had not established a formal process to ensure that the financial statements, related notes, stewardship information and supplemental information in the CFS were presented in conformity with GAAP. 
Over the past several years, Treasury has developed a formal process that has significantly improved its ability to timely identify GAAP requirements, modify its closing package requirements to obtain information needed, assess the effect of omitted disclosures, and document decisions reached and the rationale for such decisions. However, there continue to be some instances where disclosures are not presented in conformity with GAAP. A contributing factor to the continued instances of nonconformity with GAAP is that the process Treasury developed to compile the CFS does not include adequately documenting its (1) timely assessment of the relevance, usefulness, or materiality of information reported by the federal agencies for use at the governmentwide level, (2) consideration of relevant accounting standards other than those issued by FASAB, and (3) final decisions regarding the inclusion or exclusion of federal agencies’ disclosure information in the existing notes to the CFS. As part of the process Treasury developed, it created a checklist containing FASAB requirements for use as a tool to help determine if disclosures in the CFS are in conformity with GAAP. Due to the way the checklist was designed, Treasury primarily used it as a planning tool to ensure that it requested in the closing package the data that Treasury would need from federal agencies to report in compliance with GAAP. Although this is a useful and important first step, we found that Treasury’s checklist was limited by its design and was not used by staff to help ensure that the published CFS was in conformity with GAAP in all material respects. As a result, the checklist did not adequately assist Treasury in ensuring that all GAAP required disclosures were adequately disclosed in the CFS or documenting why certain disclosures were excluded. 
We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to enhance its checklist or design an alternative and use it to adequately and timely document Treasury’s (1) assessment of the relevance, usefulness, or materiality of information reported by the federal agencies for use at the governmentwide level; (2) consideration of relevant accounting standards other than those issued by FASAB; and (3) final decisions regarding the inclusion or exclusion of federal agencies’ disclosure information in the existing notes to the CFS. The federal government reports a unified budget deficit (budget deficit) in the Reconciliation of Net Operating Cost and the Unified Budget Deficit and in the Statement of Changes in Cash Balance from Unified Budget and Other Activities. The budget deficit is calculated by subtracting actual budget outlays from actual budget receipts. Budget outlays consist of federal agencies’ outlay amounts, that is, gross outlays net of offsetting collections and distributed offsetting receipts at the agency level, and undistributed offsetting receipts at the governmentwide level. Federal agencies also report net outlays in their SBRs. Both the net outlays as a component of the budget deficit reported in the CFS and as reported in the federal agencies’ SBRs should generally match the budget outlays reported in the Budget of the United States Government. For several years, we have reported material unreconciled differences between the total net outlays reported in selected federal agencies’ SBRs and Treasury’s central accounting records used to compute the budget deficit reported in the CFS. OMB and Treasury have continued to work with federal agencies to reduce these material unreconciled differences. However, in fiscal year 2007, billions of dollars of unreconciled differences still existed in this and other components of the budget deficit. 
One way OMB has been working with federal agencies has been to require the agencies, beginning with the first quarter in fiscal year 2007, to submit to OMB an analysis and reconciliation, based on certain criteria, of any material differences between the federal agency’s quarterly unaudited SBR and the agency’s related quarterly Standard Form (SF) 133 Report on Budget Execution and Budgetary Resources (SBR to SF 133 reconciliations). Agencies’ SF 133s are submitted to Treasury and serve as the main source for the CFS budget reporting and reconciliation. Material unreconciled differences remained at the end of fiscal year 2007 between the agencies’ SBRs and their related SF 133s. OMB conducted further analysis on the agencies’ quarterly SBR to SF 133 reconciliations and determined that many of these differences related to the recording of distributed offsetting receipts. Although distributed offsetting receipts are included in the net outlay calculation in federal agencies’ SBRs, as well as in the computation of the budget deficit in the CFS, they are not included as part of the SF 133s, and as such are not being identified and addressed by the agencies in the quarterly reconciliation process. OMB is aware that the reporting of distributed offsetting receipts contributes to many of the material differences in net outlays and is currently determining how to reconcile distributed offsetting receipts included in the net outlay calculation of federal agencies’ SBRs and the amounts included in the computation of the budget deficit in the CFS. Until the federal government has effective processes and procedures in place for identifying and resolving material differences between the total net outlays reported in federal agencies’ SBRs and the records used to prepare the CFS, the actual extent of such differences and their effect on the CFS will be unknown. 
We recommend that the Director of OMB direct the Controller of OMB’s Office of Federal Financial Management, in coordination with Treasury’s Fiscal Assistant Secretary, to develop formal processes and procedures for identifying and resolving any material differences in distributed offsetting receipt amounts included in the net outlay calculation of federal agencies’ SBRs and the amounts included in the computation of the budget deficit in the CFS. Treasury developed the Governmentwide Financial Report System (GFRS) to collect federal agencies’ audited financial statement information to prepare the CFS. Federal agencies enter their audited financial information into GFRS, and Treasury exports the data into a database and then into various spreadsheets in order to compile the CFS. Treasury did not maintain adequate control over the spreadsheets used to summarize and array financial data for presentation in the CFS. Specifically, Treasury’s processes and procedures for management and control of the spreadsheets were largely undocumented. In addition, Treasury had not established adequate controls to ensure that certain key spreadsheets were (1) protected from inadvertent change and (2) documented to facilitate detection and tracking of changes to key formulas and data. Further, the columns within many spreadsheets either were not labeled or had labels that were not aligned with the data they contained. GAO’s Standards for Internal Control in the Federal Government calls for controls to be in place to safeguard financial information and help reduce the risk of errors, misuse, or unauthorized alteration. Vendor documentation also provides guidance on maintaining and protecting spreadsheet integrity. OMB Circular No. A-127 requires that appropriate internal control be applied to all financial management system inputs, processing, and output. 
It also requires that financial management systems and associated instructions for maintenance and use be clearly documented in sufficient detail to permit an individual with appropriate background knowledge to obtain a comprehensive understanding of the entire operation of the system. Inadequate spreadsheet controls increase Treasury’s risk that its financial reporting data will be inaccurate, and that these inaccuracies will not be prevented or detected in a timely manner. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to establish effective internal control to ensure the spreadsheets used to compile the CFS are (1) protected from inadvertent change and (2) documented to facilitate detection and tracking of changes to key formulas and data. Further, we recommend that columns within key spreadsheets be labeled and properly aligned to reflect the data contained within. Treasury, in coordination with OMB, has not established processes for monitoring and assessing the effectiveness of internal control over the processes used to prepare the CFS. According to OMB Circular No. A-123, management has a fundamental responsibility to develop and maintain effective internal control. Effective internal control provides reasonable assurance that significant weaknesses in the design or operation of internal control, that could adversely affect the entity’s ability to meet its objectives, would be prevented or detected in a timely manner. In addition, periodic reviews, reconciliations, or comparisons of data should be included as part of the regular assigned duties of personnel. Periodic assessments should be integrated as part of management’s continuous monitoring of internal control, which should be ingrained in the entity’s operations. If an effective continuous monitoring program is in place, it can leverage the resources needed to maintain effective internal controls throughout the year. 
In addition, GAO’s Standards for Internal Control in the Federal Government states that internal control is a major part of managing an organization and should include monitoring. Without effective monitoring and assessment of internal control, there is a risk that errors in the compilation process could go undetected and could result in an incomplete and inaccurate summarization of data within the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement effective processes for monitoring and assessing the effectiveness of internal control over the processes used to prepare the CFS. As indicated in our most recent audit report on the CFS, and since fiscal year 2003, there have been limitations on the scope of our work that contribute to our disclaimer of opinion on the accrual basis consolidated financial statements. Beginning in fiscal year 2003, Treasury and OMB accelerated the time frame for preparing the CFS. Consequently, we in turn accelerated the time frame for issuing our reports on the audits of the CFS. For fiscal year 2007, we reported that Treasury was unable to provide the final accrual basis consolidated financial statements and certain supporting documentation in time for us to complete all of our planned auditing procedures related to the compilation of these financial statements. We also reported that personnel at Treasury’s Financial Management Service had excessive workloads that required an extraordinary amount of effort and dedication to compile the CFS and that quarterly compilations were not performed at the governmentwide level. As a result, almost all of the compilation effort is performed during a condensed time period at the end of the year. 
Federal agencies are required to produce unaudited quarterly financial statements and remit them to OMB; however, Treasury does not use these quarterly financial statements or request any other interim financial information that would enable it to perform some of the compilation effort before the end of the year. For example, if a federal agency changed the manner in which it was reporting certain information in its financial statements, by obtaining and utilizing the agency’s quarterly financial statements, Treasury would be aware of this change and could evaluate any effect this might have on the CFS during the year rather than during the condensed time period at the end of the year. Until such time that interim financial information is obtained and utilized in some capacity to assist Treasury in overcoming the existing resource and time constraints, we believe that Treasury will continue to face significant challenges in being able to provide accrual basis consolidated financial statements and supporting documentation in time for us to complete our planned auditing procedures. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement alternative solutions to performing almost all of the compilation effort at the end of the year, including obtaining and utilizing interim financial information from federal agencies. In oral comments on a draft of this report, OMB stated that it generally agreed with the new findings and related recommendations in this report. In written comments on a draft of this report, which are reprinted in appendix II, Treasury stated that it agrees with the new findings and related recommendations. This report contains recommendations to the Secretary of the Treasury and the Director of OMB. The head of a federal agency is required by 31 U.S.C. 
§ 720 to submit a written statement on actions taken on these recommendations. You should submit your statement to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform within 60 days of the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Homeland Security and Governmental Affairs; the Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Senate Committee on Homeland Security and Governmental Affairs; the House Committee on Oversight and Government Reform; and the Subcommittee on Government Management, Organization, and Procurement, House Committee on Oversight and Government Reform. In addition, we are sending copies to the Fiscal Assistant Secretary of the Treasury, the Deputy Director for Management of OMB, and the Acting Controller of OMB’s Office of Federal Financial Management. Copies will be made available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me on (202) 512-3406 or [email protected]. Key contributors to this report are listed in appendix III. This appendix includes recommendations that were open at the beginning of our fiscal year 2007 audit from five of our previous reports: Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. Government Needs Improvement, GAO-04-45 (Washington, D.C.: Oct. 
30, 2003); Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. Government Needs Further Improvement, GAO-04-866 (Washington, D.C.: Sept. 10, 2004); Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. Government Continues to Need Improvement, GAO-05-407 (Washington, D.C.: May 4, 2005); Financial Audit: Significant Internal Control Weaknesses Remain in Preparing the Consolidated Financial Statements of the U.S. Government, GAO-06-415 (Washington, D.C.: Apr. 21, 2006); and Financial Audit: Significant Internal Control Weaknesses Remain in the Preparation of the Consolidated Financial Statements of the U.S. Government, GAO-07-805 (Washington, D.C.: July 23, 2007). Recommendations that were closed in prior reports are not included in this appendix. This appendix includes the status of the recommendations according to the Department of the Treasury (Treasury) and the Office of Management and Budget (OMB) as well as our own assessments. Explanations are included in the status of recommendations per GAO when Treasury and OMB disagreed with our recommendation or the status of a recommendation. Of the 81 recommendations relating to the processes used to prepare the consolidated financial statements of the U.S. government (CFS) that are listed in this appendix, 35 were closed and 46 remained open as of December 10, 2007, the date of our report on the audit of the fiscal year 2007 CFS. In addition to the above contact, the following individuals made key contributions to this report: Lynda Downing, Assistant Director; Mickie Gray; David Hayes; Sharon Kittrell; Dragan Matic; Maria Morton; and Taya Tasse. | For the past 11 years, since GAO's first audit of the consolidated financial statements of the U.S. government (CFS), certain material weaknesses in internal control and in selected accounting and financial reporting practices have prevented GAO from expressing an opinion on the CFS. 
GAO has consistently reported that the U.S. government did not have adequate systems, controls, and procedures to properly prepare the CFS. GAO's December 2007 disclaimer of opinion on the fiscal year 2007 accrual basis consolidated financial statements included a discussion of continuing control deficiencies related to the preparation of the CFS. The purpose of this report is to (1) provide details of continuing material weaknesses, (2) recommend improvements, and (3) provide the status of corrective actions taken to address the 81 open recommendations related to the preparation of the CFS that GAO reported in July 2007. GAO identified continuing and new control deficiencies during its audit of the fiscal year 2007 CFS that relate to the federal government's processes used to prepare the CFS. These control deficiencies contribute to material weaknesses in internal control regarding the U.S. government's inability to (1) adequately account for and reconcile intragovernmental activity and balances between federal agencies; (2) ensure that the CFS was consistent with the underlying audited agency financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles; and (3) identify and either resolve or explain material differences that exist between certain components of the budget deficit reported in the Department of the Treasury's records, used to prepare the Reconciliation of Net Operating Cost and Unified Budget Deficit and Statement of Changes in Cash Balance from Unified Budget and Other Activities, and related amounts reported in federal agencies' financial statements and underlying financial information and records. The control deficiencies GAO identified during its tests of the processes used to prepare the fiscal year 2007 CFS involved the following areas: documenting a key standard operating procedure for preparing the CFS, reporting in conformity with U.S. 
generally accepted accounting principles, reconciling distributed offsetting receipts, maintaining adequate control over spreadsheets used in preparing the CFS, monitoring internal control over the processes used to prepare the CFS, using interim financial information in the CFS preparation process, and various other control deficiencies that were identified in previous years' audits but remained in fiscal year 2007. Of the 81 open recommendations GAO reported in July 2007 regarding the processes used to prepare the CFS, 35 were closed and 46 remained open as of December 10, 2007, the date of our report on our audit of the fiscal year 2007 CFS. GAO will continue to monitor the status of corrective actions taken to address the 10 new recommendations and the new remaining balance of 56 open recommendations during its fiscal year 2008 audit of the CFS. |
As we reported in June 2015, in fiscal year 2013, 332,934 veterans received TDIU benefits, an increase of 22 percent since fiscal year 2009. Overall, TDIU beneficiaries make up a substantial portion (45 percent) of the group of all veterans who receive benefit payments at the 100 percent disability compensation rate. The TDIU beneficiary population increased in each of the 4 fiscal years we examined. Moreover, the number of older beneficiaries (aged 65 and older) increased in each of those years, and by fiscal year 2013 older beneficiaries represented the majority (54 percent) of the TDIU population—a 73 percent increase from fiscal year 2009. Further, of these older beneficiaries, 56,578 were 75 years of age and older in fiscal year 2013, while 10,567 were 90 years of age and older. The increase in beneficiaries over age 65 was mostly attributed to new beneficiaries who were receiving the benefit for the first time, as shown in figure 1. Between 2009 and 2013, the number of new older beneficiaries more than doubled to 13,259. Of these new older beneficiaries, 2,801 were aged 75 and over, while 408 were 90 and over. We estimated that, in fiscal year 2013, the TDIU benefit was a $5.2 billion supplemental payment above what beneficiaries would have received in the absence of TDIU benefits. Although VA does not track the overall costs of TDIU benefits, we used disability compensation payment rate information, data on the TDIU beneficiary population, and data on the population of all new beneficiaries to calculate this estimate. In our June 2015 report, we found that VBA's guidance, quality assurance approach, and income verification procedures do not ensure that TDIU decisions are well supported.
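The estimation approach described above can be illustrated with a rough sketch: the TDIU supplement for each beneficiary is the difference between compensation at the 100 percent rate (paid under TDIU) and compensation at the veteran's underlying schedular rating. The monthly rates and beneficiary counts below are hypothetical placeholders, not VA figures; GAO's actual estimate drew on detailed payment rate and population data.

```python
# Hedged sketch of the TDIU supplement estimate. The supplement per beneficiary
# is the 100% disability compensation rate (paid under TDIU) minus the rate the
# veteran's schedular rating alone would pay. Rates and counts below are
# ILLUSTRATIVE assumptions, not actual VA compensation tables or populations.

MONTHLY_RATE = {60: 1200.0, 70: 1500.0, 100: 3100.0}  # hypothetical $/month

def annual_tdiu_supplement(beneficiaries_by_rating):
    """Sum, across underlying schedular ratings, the annual payments above
    what each group's rating alone would have paid."""
    total = 0.0
    for rating, count in beneficiaries_by_rating.items():
        supplement_per_vet = (MONTHLY_RATE[100] - MONTHLY_RATE[rating]) * 12
        total += supplement_per_vet * count
    return total

# Hypothetical population: 200,000 veterans rated 70%, 100,000 rated 60%.
estimate = annual_tdiu_supplement({70: 200_000, 60: 100_000})
print(f"${estimate / 1e9:.2f} billion")  # prints "$6.12 billion"
```

The actual GAO figure ($5.2 billion for fiscal year 2013) reflects the real rate tables and the observed distribution of beneficiaries across ratings, dependents, and entry dates.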
Specifically, we identified the following challenges in decision-making procedures: Incomplete guidance on how to determine unemployability: VBA provides guidance to rating specialists to help them determine if veterans meet the eligibility requirements for TDIU benefits. This guidance tasks rating specialists with determining veterans' unemployability based upon the evidence at hand; it also recognizes that the process is subjective and involves professional interpretation. However, the guidance VBA provides on which factors to consider when determining whether a veteran is “unemployable” is incomplete in three ways, creating potential variation in TDIU claim decisions. First, rating specialists in some (5 of 11) of the discussion groups we held at five regional offices disagreed on whether they are permitted to consider additional factors not specifically mentioned in VBA's guidance, such as enrollment in school, education level, or prior work history, when assessing an applicant's employability. For example, one rating specialist recently reviewed a claim for TDIU that was submitted by a veteran suffering from traumatic brain injury. The rating specialist found that the veteran was enrolled in school part time and earning A's in engineering classes, which the specialist felt clearly demonstrated employability. However, another rating specialist within the group stated that the veteran's enrollment in classes would not be part of her decision-making. Second, rating specialists noted that for those factors that they can consider in their decision-making process, such as whether the veteran receives Social Security Disability Insurance benefits, the guidance is silent on which, if any, should be given greater priority or weight. We confirmed that this information was not in the manual or guidance provided by VBA.
Rating specialists in the majority (7 of 11) of the discussion groups specifically noted that they could come to opposite decisions when reviewing the same evidence because they weighted certain factors differently. For example, a rating specialist told us that a medical opinion was always weighted more heavily than all other evidence in the veteran's file, while another specialist expressed hesitancy to rely too much on the examiner's opinion. Third, the guidance does not provide instruction on how to separate extraneous factors from allowable ones. Findings from our case file review illustrate this issue: One file described a 77-year-old veteran claiming TDIU benefits for blindness that was caused by (1) a service-connected disability, (2) glaucoma, and (3) macular degeneration. However, because all three conditions related to the veteran's quality of vision, the rating specialist noted in the file her difficulty separating the effect of the service-connected disability from the non-service-connected glaucoma and macular degeneration, which were related to the veteran's age. In light of these challenges, in our June 2015 report, we recommended that VA instruct VBA to update the guidance to clarify how rating specialists should determine unemployability when making TDIU benefit decisions. This update could clarify whether factors such as enrollment in school, education level, and prior work history should be used and, if so, how to consider them, and whether to assign more weight to certain factors than others. VA concurred with this recommendation and stated that VBA will review and identify improvements to TDIU policies and procedures to provide clearer guidance, including the extent to which age, education, work history, and enrollment in training programs are factors claims processors must address. VA anticipates that its Compensation Service will complete this review and provide options to VBA for a decision by the end of January 2016.
Format and delivery of guidance is inefficient: Rating specialists in the majority (7 of 11) of our discussion groups at five regional offices reported that VBA's guidance for reviewing TDIU claims is formatted and delivered in ways that make it difficult for them to efficiently complete their decision-making responsibilities. For example, TDIU guidance is delivered using multiple formats, including—but not limited to—manuals, policy and procedure letters, monthly bulletins, and e-mails. Thus, rating specialists lack a definitive source for TDIU benefit decision guidance. In addition, VBA officials acknowledged the manual for TDIU benefit decisions is outdated and stated they issue interim guidance in many forms between manual updates because such updates are time-consuming and difficult to do on a regular basis. VBA officials also told us they have completed two of the four stages for a web portal that will house all existing guidance and will subsequently consolidate the guidance into one processing manual, which they are in the process of rewriting. Officials told us they plan to complete the consolidation by the end of fiscal year 2015. Quality assurance approach may not be comprehensive: VBA's quality assurance approach—accomplished mainly through its Systematic Technical Accuracy Review (STAR)—may not be providing a comprehensive assessment of TDIU claim decisions. Specifically, the agency's current approach does not allow it to identify variations in these decisions or ascertain the root causes of any variation that may exist. VBA's quality assurance standards indicate that for the quality assurance officer to decide that the rating specialist made an error, the error must be clear and undebatable; the officer cannot substitute his or her professional opinion for that of the rating specialist who made the original decision.
Because of this high standard, a STAR review of a sample of claims finalized during the first three quarters of fiscal year 2014 determined that nearly 95 percent of TDIU claims (872 of 920) were error-free. Of the 48 claims found to contain an error, all the errors were found to be “procedural,” such as an incorrect date for the onset of unemployability. No “decisional” errors—that is, errors on the decision to grant or deny the benefit—were found. According to VBA officials, it is unlikely that they will find many decisional errors because there is so much individual judgment allowed in TDIU claim decisions, and VBA's quality assurance standards do not allow for the reevaluation of the professional opinion of the original rating specialist. While we recognize that TDIU benefit decisions have an inherently subjective component, in June 2015, we recommended that VA identify other quality assurance approaches to comprehensively assess TDIU benefit claim decisions. The approach should assess the completeness, accuracy, and consistency of decisions and ascertain the root causes of any significant variation so that VBA can take corrective actions as appropriate. This effort could be informed by the approaches VBA uses to assess non-TDIU claims. For example, as we reported in 2014, VBA conducted a targeted review of military sexual trauma claims using a consistency questionnaire to test rating specialists' understanding and interpretation of policies in response to concerns that related post-traumatic stress disorder claims were not being accurately decided. VA concurred with this recommendation and stated that quality assurance staff would add TDIU-specific questions to the In-Process Review checklist at the regional offices by September 2015. Based on the results of the reviews, VA stated that VBA will determine the most effective approach for assessing the accuracy and consistency of TDIU decisions.
Self-reported income eligibility information is not verified: VBA requires TDIU claimants and beneficiaries to provide information on their employment earnings, but by not using third-party data sources to independently verify self-reported earnings, it places the benefits at risk of being awarded to ineligible veterans. To begin receiving and remain eligible for TDIU benefits, veterans must meet certain income eligibility requirements. Rating specialists use information provided by claimants to request additional information from employers and, when possible, verify the claimant's reported income, especially for the year prior to applying for the benefit. However, VBA officials and our file review indicated that employers provide the requested information only about 50 percent of the time. If VBA does not receive verification from a veteran's employer after multiple attempts, it accepts the veteran's claimed earnings. VBA previously conducted audits of existing beneficiaries' reported income by obtaining income verification matches from Internal Revenue Service (IRS) earnings data through an agreement with the Social Security Administration (SSA). However, the agency is no longer doing so, despite having standing agreements with the IRS and SSA. In 2012, VBA suspended income verification matches to allow for the development of a new system that would allow for more frequent, electronic information sharing. However, that system was never developed. To better ensure beneficiaries' eligibility, in June 2015, we recommended VA instruct VBA to verify the self-reported income provided by veterans (1) applying for TDIU benefits and (2) undergoing the annual eligibility review process by comparing such information against IRS earnings data.
VA concurred with this recommendation and stated that VBA is developing an upfront verification process including expanding the data sharing agreement with SSA, which enables VBA to receive federal tax information via an encrypted electronic transmission through a secure portal. VBA expects to implement this new process for TDIU claimants by January 2016. With regard to the options for revising TDIU eligibility requirements and the benefit structure, in our June 2015 report, we identified a number of options proposed by others as described in table 1. More specifically, six options focused on revising eligibility such as changing existing requirements in various ways, for example, setting age limits, lowering the disability rating requirement, or increasing the income threshold. A seventh option would affect the benefit structure by lowering—but not immediately eliminating—the TDIU benefit payments as beneficiaries earn income beyond the eligibility limit. Based on interviews with selected experts and representatives of veterans service organizations (VSO), we identified a range of potential strengths and challenges associated with each option. The experts and VSO representatives commonly mentioned the equity of the proposed change, an increase or decrease of VA’s management and administration efforts and cost, and the effect on veterans as potential strengths and challenges. For example, a couple of the options present possible opportunities for VA to better target TDIU benefits to veterans who are unemployable, but implementation of these options could pose challenges in ensuring that all veterans are treated equitably. Each of the seven options and the potential strengths and challenges identified by stakeholders that we interviewed are summarized in our report. 
In addition to these options, in its 2012 report, the Advisory Committee on Disability Compensation made recommendations to VA regarding potential revisions to the TDIU benefit, and while VA concurred with those recommendations, it has yet to take actions in response to them. Specifically, the committee recommended that the agency (1) study whether age should be considered when deciding if a veteran is unemployable and (2) require a vocational assessment for all TDIU applicants. Taking the committee's advice into consideration could better position the agency to meet federal internal control standards. In its comments to the committee, VA noted that before it could proceed with the vocational assessment requirement, it needed to complete a study on whether it was possible to disallow TDIU benefits for veterans whose assessment indicated they would be employable after rehabilitation. In light of VA's agreement with the committee's recommendations, we subsequently recommended in our June 2015 report that VBA develop a plan to study (1) whether age should be considered when deciding if veterans are unemployable and (2) whether it is possible to disallow TDIU benefits for veterans whose vocational assessment indicated they would be employable after rehabilitation. VA concurred with this recommendation and stated that Compensation Service initiated a review of TDIU policies and procedures in April 2015, including consideration of age and vocational assessments in claim decisions. VBA expects to complete an action plan to initiate any studies, legislative proposals, or proposed regulations deemed necessary by July 2015. In conclusion, the benefits veterans are entitled to, as well as VA's decisions on what constitutes a work disability, are in need of constant refinement to keep pace with changes in medicine, technology, and the modern work environment.
Within this broad context, VA can position itself to better manage the TDIU benefit and look for opportunities to strengthen the assessments of its eligibility decisions. Having a strong framework for program integrity is important for any federal program, and in light of the multi-billion dollar—and growing—TDIU benefit, taking steps to ensure payments are properly awarded to veterans is essential. Moreover, VA has the opportunity to benefit from the attention the TDIU benefit has received by various experts, including its own advisory committee. The options and potential strengths and challenges identified by experts and VSO representatives may warrant consideration in any broader benefit refinement discussions and efforts to improve the TDIU benefit design and eligibility criteria going forward. VA generally agreed with our conclusions in our June 2015 report and concurred with all of our recommendations and made plans to address them. Chairman Miller, Ranking Member Brown, and Members of the Committee, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the committee may have. For further information regarding this testimony, please contact Daniel Bertoni at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Brett Fallavollita (Assistant Director), Melissa Jaynes, Kurt Burgeson, David Chrisinger, Alexander Galuten, and Kirsten Lauber. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
The population of veterans who receive these supplemental benefits has been growing. GAO was asked to testify on its recent review of VA's management of these benefits. GAO issued a report in June 2015 that discussed the results of its review. Like the June 2015 report, this statement (1) examined age-related trends in the population of TDIU beneficiaries and benefit payments; (2) assessed the procedures used for benefit decision-making; and (3) described suggested options for revising the benefit. The number of veterans receiving Total Disability Individual Unemployability (TDIU) benefits has been increasing, as has the total amount of benefit payments, especially among older veterans. VA generally provides TDIU benefits to disabled veterans who are unable to maintain employment with earnings above the federal poverty guidelines due to service-connected disabilities. To be eligible for TDIU benefits, a veteran must have a single service-connected disability rated at least 60 percent or multiple disabilities with a combined rating of at least 70 percent (with at least one disability rated at 40 percent or higher). In addition, the veteran must be unable to obtain or maintain “substantially gainful employment” as a result of these service-connected disabilities. In fiscal year 2013, over 330,000 veterans received this benefit, a 22 percent increase from fiscal year 2009, while the TDIU disability payments increased by 30 percent. GAO estimated that $5.2 billion was spent in fiscal year 2013 for the supplement. These trends occurred alongside an increase in the number of older beneficiaries. GAO also found that VA's procedures do not ensure that TDIU benefit decisions are well supported.
Specifically, (1) VBA's guidance for determining unemployability, and thus benefit eligibility, is incomplete and formatted and delivered inefficiently; (2) VBA's quality assurance approach may not comprehensively assess TDIU benefit decisions; and (3) self-reported income eligibility information is not verified with third-party earnings data. GAO also identified seven options proposed by experts for revising TDIU eligibility requirements and the benefit structure. Six options focus on eligibility requirements, such as considering additional criteria when determining unemployability and applying an age cap of 65. A seventh option would affect the benefit structure by lowering—but not immediately eliminating—the TDIU benefit payments as beneficiaries earn income beyond the eligibility limit. In its June 2015 report, GAO recommended that VA issue updated guidance for determining eligibility; identify a comprehensive quality assurance approach to assess benefit decisions; verify veterans' self-reported income; and move forward on studies suggested by its advisory committee. VA concurred with all of GAO's recommendations.
Five corridor rail lines currently exceed Amtrak’s predominant top speed of 79 miles per hour in the United States. Proposals for high speed rail projects in 44 other specific corridors are at some stage of planning and development. Eleven of these projects have advanced into the environmental review phase (see table 1). Financing for the proposed projects has yet to be arranged, with the partial exception of the proposed Los Angeles, California, to San Francisco, California, system, for which voters recently approved $9.95 billion in bond funding. For those projects that currently operate above 79 miles per hour, financing came from federal or state sources. Federal funding for high speed rail has generally gone to improvements to rail service in the Northeast Corridor between Washington, D.C., and Boston, Massachusetts, and to research and development. Some $3.1 billion has been spent by the federal government on the Northeast Corridor since 1990—about 75 percent of all federal funding identified by FRA as having been spent for high speed rail over this period. The remaining 25 percent has primarily gone to research and development purposes related to high speed rail. For example, the first foray into high speed rail development was in 1965, when Congress provided funding to begin studying high speed rail technologies. Later, the Magnetic Levitation Deployment Program provided funds to begin studying maglev as a new high speed transportation technology and to advance a demonstration project in the United States. States have also invested in high speed rail in some instances. For example, state funding was used to help achieve speeds above 79 miles per hour between New York, New York, and Albany, New York; Los Angeles, California, and San Diego, California; Chicago, Illinois, and Detroit, Michigan; and Philadelphia, Pennsylvania, and Harrisburg, Pennsylvania. 
Several federal agencies have played a role in the planning and development of high speed rail projects to date, and others may potentially be involved as projects progress. FRA has generally been the lead federal agency—sharing that role with other federal agencies, such as the Surface Transportation Board—regarding the environmental review process. The Surface Transportation Board must give its approval before any new rail lines can be constructed that connect to the interstate rail network. FRA also designates corridors as “high speed rail” corridors, and is the agency responsible for any safety regulations or standards regarding high speed rail operations. Safety standards relative to tracks and signaling requirements become more stringent as train speeds increase. For example, at speeds of 125 miles per hour or higher, highway-rail grade crossings must be eliminated, and trains must be equipped with positive train control, which will automatically stop a train if the locomotive engineer fails to respond to a signal. To operate at speeds above 150 miles per hour, FRA requires dedicated track—that is, track that can only be used for high speed rail service. No safety regulations currently exist for speeds above 200 miles per hour. In addition to FRA and the Surface Transportation Board, the Federal Highway Administration and the Federal Transit Administration (FTA) may play a role if highway or other transit right-of-way will be used or if highway or transit funds are to be used for some part of a high speed rail project. The Bureau of Land Management is responsible for granting rights-of-way on public lands for transportation purposes and, thus, would be involved in any new high speed rail project that envisions using public lands. Various other agencies would be involved in the environmental approval process, including the U.S. Fish and Wildlife Service and the Environmental Protection Agency, among others. 
Based on our interviews with both domestic project sponsors and foreign operators of high speed rail lines, in addition to a literature review, we identified many common characteristics that tend to lead to relatively high numbers of riders and resulting public benefits and to relatively lower costs. High speed rail tends to attract the most riders and resulting public benefits in corridors between roughly 100 and 500 miles with existing high demand for intercity travel. Service characteristics relative to other travel alternatives—such as travel time and price competitiveness, high frequency, greater reliability, and safety—are also critical in attracting riders and producing public benefits. Costs of high speed rail tend to be lower in corridors where right-of-way exists that can be used for high speed rail purposes, and a relatively flat and straight alignment can be used. While several U.S. corridors exhibit characteristics that suggest potential economic viability, decision makers have faced difficulties in ascertaining whether any specific proposed line will be viable due to uncertainties in how accurately project sponsors forecast riders and estimate costs, and to the lack of agreement and standards regarding how a project's public benefits should be valued and assessed. High levels of demand for intercity travel are needed to justify a new high speed rail line. (See app. V for a discussion of techniques for forecasting demand for intercity travel and riders on high speed rail.) Project sponsors identified high levels of population and expected population growth along a corridor, and strong business and cultural ties between cities, as factors that can lead to higher demand for intercity travel. In some corridors, riders are expected to come from business travelers and commuters due to the strong economic ties between cities along the corridor, while in other corridors, a larger number of tourists and leisure travelers comprise the expected riders.
Officials in Japan expressed the importance of connecting several high-population areas along a corridor as a key factor in the high number of riders on their system, to effectively serve several travel markets, including commuters and travelers from cities along the corridor. The corridor between Tokyo and Osaka in Japan is unique in that it is one of the most populous regions in the world, with multiple urban areas of several million inhabitants located along the corridor. This corridor attracts the highest number of riders of any high speed rail line in the world—over 150 million riders annually. In other foreign corridors we examined, however, population and densities were not as high, but foreign officials indicated that high speed rail revenues in these areas were sufficient to cover ongoing operating costs, although not necessarily sufficient to recoup the initial investment in the line. Some, but not all, of the corridors under development in the United States today have population levels similar to corridors in the foreign countries we examined (see figs. 1 and 2). [Figure legend: city name; population (in millions); population (graphic representation). Figure note: Several proposals exist in this corridor, with varying attributes and estimated travel times. See appendix VII of this report for more information on the various proposals.] High speed rail also has more potential to attract riders in corridors experiencing heavy travel on existing modes of transportation (i.e., conventional rail, air, and highways—including automobile and bus) and where there is, or is projected to be, congestion and constraints on the capacity of existing transportation systems. These situations lead to demand for an additional transportation alternative, or demand for expansion or improvements to existing transport modes.
To attract riders from existing transportation alternatives, a proposed high speed rail line needs to be time- and price-competitive with the alternatives, and also needs to have favorable service characteristics related to frequency, reliability, and safety. FRA and others have found that high speed rail tends to be most time-competitive at distances of up to 500 miles in length. Existing high speed rail lines in Japan tend to be most time-competitive and attain the highest relative levels of service in corridors of roughly similar distances (see fig. 3). According to foreign and domestic officials with whom we spoke, generally lines significantly shorter than 100 miles do not compete well with the travel time and convenience of automobile travel, and lines longer than 500 miles are unable to overcome the speed advantage of air travel. Between 100 and 500 miles, high speed rail can often overcome air travel's speed advantage because of reductions in access and waiting times. Air travel requires time to get to the airport, which can often be located a significant distance from a city center, as well as time related to checking baggage, getting through security, waiting at the terminal, queuing for takeoff, and waiting for baggage upon arrival at a destination. By contrast, high speed rail service is usually designed to go from city center to city center, which generally allows for reduced access times for most travelers. Some travelers will have destinations or starting points outside of city centers in closer proximity to airports, thus potentially minimizing or eliminating in some cases the access time advantage of high speed rail where high speed rail service does not connect to airports or other locations preferred by travelers. High speed rail also generally has less security and waiting time than airports.
On the foreign high speed rail lines we observed, there was no formal security comparable to airport security, and travelers could arrive at a station just a few minutes prior to departure. In France, Japan, Spain, and elsewhere, high speed rail has been shown to be time-competitive with air travel and has relieved capacity constraints at airports. For example, high speed rail in Japan has resulted in eliminating one air route (Tokyo-Nagoya), while several others have lost significant market share to high speed rail. With the introduction of the Madrid-Barcelona high speed rail line in February 2008, air travel between these cities has dropped an estimated 30 percent (from 5.0 million to 3.5 million air passengers), while high speed rail riders increased markedly. In France, high speed rail has captured 90 percent of the Paris-Lyon air market, and Air France officials estimated that for trips of between 2 and 3 hours, high speed rail is likely to capture about 80 percent of the air-rail market over time. By displacing shorter distance air travel, high speed rail has freed up considerable airport capacity in those cities for other longer distance flights. However, because high speed rail becomes a new competitor with short-distance air travel, airlines have in some cases actively opposed its development. In the United States, most of the 16 high speed rail projects we focused on will connect metropolitan areas with anticipated capacity constraints at nearby airports (see fig. 4). While several U.S. corridors exhibit characteristics that suggest potential economic viability, determining whether any specific proposed line will be viable has proven to be difficult for decision makers.
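The access- and waiting-time argument can be sketched numerically: rail's lower overhead can offset air's higher cruising speed on trips of roughly 100 to 500 miles. The overhead hours and speeds below are assumed values for illustration only, not figures from this report.

```python
# Illustrative door-to-door travel time comparison, rail vs. air.
# Overheads and speeds are ASSUMED for illustration: rail has small
# city-center access/waiting overhead; air has large airport overhead.

RAIL_OVERHEAD_HRS = 0.5   # station access + minimal waiting (assumed)
AIR_OVERHEAD_HRS = 2.5    # airport access, security, boarding, baggage (assumed)
RAIL_SPEED_MPH = 180      # assumed average high speed rail speed
AIR_SPEED_MPH = 500       # assumed average air speed

def door_to_door(distance_miles, overhead_hrs, speed_mph):
    """Total trip time: fixed overhead plus in-vehicle time."""
    return overhead_hrs + distance_miles / speed_mph

for miles in (100, 300, 500, 800):
    rail = door_to_door(miles, RAIL_OVERHEAD_HRS, RAIL_SPEED_MPH)
    air = door_to_door(miles, AIR_OVERHEAD_HRS, AIR_SPEED_MPH)
    winner = "rail" if rail < air else "air"
    print(f"{miles:4d} mi: rail {rail:.1f} h, air {air:.1f} h -> {winner}")
```

Under these assumptions rail wins door-to-door through about 500 miles and air wins beyond that, mirroring the distance band the report describes.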
This difficulty is due to uncertainties with the forecasts of riders and cost estimates that project sponsors produce, the lack of agreement and standards regarding how a project's public benefits should be valued and quantified, and the lack of comparison with alternative investments in highway or air infrastructure. Rider forecasts and cost estimates are inherently uncertain and subject to some degree of inaccuracy simply because they are trying to predict future circumstances. However, analyses and research on the accuracy of rider forecasts and cost estimates for rail infrastructure projects have found that a systematic problem, and an incentive to be optimistic, may exist—that is, actual riders are more likely to be lower than forecasted, while actual costs are more likely to be higher than estimated. For example, a study of over 250 transportation infrastructure projects in Europe, North America, and elsewhere found that rail projects—while not all high speed—had the highest cost escalation of all the transportation modes studied, averaging 45 percent higher than estimated. Another study of 27 rail projects from around the world, 1 of which was a high speed rail project, found that rider forecasts for over 90 percent of the rail projects studied were overestimated, and 67 percent were overestimated by more than two-thirds. Numerous techniques are available in travel demand modeling (a common tool for forecasting riders), and thus different models for the same proposed project could have diverse results. A modeler usually makes choices on the theory and assumptions upon which the model is based, the mathematical form of the model, and the variables to be included. For example, a modeler may design a survey to determine how travelers would react to a new transportation mode, but there is a risk that the design or implementation of that survey could lead to biased survey results.
Survey instruments can be scrutinized by third parties, but the process of data collection is less accessible to outside observers, especially after the fact. Furthermore, decisions on how to handle data within a model may enable the analyst to steer the result in a preferred direction. For an external, disinterested reviewer, the evolution of such decisions is very difficult to trace. (See app. VI for more details on travel demand forecasting and modeling.) While most project sponsors in the United States cited a variety of public “external” benefits, such as congestion relief or environmental benefits, that would flow from their projects, the extent to which benefits had been quantified and valued varied across projects. Of the 16 domestic projects that we reviewed, formal benefit-cost analyses have been conducted for 4—although many proposed projects have not advanced to the stage of conducting in-depth analyses. None of these analyses formally compared the proposed project with alternative modal investments, such as airport or highway expansion, although the sponsors of the proposed high speed rail line between Los Angeles, California, and San Francisco, California, have made a rough comparison of high speed rail investment with stated investment needs for the highway and air modes. Even if a formal benefit-cost analysis has not been done, the public benefits of some domestic projects are considered in some ways within the context of the National Environmental Policy Act (NEPA) process. Under NEPA, the weighing of the merits and drawbacks of the various alternatives need not be displayed in a monetary benefit-cost analysis, but an environmental impact statement should at least indicate those factors, not related to environmental quality, that are likely to be relevant and important to a decision. 
Project sponsors with whom we spoke—domestically and internationally—cited several types of public benefits that were significant in determining the economic viability of proposed high speed rail lines, including the following:

Travel time savings: Travelers using alternative modes may experience travel time savings as a result of reduced highway traffic and airport use when other travelers shift to high speed rail.

Environmental benefits: Environmental benefits could result from reduced pollution and carbon dioxide emissions, to the extent that the rail service reduces congestion on highways or at airports and makes use of fuel-efficient technology (i.e., high speed rail service using diesel locomotives would provide less environmental benefit than electrified service, all else being equal).

Traffic safety: Benefits from increased traffic safety include a reduction in traffic accidents, to the extent that the rail service reduces congestion on highways.

Economic development, land use, and employment: Cities where passenger rail stations are located could experience growth in population and business presence as a high speed rail system encourages the relocation of households and firms—increasing retail sales, rental income, and property values.

Government officials in the countries we studied told us that a national policy decision had been made that the public benefits flowing from high speed rail are sufficient to justify some amount of public subsidy in high speed rail systems. In other words, passenger fare revenues are not necessarily expected to cover the full cost of constructing, operating, and maintaining the system. For example, in Japan, government officials told us that a new high speed rail line will be built only if certain criteria are met, including stable public subsidies, profitability of the operator, and a positive benefit-cost ratio. In Spain, one of the goals of high speed rail is to increase social and territorial cohesion. 
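Benefit-cost analysis of the kind these criteria rely on ultimately reduces to comparing discounted streams of monetized benefits against up-front capital costs. The sketch below shows only that arithmetic; every dollar figure, the discount rate, and the project horizon are hypothetical values invented for illustration, not estimates for any project discussed in this report.

```python
# Minimal benefit-cost arithmetic sketch. All figures are hypothetical;
# they are NOT drawn from any actual high speed rail proposal.

def npv(annual_amount, rate, years):
    """Present value of a constant annual amount received for `years` years."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

DISCOUNT_RATE = 0.07   # illustrative rate; agencies differ on the right value
HORIZON_YEARS = 30

annual_benefits = {    # $ millions per year, invented for illustration
    "travel time savings": 400,
    "environmental benefits": 60,
    "traffic safety": 40,
    "economic development": 100,
}
capital_cost = 6_000   # $ millions, up-front construction cost (hypothetical)

benefits_pv = npv(sum(annual_benefits.values()), DISCOUNT_RATE, HORIZON_YEARS)
bcr = benefits_pv / capital_cost
print(f"Benefit-cost ratio: {bcr:.2f}")   # > 1 means benefits exceed costs
```

A ratio above 1 indicates that discounted benefits exceed costs under these assumptions; as the surrounding discussion makes clear, the hard part is not this arithmetic but deciding which benefits to count and how to value them.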
French officials said subsidies depend on the line—core lines such as Paris-Lyon can cover construction costs from passenger fares. Quantifying public benefits can be difficult, however, and the level at which to value some benefits can be subject to disagreement. Furthermore, there are currently multiple federal guidelines in the United States for valuing public benefits, yet none have been designated for use in analyzing proposed high speed rail projects. For example, high speed rail service that reduces congestion on highways or at airports and makes use of fuel-efficient technology may provide an environmental benefit (i.e., reduced pollution and greenhouse gas emissions). However, the value to assign to reductions in pollution and greenhouse gas emissions is difficult to determine, since there is no current market for pollution reduction in the United States. Thus, the valuation of pollution reduction—defined as the public’s willingness to pay—is generally left to economists to estimate by indirect methods. The valuation of greenhouse gas reductions entails additional considerations that are based on uncertain future benefits. Other intangible benefits, such as economic development impacts, are also difficult to estimate and are subject to disagreement. Officials in Japan told us that, although they previously calculated regional economic development benefits and included them in high speed rail decision making, they abandoned the practice because it was too difficult to isolate the impacts and because they believe that the benefits accrued through revenues and passenger benefits alone are sufficient to meet their criteria for constructing new high speed rail lines. 
Moreover, while benefits such as improvements in economic development and employment may represent real benefits for the jurisdictions in which a new high speed rail service is located, from another jurisdiction’s perspective or from a national view they may represent a transfer or relocation of benefits. Once domestic projects are deemed to be economically viable, efforts to develop them will continue to encounter significant challenges in financing the high up-front construction and other costs. In addition, sustaining public and political support for project development will be a challenge. Uncertainties regarding rider forecasts and cost estimates can undermine confidence in whether projects will actually produce claimed benefits. Project sponsors must also sustain political support over several electoral cycles and coordinate project decisions among numerous stakeholders in different jurisdictions, typically without the benefit of an established institutional framework. Once economic viability is determined, the main challenge is securing the investment necessary to fund the substantial up-front capital costs, such as those incurred for planning and preliminary engineering, building the infrastructure, and acquiring train equipment. In addition, high speed rail projects require very long lead times; the lengthy development periods can increase the uncertainty over future costs and benefits, and the front-loaded nature of the required spending can increase risk. Passenger fares are generally insufficient to finance the capital and operating costs of a high speed rail system, and the public “external” benefits cannot necessarily be captured in a revenue stream based on prices. Therefore, public subsidies are generally required, at least for the initial investment. 
Domestic project sponsors for all of the proposed high speed rail projects we reviewed, except one, indicated that they have or will need some federal funding to develop and construct their projects. The PRIIA authorized annual funding—a total of $1.5 billion for fiscal years 2009 to 2013—for high speed rail corridor development across the entire United States. ARRA appropriated $8 billion for high speed rail and intercity passenger rail congestion and capital grants (the latter of which were authorized by the PRIIA). However, this funding will not likely be sufficient to fund large-scale projects. For example, project sponsors for the proposed high speed rail line between Los Angeles, California, and San Francisco, California, are anticipating $12 billion to $16 billion in federal funding alone, and, according to the California High Speed Rail Authority, total project costs are expected to exceed $40 billion if the entire system is constructed. Federal funding that has historically been made available for high speed rail has been derived from general revenues, rather than a dedicated funding source. Consequently, high speed rail projects must compete with other nontransportation demands on federal funds, such as national defense, education, or health care, as opposed to being compared with other alternative transportation investments or policies in a corridor. By contrast, other transportation modes are funded through federal programs—such as federal-aid highways, the FTA’s New Starts Program, and the federal Airport Improvement Program—which benefit from (1) dedicated funding sources based on receipts from user fees and taxes, (2) a format for allocating funds to states, and (3) in some cases, a structure for identifying projects to be funded. 
As we have previously reported, comparison of alternative investments in other transport modes, such as high speed rail, generally does not occur when decision makers are evaluating projects or applying for funding from any of these programs. Given the lack of dedicated federal grant funding currently available for high speed rail projects, project sponsors are exploring other federal financing mechanisms for high speed rail projects, such as federal loan programs. Available federal loan programs, however, may be limited in their ability to help fund the substantial cost of high speed rail projects or the number of projects competing for federal loans. Two project sponsors told us that they plan to apply (and one project sponsor indicated it did not plan to apply, but elements of its project would be eligible) for credit under the TIFIA program, which offers credit assistance to surface transportation projects. According to TIFIA documents, the $122 million authorized by Congress annually for the program provides over $2 billion in credit assistance. Sponsors of high speed rail projects could request that amount or more for one loan, thereby constraining TIFIA’s ability to fund other projects in the same year, as we noted when analyzing the Florida Overland Express (FOX) project in 1999. There may be other challenges as well. For example, because TIFIA assistance cannot exceed 33 percent of a project’s construction costs, project sponsors must secure other sources of funding to construct a project, which has proven difficult. In addition, the availability of TIFIA funds, or other federal funding, may be questionable since the federal government faces significant future fiscal challenges, as we have noted in recent reports. Finally, as Amtrak officials suggested, the TIFIA program’s requirement that loans and loan guarantees be repaid may be another limitation on the program’s usefulness in funding high speed rail projects. 
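The capacity constraint described above can be made concrete with rough arithmetic. In this sketch the annual credit capacity and the example construction cost are approximations of the figures cited in the text, and the calculation is purely illustrative, not an official TIFIA determination.

```python
# Rough arithmetic on TIFIA capacity, using approximate figures from the text.

TIFIA_ANNUAL_CAPACITY = 2.0e9  # ~$2 billion in credit assistance per year
TIFIA_SHARE_CAP = 0.33         # assistance capped at 33% of construction costs

def max_tifia_assistance(construction_cost):
    """Largest credit assistance one project could receive under the 33% cap."""
    return TIFIA_SHARE_CAP * construction_cost

# A project on the scale cited for the California system (roughly $40 billion):
request = max_tifia_assistance(40e9)
print(f"Cap-limited assistance: ${request / 1e9:.1f} billion")
print(f"Other funding the sponsor must secure: ${(40e9 - request) / 1e9:.1f} billion")
print(f"Years of TIFIA capacity a loan that size would consume: "
      f"{request / TIFIA_ANNUAL_CAPACITY:.1f}")
```

At this scale, a single loan could absorb several years of the program's lending capacity, which is the crowding-out concern raised above, while still leaving most of the project's cost to be funded from other sources.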
In the countries we visited, the central government generally funds the majority of the up-front costs of the country’s high speed rail projects and does so without the expectation that its investment will be recouped through ticket revenues. The public sector’s ability to recover its financial investment has varied on the basis of how revenues have grown, but transportation officials in Japan and Spain told us that a public subsidy was generally necessary because ticket revenues are insufficient to fully recoup the initial investment. In Japan, while two early lines developed in the 1960s and 1970s may have fully repaid the initial investment and debt related to their construction, three of the high speed rail lines built since the 1987 privatization have been able to recover only 10 percent, 52 percent, and 63 percent of their construction costs through ticket revenues. Spanish officials told us the original high speed line in Spain between Madrid and Seville has been profitable on an operating cost basis but has not covered all of its costs, including the original construction costs. A Spanish academic researcher told us that future lines might not cover even their operating costs. State funding for high speed rail can also be limited by the lack of dedicated funding sources and restrictions on the use of gasoline tax revenues. None of the project sponsors with whom we spoke obtained funding from a dedicated source of state funding for high speed rail; one project sponsor (i.e., the Virginia Department of Rail and Public Transportation), however, noted that it had a dedicated rail funding source available. Since the two high speed rail projects currently being developed in Virginia are still in the planning stages, according to the Virginia Department of Rail and Public Transportation, they have not yet sought funds from Virginia’s Rail Enhancement Fund, which provides about $25 million annually for both freight and passenger rail improvements. 
In addition, according to a report by the Brookings Institution, 30 states—including states where high speed rail projects are proposed, such as Minnesota, Nevada, and Pennsylvania—are restricted from spending revenues from excise taxes on gasoline, which typically are a state’s main source of transportation revenue. In lieu of a dedicated source of state funding, some project sponsors have sought funding directly through appropriations of state revenue or bond measures, which compete with numerous other state budgetary needs. New York State Department of Transportation officials said that appropriations from general state revenue and bonding measures enabled them to fund only incremental improvements along the New York, New York, to Albany, New York, corridor, not the major expansions that had been planned. The choice of a financing mechanism can have serious implications for states and local governments, which, as we have previously reported, will face broader fiscal challenges over the next 10 years because of increasing gaps between receipts and expenditures. For example, in November 2008, California voters passed a ballot initiative that would allow the state to issue $9.95 billion in bonds, $9.0 billion of which would go toward the construction of a statewide high speed rail system. According to information prepared by California, this bond issue, including principal and interest, could cost the state general fund about $19.4 billion over 30 years. Also, bonding mechanisms may cost more than using appropriations of general revenues. For example, we reported that a proposal to allow Amtrak to issue up to $12.0 billion in tax credit bonds over a 10-year period for capital improvements on designated high speed rail corridors and the Northeast Corridor would have cost the U.S. 
Treasury as much as $11.2 billion (in present value terms) in lost tax receipts over a 30-year period if states had financed their contribution from tax-exempt borrowing and Amtrak had used accumulated losses to offset taxable earnings in a trust fund established to repay the bond principal. This cost compared with an estimated total cost to the U.S. Treasury of between $7.3 billion and $8.2 billion (in present value terms) if annual appropriations of federal revenues had been used for the same purpose. Another possibility is tax-exempt private activity bonds, which can be used to finance high speed rail facilities. Such bonds were formerly restricted to high speed intercity passenger rail facilities that operate at speeds in excess of 150 miles per hour, and proceeds could not be used for rolling stock (passenger rail vehicles). ARRA modified these restrictions to make eligible projects that are “capable of attaining” maximum speeds in excess of 150 miles per hour, rather than only those operating at such speeds. This modification may increase the number of projects that can qualify to use tax-exempt private activity bonds for high speed intercity passenger rail facilities. Both current and former domestic high speed rail project sponsors have sought private financing but found it difficult to obtain private sector buy-in, given the significant financial risks high speed rail projects pose. In February 2008, we reported that public-private partnerships can provide potential benefits, such as transferring some risk from the public to the private sector and increased potential for operational efficiencies. The level of private sector involvement anticipated by some domestic high speed rail projects is unprecedented, particularly given the limited private sector involvement in operating domestic high speed rail to date. 
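The gap between a bond's face value and its ultimate cost to a general fund is ordinary debt-service arithmetic. As a rough cross-check of the California figures cited earlier ($9.95 billion in bonds costing about $19.4 billion over 30 years), the sketch below assumes level annual payments at a hypothetical 5 percent interest rate; the actual bond structure, rates, and issuance schedule may differ.

```python
# Level-payment amortization sketch; the 5 percent rate is an assumption
# chosen only to roughly reproduce the $19.4 billion figure in the text.

def total_debt_service(principal, rate, years):
    """Sum of level annual payments that amortize `principal` over `years`."""
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    annual_payment = principal / annuity_factor
    return annual_payment * years

total = total_debt_service(9.95e9, 0.05, 30)
print(f"Total principal and interest: ${total / 1e9:.1f} billion")
```

Under these assumptions roughly half of the 30-year cost is interest, which is why, as noted above, bonding can cost considerably more than funding the same improvements from current appropriations.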
For example, the California High Speed Rail Authority is looking to the private sector to provide between $6.5 billion and $7.5 billion of the total cost to finance, construct, operate, and maintain the first phase of its statewide system. Private sector firms have expressed interest in high speed rail projects, but the firms with which we spoke noted that without public sector commitment—both financial and political—private sector involvement and financing would be limited, due to the financial and ridership risks of such projects. A good illustration of the domestic relationship between the public and private sectors in high speed rail is the FOX project. The private sector’s willingness to finance a portion of that project’s construction costs was predicated on an understanding that Florida would cover costs that could not be recouped through ticket revenues. Although the state agreed to provide $70 million annually over a 40-year period to support the project, the project was terminated when this support was withdrawn. (See app. IV for more detail on the FOX project.) Similarly, in California, private sector entities have expressed interest in investing in part of the high speed rail project but noted that they would need substantial public sector commitment to the project before participating. Efforts to develop entirely privately financed high speed rail projects in the United States have proven unsuccessful to date. According to the Texas High Speed Rail Authority, the Texas TGV project, which was intended to be a privately financed project in the Texas triangle (Houston-Dallas-Fort Worth-San Antonio), was unsuccessful, primarily because one of the firms involved in the private consortium encountered financial difficulties. (See app. IV for more details on the Texas TGV project.) In Florida, an effort to pursue a privately financed high speed rail project during the 1980s (before the FOX project) also failed. 
One current project, the Desert Xpress project, from Victorville, California, to Las Vegas, Nevada, is also seeking to develop an entirely privately financed high speed rail line, but as of February 2009, the project had not secured private financing. Public-private partnerships are one means by which foreign governments are seeking to share the financial risks of their expanding high speed rail systems. In Japan—where the rail system was privatized in 1987—the national government and local governments still assume the financial risk of constructing a new high speed rail line, investing two-thirds and one-third of the construction costs, respectively (see fig. 5). With the government’s financial commitment, the private railroad operating companies undertake the operational risk and rely on ticket revenues to cover operating and maintenance costs. The railroad operating companies’ business model, which includes various business ventures and nonrail revenue streams, also helps them assume this risk for rail lines with relatively low numbers of riders, since these additional revenues may be able to cover high speed rail operating losses, if they occur. As France and Spain look to expand their high speed rail systems, they are exploring private sector participation to, among other reasons, attract additional financing and, in the case of France, tap private sector management and technical expertise. France is contemplating a public-private partnership contract scheme in which risks associated with financing, designing, building, and maintaining a high speed rail line are allocated to the private sector (see fig. 6). Under this scheme, the private sector essentially would assume the responsibilities of the public infrastructure manager, put up the initial construction financing, take on the project’s construction cost and schedule risks, and ensure that the infrastructure is available to a passenger rail operator for a certain percentage of time. 
The line must also be maintained to certain levels to ensure safety. The public sector assumes the risk associated with operating the rail service and commits to making fixed annual payments to the private sector, as long as the infrastructure is available the prescribed percentage of time. French officials acknowledged that there is currently much uncertainty about how these arrangements will work and whether there will be sufficient private sector interest. At the time of our visit, France had not implemented a public-private partnership; however, a recent call for tenders on the Tours-Bordeaux line raised the interest of three French contractors, and French officials expect this contract to close by the end of 2009. Spain was in the process of completing a public-private partnership for a line from Figueras to the French border. A similar arrangement was used to construct a portion of a high speed rail line in the Netherlands, and, according to an official with the private sector consortium that constructed that line, if there is a public sector commitment, the private sector can make a public-private partnership work. Additional challenges faced in developing high speed rail projects include sustaining public and political support over the lengthy development timelines for high speed rail. As we have previously mentioned, high speed rail projects require long lead times. The five new right-of-way rail projects we reviewed have been in project development for between 4 and 18 years, and 13 years on average. Similarly, in France, transportation ministry officials told us that high speed rail projects in their country take about 14 to 16 years to complete. This time spans from the beginning of project planning to the opening of the project for revenue service. A considerable amount of this time is devoted to studies and analysis as well as public debate about the merits of a project. 
Sustaining public support over this length of time can be difficult and can have significant impacts on a project. As the experience with the FOX project demonstrated, development of high speed rail projects can occur over multiple electoral cycles, which not only can change the course of project development but can also lead to project termination if public and political support is not sustained. For example, as we have previously discussed, the Florida DOT had planned to provide $70 million annually to help construct the FOX project. The project began under a gubernatorial administration that supported it and was terminated under a different administration that did not. Several public and private sector officials we spoke with cited the need for someone or some organization to “champion” a project over a long period of time. French officials told us it is easier to sustain public support for a high speed rail project once it has the commitment of the central government. There are also challenges associated with the ability to provide transparency and confidence in project cost estimates and rider forecasts. As we have previously discussed, these estimates and forecasts can often be inaccurate, which may erode public support for high speed rail. During the FOX project, advocacy organizations, state transportation agencies, and GAO each questioned the reliability of the project’s cost estimates and rider forecasts. The governor of Florida decided to cancel state funding for the project, in part because of the skepticism raised by these organizations, and the cancellation of state funding led to the termination of the project. More recently, in California, a report by numerous advocacy organizations raised similar concerns about the rider forecasts and cost estimates for the statewide high speed rail project. 
Although the public approved a $9.95 billion bond measure to support this project, over time public support could erode, along with public funding, if confidence in rider, revenue, and cost estimates is lost. Reaching consensus on project decisions, such as a rail line’s actual route, involves difficult negotiations, which can cause substantial project delays and disagreements among stakeholders. Given that high speed rail projects can span hundreds of miles and sometimes cross multiple states, numerous stakeholders and jurisdictions are involved. Stakeholders typically include, among others, federal, state, and local governments; the private sector; and advocacy organizations. For example, project sponsors of the Southeast High Speed Rail Corridor (a project from Washington, D.C., to Charlotte, North Carolina) noted that some 50 federal, state, and local government agencies are involved in the project, as well as a 214-member advisory committee. Coordinating project decisions with these stakeholders—each with its own priorities and views—can be difficult, particularly without an established institutional framework within which this coordination can occur, as exists for other transportation modes. For example, in planning highway and transit projects, federal agencies, local transit agencies, metropolitan planning organizations, and state transportation departments benefit from established procedures for planning and public involvement. Development of domestic high speed rail projects is typically led by rail divisions within state DOTs or by high speed rail authorities and commissions. These organizations are often limited in terms of institutional and financial resources. For example, in the case of the California High Speed Rail Authority, funding has fluctuated from a little over $1 million per year to a little over $14 million (see table 4) as a result of changes in its annual appropriation from the state legislature. 
The $3.9 million in state funding for fiscal year 2005-2006 was planned to support approximately 4 staff members in developing a $45 billion, 800-mile statewide high speed rail system. Rail divisions within state DOTs also face similar funding and manpower issues, since there is typically no dedicated state funding for rail services, as we previously discussed. In addition, rail has generally not been a primary focus of state transportation plans, which are more focused on highway projects. Commissions and authorities may face other institutional challenges related to their role and authority. For example, a Virginia official told us that legislation to create a high speed rail authority fails every year it comes up for a vote because of concerns that an authority might issue bonds and jeopardize the state’s triple A bond rating. In addition, the role of high speed rail authorities is sometimes unclear. According to the final report of the Texas High Speed Rail Authority, as well as the former director of the authority, rail authorities can sometimes be conflicted between advocating for a high speed rail project and objectively determining whether a system is in the “public convenience and necessity.” Stakeholder consensus is also a considerable challenge for projects that involve incremental improvements for high speed rail service. Nine of the 11 incremental project sponsors with whom we spoke said that working with stakeholders such as Amtrak, commuter railroads, and private freight railroads can be difficult and time-consuming since each has its own interests. Projects that cross state lines pose additional stakeholder challenges, particularly with respect to allocating benefits and costs among the states. To address multistate issues, some states have pursued interstate compacts and commissions as a means to formalize decision making. 
For example, the Virginia-North Carolina Interstate High Speed Rail Compact established a commission to provide project leadership and vision and to define roles. However, interstate compacts can be difficult to implement and involve working out many practical issues, including deciding what type of service to provide, how financial contributions will be distributed, and what occurs if and when one or more states do not meet their financial or other responsibilities. In the United States, the federal government has not historically had a strong leadership role in high speed rail. The recently enacted PRIIA provides a framework for developing a federal role, and ARRA will also likely affect the federal role by providing $8 billion for high speed rail. Following the reexamination principles we have reported on for surface transportation programs would help ensure that the implementation of these acts, and a possibly heightened federal role, is efficient, effective, and focused on yielding maximum benefits for the investment. Since the 1960s, Congress has authorized various programs dealing with high speed ground transportation, including high speed rail, but no federal vision or national plan for determining the role of high speed rail in the U.S. transportation system exists. FRA officials told us that they do not have a high speed ground transportation policy, and, as one FRA official told us, policies related to high speed rail have varied from one administration to another. FRA officials also told us that interest in promoting high speed rail at the national level has been difficult to sustain. 
The recently enacted PRIIA, in addition to authorizing funding, provides numerous other opportunities for a greater federal role in high speed rail development, as follows:

The act requires the Secretary of Transportation to establish and carry out a rail cooperative research program that will address, among other things, new high speed wheel-on-rail systems.

The FRA Administrator is tasked with the development of a long-range national rail plan consistent with approved state rail plans and the rail needs of the nation.

The FRA Administrator is required to support high speed rail development, including high speed rail planning.

The act explicitly provides a framework for the establishment of a High Speed Rail Corridor Development Program, which permits the Secretary to make grants to states, groups of states, and others to finance capital projects in high speed rail corridors.

The act requires the Secretary to issue a request for proposals for the financing, design, construction, operation, and maintenance of high speed intercity passenger rail systems operating within high speed rail corridors.

The Secretary is to study high speed rail routes and establish a process for states or groups of states to redesignate or modify designated high speed rail corridors.

High speed rail projects will largely continue to be initiated at the state level, but the federal government can be expected to play an increased role in funding and assisting in the development of high speed rail corridors and projects. A number of principles could help guide the potential federal role in high speed rail, particularly as the newly enacted PRIIA and ARRA are implemented. These principles will increase the likelihood that the federal role in high speed rail is efficient, effective, sustainable, and focused on maximizing public benefits. We have discussed such principles in our work calling for a reexamination of federal surface transportation programs. 
As applied here, the principles would address, going forward, the federal interest in developing a high speed intercity passenger rail policy, based on high speed rail’s purpose and relevance, its effectiveness in achieving goals and outcomes, its efficiency and targeting, its affordability, and its sustainability. These principles are as follows:

Create well-defined goals based on identified areas of national interest. This would include establishing the expected outcomes related to each goal and the federal role in achieving each goal.

Incorporate performance and accountability for results into funding decisions.

Employ the best analytical tools and approaches to emphasize return on investment.

Ensure fiscal sustainability. This would include consideration of such things as whether funding is affordable and stable over the short and long term; the extent to which costs and revenues are shared among federal, state, and local participants; and whether any project fees and taxes are aligned with use and benefits.

Given the current fiscal crisis facing the nation and the pressing needs facing the federal government in many areas, it is critical that federal dollars are used efficiently and effectively and are focused where they can produce the greatest benefits. Failure to apply these principles could lead to an unfocused federal investment in high speed rail corridors or projects and, as a consequence, little impact on the congestion, environmental, energy, and other issues that face the U.S. transportation system. We have previously reported that specific, measurable, achievable, and outcome-based goals that are in turn based on identified areas of federal interest improve the foundation for allocating federal resources and optimizing the results from the investment. 
Determining the federal interest involves examining the relevance and relative priority of programs, including high speed rail, in light of 21st century challenges and identifying areas of emerging national importance, such as congestion, dependence on foreign fuel sources, and the impacts of transportation on climate change. With the federal interest clearly defined, policymakers can clarify the goals for federal involvement (i.e., specific goals could be set on the basis of the expected outcomes), and can clearly define the roles of federal, state, and local government in working toward each goal. Where the federal interest is greatest, the federal government may play a direct role in setting priorities and allocating resources, as well as fund a higher share of program costs. Conversely, where the federal interest is less evident, state and local governments could assume more responsibility. To date, there has been little consideration at a national policy level of how high speed rail could or should fit into the national transportation system and what high speed rail development goals should be. In the 1990s FRA studied the commercial feasibility of high speed rail and focused on the economics of bringing high speed ground transportation (including high speed rail) to well-populated groups of cities in the United States. Its report identified potential opportunities where high speed rail could complement highway or air travel. One purpose of the study was to lay the groundwork for high speed rail policy in the United States. However, according to FRA, this policy was never developed. The PRIIA requires the FRA Administrator to prepare a long-range national rail plan; preparing that plan will provide an opportunity for the federal government to identify the vision and goals of high speed rail for the nation and identify how, if at all, high speed rail fits into the national transportation system. 
Although the act does not explicitly require that high speed rail be included in the national rail plan, the national rail plan must be consistent with state rail plans and, among other things, state rail plans are to include a review of all rail lines in a state, including proposed high speed rail lines. National vision and goals, influenced by an intermodal perspective, have been key components in the development of high speed rail systems and national rail plans in both Europe and Asia. For example, in Europe, the vision and goals laid out by the central governments have evolved from being focused on reviving an industry (the railroads) and addressing transportation capacity constraints, to being focused on increasing the role of rail in an intermodal transportation system, making rail a preferred transport mode in short-distance intercity corridors, and using rail to achieve broader environmental, energy, and economic development goals. In Japan, after the initial success of the first high speed rail line between Tokyo and Osaka, the central government developed a national rail master plan that laid out the vision and goals for how the system would develop (including making passenger rail competitive with air travel), where it would extend, and the benefits that were to be expected. That master plan has guided high speed rail development ever since. The development of a vision for high speed rail in the United States may need to be coordinated with reexamination of other federal surface transportation programs. As we reported in March 2008, one reason that existing federal transportation programs are not effective in addressing key challenges, such as increasing highway and airport congestion and freight transportation demand, is that federal roles and goals are not clear. In addition, we reported that many programs lack links to needs or performance, the programs lack the best analytical tools and approaches, and there is modal stovepiping at DOT. 
Project sponsors, states, and others with whom we spoke are looking for federal leadership and funding in creating a structure for high speed rail development and in identifying how to achieve the potential benefits that these projects may offer. All but 1 of the 11 high speed rail proposals we reviewed have a projected need for federal funds in addition to any state, local, or other funding they may receive. Aside from funding, project sponsors and others are also looking for a stronger federal policy and programmatic role. For example, officials from 15 of the 16 projects we reviewed told us that the federal role should be to set the vision or direction for high speed rail in the United States. An official with the Florida DOT told us that no high speed rail system would be built in Florida or elsewhere in the United States absent a true federal high speed rail program. Private sector officials also told us of the importance of a federal role and vision for high speed rail, and that leadership is needed from the federal government in providing governance structures for high speed rail projects that help to overcome the institutional challenges previously described in this report. Other stakeholders similarly mentioned the need for a federal role in promoting interagency and interstate cooperation, and identified other potential federal roles, such as setting safety standards, promoting intermodal models of transportation, and assisting with right-of-way acquisition. As we reported in July 2008, our work has shown that an increased focus on performance and accountability for results could help the federal government target resources to programs that best achieve intended outcomes and national transportation priorities. Tracking specific outcomes that are clearly linked to program goals can provide a strong foundation for holding potential grant recipients responsible for achieving federal objectives and measuring overall program performance. 
Accountability mechanisms can be incorporated into grants in a variety of ways. For example, as we reported in March 2008, grant guidelines can establish uniform outcome measures for evaluating grantees' performance toward specific goals, and grant agreements can depend in part on the grantees' performance instead of set formulas. Incentive grants or penalty provisions in transportation grants can also create clear links between performance and funding and help hold grantees accountable for achieving desired results. The PRIIA establishes criteria for the selection of high speed rail corridors and high speed rail projects for development. The criteria include a determination that the proposals are likely to result in a positive impact on the nation's transportation system. The Secretary of Transportation will select proposals that provide substantial benefits to the public and the national transportation system, are cost-effective, offer significant advantages over existing services, and meet other relevant criteria determined by the Secretary. The PRIIA also requires that the FRA Administrator develop a schedule for achieving specific, measurable performance goals related to such things as the development of a long-range national rail plan, and, beginning in fiscal year 2010, submit to the relevant congressional committees the administration's performance goals, schedule, and a progress assessment. FRA has not yet determined how performance and accountability will be incorporated into the review and evaluation of grant applications under the PRIIA. The extent to which other countries we visited used performance and accountability measures in their high speed rail systems was limited. In France, postproject evaluations of the performance of major transport infrastructure projects have been required since 1982. 
However, a French government official told us that most of the current French high speed rail network was built before this 1982 postproject evaluation requirement began to be enforced. Consequently, according to this official, only a few postproject evaluations have been done. Government officials in Spain said that economic evaluations of high speed lines had been conducted but, in some cases, did not determine the government's choice of lines to develop. Rather, the government chose to develop lines that would create a high speed network that extends the benefits of high speed rail to the whole national territory. Territorial criteria have played an important role in the Spanish government's decision to prioritize high speed rail. In Japan, postproject evaluations have generally not been done, although actual and forecasted ridership have been compared for recent high speed rail lines, and the forecasts have been accurate to within 90 percent. Evaluation of the performance of high speed rail lines has focused on the accuracy of ridership forecasts, and these estimates are an integral part of negotiations between the government and private operators for construction of new high speed rail lines. For example, construction of new lines is carried out by the government, but a private operator assumes control over the line and assumes all the operating and maintenance responsibilities and ridership risk. Under the Japanese rail structure, the private company has an incentive—the profit motive—to ensure that the line performs well. We discuss Japan's incentive structure further in the next section on analytical tools. The effectiveness of any overall federal program design can be increased by promoting and facilitating the use of the best analytical tools and approaches. We have reported on a number of analytical tools and approaches that may be used. 
These include using quantitative analyses based on identifying benefits and costs, managing existing transportation capacity, and developing public-private partnerships. Benefit-cost analysis, in particular, is a useful analytical tool for evaluating projects and ensuring goals are met. Benefit-cost analysis gives transportation decision makers a way to identify projects with the greatest net benefits and to compare alternatives for individual projects. By translating benefits and costs into quantitative comparisons to the maximum extent possible, these analyses provide a concrete way to link transportation investments to program goals. The PRIIA specifies various criteria against which high speed rail grant proposals will be evaluated to determine federal investment. Specifically, project selection depends partly on consideration of the project's anticipated favorable impact on air or highway traffic congestion, capacity, and safety. The project selection criteria encourage a project sponsor to evaluate public benefits. For example, greater consideration is to be given to proposed projects that, among other things, provide environmental benefits and positive economic and employment impacts. The rail cooperative research program established by the PRIIA will also, among other things, include research into developing more accurate models for evaluating the impact of rail passenger and freight service, including the effects on highway, airport, and airway congestion; environmental quality; and energy consumption. 
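The benefit-cost comparison described above amounts to discounting each year's benefits and costs to present value and comparing the totals. The sketch below illustrates the arithmetic only: the dollar figures, the 7 percent discount rate, and the 30-year horizon are all assumptions made for this example, not values drawn from the PRIIA, FRA guidance, or any actual proposal.

```python
# Illustrative benefit-cost (net present value) comparison for a
# hypothetical high speed rail project. All figures are assumptions.

def npv(cash_flows, rate):
    """Net present value of a stream of annual cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

DISCOUNT_RATE = 0.07  # assumed real discount rate
HORIZON = 30          # assumed evaluation period, in years

# Hypothetical costs: heavy construction outlays in years 0-4, then
# annual operating and maintenance costs (millions of dollars).
costs = [800] * 5 + [60] * (HORIZON - 5)

# Hypothetical benefits once service begins in year 5: ticket revenue
# plus monetized public benefits (travel-time savings, reduced highway
# and airport congestion, emissions reductions), in millions of dollars.
benefits = [0] * 5 + [350] * (HORIZON - 5)

pv_benefits = npv(benefits, DISCOUNT_RATE)
pv_costs = npv(costs, DISCOUNT_RATE)

print(f"PV of benefits:     ${pv_benefits:,.0f}M")
print(f"PV of costs:        ${pv_costs:,.0f}M")
print(f"Net benefits:       ${pv_benefits - pv_costs:,.0f}M")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```

An analysis along the lines of Executive Order 12893 would pair a quantitative comparison like this with qualitative measures for benefits that resist monetization.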
Although the PRIIA does not provide explicit guidance for quantifying or valuing the economic and other impacts specified in the project selection criteria, a more established approach to analyzing proposed projects and to quantifying and valuing nonfinancial benefits may emerge, given the potential results of the rail cooperative research program and given that future proposed rail projects may be evaluated within the context of state transportation systems and will need to meet specific criteria contained in the PRIIA to obtain federal funding. In our view, any approach developed should, to the extent practicable, conform to Executive Order 12893. This order directs federal executive departments and agencies with infrastructure responsibilities to develop and implement infrastructure investment and management plans consistent with the principles in the order. A key principle is that infrastructure investments are to be based on a systematic analysis of expected benefits and costs, including both quantitative and qualitative measures reflecting values that are not readily quantified. The order also directs that agencies encourage state and local recipients of federal grants to implement planning and information management systems that support the principles articulated in the order. A more consistent approach would allow proposed projects to be more easily compared with one another, helping to ensure that public funding is applied to the projects and corridors with the greatest potential benefits. Similarly, the PRIIA requires that consideration be given to projects with positive economic and employment impacts but again does not provide explicit guidance on determining what is or is not a positive economic or employment impact. As we have previously discussed in this report, economic impacts are difficult to isolate; therefore, local economic development may not constitute a net national benefit—rather, it could be a redistribution of resources. 
For example, development of a high speed rail system could increase economic development in the area where it is built. However, this increased economic development could be a redistribution of resources rather than a net benefit. Consequently, in implementing the PRIIA, it will be important to develop guidelines on how to consider national economic and employment benefits in relation to local benefits. FRA is currently in the process of evaluating the PRIIA and preparing final rules for how high speed rail projects will be reviewed and selected for federal funding under provisions of the act. The final rules are required to be issued in October 2009. Forecasts of riders and costs are two key components of evaluating the economic viability of high speed rail projects, and rider forecasts are the anchor for the array of public benefits that a new line might bring. However, as we have discussed, these forecasts are often optimistic, potentially calling into question the credibility of information being used by decision makers to pursue high speed rail. Development of stronger policies, procedures, and tools could enhance the accuracy and credibility of the forecasts and contribute to better decision making. A variety of means that could potentially be employed to strengthen the accuracy of forecasting have been discussed in the transportation literature. 
These means include the following:
- obligating state and local governments to share some of the risks of underestimated costs for those projects seeking federal financial support;
- obtaining forecasts and estimates from independent sources, such as a state auditor or a federal agency, rather than from sources contracted to construct projects for a high speed rail project sponsor;
- subjecting forecasts to peer review, with possible public disclosure of all relevant data and public hearings; and
- conducting horizontal comparisons of projects—that is, using data from different projects reported using a standardized accounting system to prepare probability distributions of the accuracy of project estimates of cost and demand—to evaluate new high speed rail projects.

Another potential means to improve the accuracy of these estimates is to align the incentives of public and private interests. For example, in Japan, for a new line to be built, the private operator must be able to make a reasonable profit over and above operating costs, maintenance costs, and lease payments made to the government for use of the track. The private operator thus has an incentive to maximize riders but also to minimize the lease payments, to increase its profit potential. Therefore, the private operator wants to be conservative regarding rider forecasting and wants the government to build the infrastructure in a way that allows for the lowest-cost operation and maintenance. The central government has an incentive to keep costs low in constructing the line and to extract the highest lease payment it can negotiate from the private operator. The private rail operator and the central government negotiate and agree upon a lease payment, which remains set over a 30-year period. These negotiations are based on forecasts of riders over the ensuing 30 years and the existing cost estimates. 
According to officials and academics in Japan, this structure has resulted in a discipline that has vastly improved the accuracy of rider forecasting and cost estimation. For one newly constructed line, actual riders were within 90 percent of forecasted riders, and the construction of the line was on time and within budget. In Europe, we found that the use of analytical tools and approaches for analyzing the public benefits of high speed rail projects was generally a requirement, and that these analytical tools led to public benefits being more systematically quantified and valued compared with projects in the United States. As we previously discussed in this report, evaluation of benefits can often be difficult and give rise to disagreements, and few standards exist in the United States to govern such analyses. A French official said evaluations of public benefits and costs began in the 1980s as the result of a 1982 law. France's current approach to analyzing proposed projects includes analysis of public benefits—including travel-time savings, security, noise, and pollution—in conjunction with financial benefits to calculate financial and socioeconomic indicators (such as financial internal rate of return and socioeconomic rate of return). These financial and socioeconomic indicators are generally used to compare proposed projects that meet certain minimum thresholds and to prioritize them for construction. France's 2004 Ministerial Order for analyzing proposed transportation infrastructure projects provides guidance to project sponsors in quantifying and valuing these benefits, and sets forth monetized values for specific public benefits and costs. In addition, France plans soon to build a multicriteria analysis tool that will take into account additional nonfinancial benefits and costs, such as greenhouse gas emissions reductions, as a means to advance sustainable development objectives. 
This tool will guide France in adopting a new national infrastructure planning scheme. Spain began explicitly including public benefits and costs in proposed project analyses in 2003. Specific benefits of rail projects are also outlined in a European Commission guide for investment projects, and include time savings, additional capacity, and wider economic benefits such as economic development. Our work has shown that transportation funding faces an imbalance of revenues and expenditures and other threats to its long-term sustainability. We have reported that a sustainable surface transportation program will require targeted investment, with adequate return on investment, not only from the federal funds invested but also from state and local governments and the private sector. In the context of high speed rail, fiscal sustainability includes consideration of such things as whether federal, state, and other funding is affordable and stable over the short and long term (i.e., both while a project is being planned and constructed and after the high speed rail line is in operation); the extent to which costs and revenues are shared among federal, state, local, and private participants; and whether any project fees and taxes are aligned with use and benefits. Moreover, sustainability can refer to the extent to which ticket revenues will cover ongoing operating and maintenance costs, avoiding an ongoing public subsidy. The PRIIA includes recognition of the potential fiscal sustainability of high speed rail projects that might be selected for development. 
For example, the PRIIA requires the federal government to give greater consideration to high speed rail corridor projects that incorporate, among other things, equitable financial participation in the project’s financing, including financial contributions by intercity passenger, freight, and commuter railroads commensurate with the benefits expected to their operations as well as financial commitments from host railroads, nonfederal entities, and nongovernment entities. Similarly, proposals under the PRIIA for specific high speed rail projects are required to contain a description of the projected revenues and sources of revenue, including the expected levels of both public contributions and private investment. The level of public and private contributions, in addition to a summary of the potential risks to the public, including risks associated with project financing, must be considered in project selection by commissions set up by the Secretary to review the proposals. The National Surface Transportation Policy and Revenue Study Commission, created to study the condition and needs of the nation’s surface transportation infrastructure, called for an increase in intercity passenger rail service, including high speed rail service, and also proposed a system of fiscal sustainability in its final report in January 2008. The commission’s final report suggested that funding should come from a variety of sources, and that a fund should be set up for rail investment that would collect money from a new federal ticket tax levied on users of the system. Currently, users of intercity passenger rail in the United States do not pay ticket taxes or user fees similar to those paid by users of the aviation system or fuel taxes used to support the highway system. In other countries, high speed rail systems appear to be fiscally sustainable on an ongoing financial basis. 
For example, new high speed rail lines are not constructed in Japan unless they can cover their operating and maintenance costs, not including the payback of the initial investment in the infrastructure. Similarly, European officials told us that some of their high speed rail lines require little, if any, public operating subsidy outside of initial capital costs, since revenue is sufficient to cover operating costs. High speed rail does not offer a quick or simple solution to relieving congestion on our nation's highways and airways. High speed rail projects are costly, risky, take years to develop and build, and require substantial up-front public investment as well as potentially long-term operating subsidies. Yet the potential benefits of high speed rail—both to riders and nonriders—are many. Whether any of the nearly 50 current domestic high speed rail proposals (or any future domestic high speed rail proposal) may eventually be built will hinge on addressing the funding, public support, and other challenges facing these projects. Determining which, if any, proposed high speed rail projects should be built will require decision makers to be better able to determine a project's economic viability. It is not likely that high speed rail projects will come to fruition without federal assistance. The PRIIA establishes a good framework for helping craft a federal role in high speed rail (which, to date, has been limited) to address these challenges. Given the complexity, high cost, and long development time for high speed rail projects, it will be critical to first determine how high speed rail fits into the national transportation system and to establish a strategic vision and goals for such systems. This will establish the baseline for federal involvement. 
To maximize returns on federal investments, it will also be critical when reviewing grant applications under the PRIIA high speed rail provisions to clearly identify expected outcomes and to incorporate performance and accountability measures to ensure these outcomes are achieved. The failure to incorporate such measures is a common drawback of federal transportation programs. Finally, it will be incumbent upon the federal government to develop the guidelines, methods, and analytical tools needed to produce credible and reliable ridership, cost, and public benefit forecasts. Without such guidelines, methods, and tools, reliable determinations of economic viability will continue to be the exception rather than the norm, and the efficiency and effectiveness of any federal assistance to high speed rail could be jeopardized. To ensure effective implementation of provisions of the PRIIA related to high speed rail and equitable consideration of high speed rail as a potential option to address demands on the nation's transportation system, we recommend that the Secretary of Transportation, in consultation with Congress and other stakeholders, take the following three actions:
- Develop a written strategic vision for high speed rail, particularly in relation to the role high speed rail systems can play in the national transportation system, clearly identifying potential objectives and goals for high speed rail systems and the roles federal and other stakeholders should play in achieving each objective and goal.
- Develop specific policies and procedures for reviewing and evaluating grant applications under the high speed rail provisions of the PRIIA that clearly identify the outcomes expected to be achieved through the award of grant funds and include performance and accountability measures.
- Develop guidance and methods for ensuring the reliability of ridership and other forecasts used to determine the viability of high speed rail projects and to support the need for federal grant assistance. The methods could include such things as independent, third-party reviews of applicable ridership and other forecasts; identifying and implementing ways to structure incentives to improve the precision of ridership and cost estimates received from grant applicants; or other methods that can ensure a high degree of reliability of such forecasts.

We provided copies of our draft report to DOT for comment prior to finalizing the report. DOT provided its comments in an e-mail message on March 10, 2009. DOT said that it generally agreed with the information presented and noted that with the passage of ARRA, its work on high speed rail has been considerably accelerated. Specifically, the act calls for FRA to submit, within an expedited time frame, a strategic plan to the Congress describing how FRA will use the $8 billion funding identified in the act to improve and deploy high speed passenger rail systems. DOT indicated that the strategic plan may include the Department's vision for developing high speed rail services, criteria for selecting projects, an evaluation process that will be used to measure effectiveness, and a discussion of the relationship between the ARRA grant programs and the recently enacted PRIIA. DOT said it is also working to comply with statutory requirements to issue interim guidance in June 2009, describing grant terms, conditions, and procedures. DOT told us that in order to provide information to the public and potential grantees as expeditiously as possible, it has posted a set of questions and answers relating to ARRA on its Web site. These questions and answers provide potential program applicants with some preliminary but specific information on what to expect in terms of coverage, limitations, and potential selection criteria. 
Finally, DOT noted that the draft report does not include information relating to the administration’s new federal commitment to high speed rail. Specifically, as described in the President’s proposed fiscal year 2010 budget, the administration has proposed a 5-year $5 billion high speed rail state grant program. DOT indicated that this program is intended to build on the $8 billion included in ARRA for high speed rail. The Department said the President’s proposal marks a new federal commitment to practical and environmentally sustainable transportation. DOT did not take a position on our recommendations. We agree that the recently enacted ARRA will likely accelerate activity related to the consideration and development of high speed rail in the United States and will place a new emphasis on the federal role in such development. We also agree that the President’s proposed fiscal year 2010 budget, if enacted, could further increase the emphasis on high speed rail and its potential development. As discussed in the report, high speed rail systems can offer a number of benefits. However, these systems are very expensive, can take a long time to develop, and face numerous financial and other challenges to bring to fruition. Given the renewed interest in high speed intercity passenger rail and its development and the substantial resources that might be made available, it is even more important that potential challenges are addressed and a clear federal role be established. 
This includes developing a strategic vision for high speed rail that includes consideration of how high speed rail fits into the nation's transportation system; ensuring that the review and evaluation of grant applications under PRIIA, ARRA, and other programs clearly identify the outcomes to be achieved and incorporate into grant documents appropriate performance and accountability measures to ensure these outcomes are achieved; and developing guidance and methods that increase the reliability of ridership and other forecasts used to determine the economic viability of high speed rail projects. Each of these actions is essential for ensuring that federal expenditures on high speed rail are efficient, effective, and focused on maximizing the return on the investment. We also received comments from Amtrak in an e-mail message dated March 3, 2009. Amtrak said it generally agreed with our conclusions. Amtrak did not take a position on our recommendations. Amtrak also provided technical corrections and comments, which we incorporated where appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Transportation; the Administrator of the Federal Railroad Administration; and the Director of the Office of Management and Budget. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. 
To better understand the potential viability of high speed rail service in the United States, we reviewed (1) the factors affecting the economic viability of high speed rail projects—that is, whether a project's total social benefits offset or justify the total social costs—and difficulties in determining the economic viability of proposed projects; (2) the challenges that U.S. project sponsors experience in developing and financing high speed rail projects; and (3) the federal role in the potential development of high speed rail systems. For the purposes of this report, we used the Federal Railroad Administration's (FRA) definition of high speed ground transportation, which is "service that is time-competitive with air and/or automobile for trips in corridors of roughly 100 to 500 miles in length," as opposed to a specific top speed threshold. As a result, we included in our review a wide range of projects, including "incremental" projects that are designed to increase the speed (generally above 79 miles per hour, up to 150 miles per hour) or reliability of existing rail service on existing track usually shared with freight or other passenger trains, and "new" high speed rail projects (above 150 miles per hour and, in some cases, above 200 miles per hour) designed to operate on new tracks or guideway on dedicated right-of-way not shared with other rail services. Our review was technology neutral, meaning that we did not analyze or consider the technical feasibility of diesel, electrified, or magnetic levitation trains, but only considered the service and performance aspects of the different technologies in the project proposals we reviewed. The scope of our work did not include an assessment of commuter rail or transit service where the primary purpose is to travel between a suburb and a city center or within a metropolitan area. 
However, the presence of these transportation modes as intermodal connections to high speed rail service was considered in identifying characteristics significant to how proposed high speed rail service is analyzed and evaluated. Furthermore, it was not the intent of this study to identify specific routes or corridors that are viable. Rather, this study identifies characteristics of corridors and service and other factors that contribute to a proposed project’s benefits and costs and the challenges in developing and financing such projects. To address our objectives, we conducted structured interviews with officials for 5 projects that currently exceed Amtrak’s predominant top speed of 79 miles per hour, and with project sponsors for 11 different high speed rail proposals in the United States. The criteria used to select which existing or proposed domestic projects to review were twofold:
1. The project’s planned or existing high speed rail service must include operating at a top speed greater than 79 miles per hour (generally the top speed for intercity passenger trains).
2. The project’s planned service must be supported by a completed environmental review (or equivalent project review) that would make the project eligible for federal funding, or the project sponsor needed to be actively pursuing the completion of such a review.
To identify projects for inclusion in our study, we reviewed a recent survey of high speed rail projects in 64 corridors across the United States to identify potential projects. The survey identified 16 projects that met our criteria. To verify this information, we contacted project sponsors, or another project affiliate, for each of these 16 projects. We also contacted project sponsors (or another project affiliate) for the remaining projects in the survey to verify that they had not advanced in their planning process since issuance of the survey report, such that they would now meet our criteria.
As a result of this verification, one additional project was included in our study, and two projects were dropped since they had either not progressed to the environmental review phase or were not being pursued for high speed rail. We also added another project (Los Angeles, California, to San Diego, California) that met our criteria on the basis of discussions with Amtrak. The latter project is separate from the California High Speed Rail Authority’s statewide high speed rail initiative, which also plans to serve San Diego from Los Angeles. All 5 existing projects were incremental projects, and of the 11 proposed projects included in our review, 6 were incremental improvements to existing rail service in a corridor, and the remaining 5 projects would implement service on new high speed track or guideways using dedicated right-of-way. Three of the 5 dedicated right-of-way projects were considering magnetic levitation technology at the time of our study. To collect information about the high speed rail projects in development, we conducted structured interviews with each project sponsor. The interviews were structured to identify such things as (1) the important characteristics and factors that affect a project’s viability; (2) the most important challenges faced by project sponsors in developing the project; and (3) the roles of various federal, state, local, and private sector entities in the development of the project. We pretested the structured interview instrument and made changes based on the pretest. These changes included additional questions about project development and background and stakeholders involved with the project. In addition, we requested and reviewed any available data on ridership forecasts and evaluations, project cost estimates and evaluations, costs to construct and maintain any existing high speed rail service as well as any environmental reviews, transportation plans, and other studies associated with the projects. 
Information about the projects was shared with project sponsors to ensure its accuracy. We also conducted case studies of international high speed rail systems in France, Japan, and Spain. In selecting these three countries, we considered a number of factors, including location, how long high speed rail has been in service, and the availability of data and other information. At the time of our visit, France and Spain had the most kilometers of high speed rail lines in Europe. Japan similarly had extensive high speed rail lines and was one of the first countries to implement high speed rail service. We conducted interviews in these countries with relevant government officials, including transportation bureaus and embassy officials; high speed rail infrastructure owners and service operators; and other stakeholders, including academic professors and domestic airline carriers or their trade associations. We requested and reviewed any available data on ridership forecasts and evaluations, project cost estimates and evaluations, as well as the costs to construct and maintain high speed rail service in these countries. We also reviewed relevant literature and studies on high speed rail systems in these and other countries. To the extent available, we reviewed relevant laws, directives, and guidance related to high speed rail systems in France, Japan, and Spain, and the European Union. The information presented in this report on international high speed rail systems, however, cannot be generalized beyond these three countries. To further identify the challenges encountered by previous high speed rail projects in the United States, we conducted a case study analysis of two terminated domestic high speed rail projects: the Florida Overland Express (FOX) and the Texas TGV.
To conduct the case study analyses, we interviewed stakeholders affiliated with the projects and reviewed documents, such as legislation, ridership studies, and other research materials related to the projects. To further address our objectives, we obtained and analyzed information from a variety of other sources, including reports and documentation from FRA, the Department of Transportation (DOT), Amtrak, and the Surface Transportation Board; prior GAO work; and other evaluations and studies on transportation infrastructure projects and high speed rail service. In addition to our structured interviews and international case studies, we conducted over 90 interviews covering a wide range of stakeholders and interested parties, including officials at FRA, DOT, Amtrak, the Surface Transportation Board, state and local government agencies and organizations, academics, consultants involved in high speed rail ridership forecasting and planning, representatives from private equity firms that invest in transportation infrastructure, and engineers involved in developing various rail technologies. To review how characteristics of corridors and proposed service identified in our structured interviews and international case studies compare with other corridors in the United States and internationally, we obtained and analyzed data on corridor and service characteristics from numerous sources, including DOT’s Bureau of Transportation Statistics, the Census Bureau, and other domestic and international academic studies and government reports. We used standard tests and methodologies to ensure the reliability of the data collected. This included reviewing the data for abnormalities, omissions, and obvious errors and corroborating information obtained to the extent possible. These data are not intended to support definitive conclusions on viability, but rather to allow us to make reasonable comparisons using the best available data.
For example, variations exist in how data sources report population numbers based on differences in the geographical definitions of cities, metropolitan areas, and other areas. In trying to maintain consistency, we attempted to use the same population data source for international corridors, but this was not always possible. To further assess the roles and relevant interests of national and state government agencies and officials, and the private sector in planning, developing, and operating high speed rail projects, we reviewed applicable federal laws and regulations. This included analyzing selected high speed rail legislation from 1965 to 2008, including the Passenger Rail Investment and Improvement Act of 2008. This analysis included a review of the high speed rail provisions contained in the act, the role of the Secretary of Transportation in relation to these provisions, and application procedures for federal high speed rail grants. We conducted this performance audit from December 2007 to March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Projects and improvements associated with Amtrak’s 456-mile Northeast Corridor began in the 1970s. This included the Northeast Corridor Improvement Program and the Northeast High Speed Rail Improvement Program. Improvements included electrifying the line from New Haven, Connecticut, to Boston, Massachusetts, enhancing signaling systems, and acquiring new high speed rail trainsets called Acela Express. The average speed from Washington, D.C., to New York City, New York, is 82 miles per hour, and the top speed is 135 miles per hour.
The average speed from New York City to Boston is 68 miles per hour, and the top speed is 150 miles per hour. Cost: $3.8 billion (estimated since 1990). Funding: $3.8 billion (estimated since 1990). The project is open for passenger operations. Amtrak, in conjunction with the nine states along the corridor, is currently developing a master plan for the corridor that includes additional capital improvements.

Amtrak began operating on the Los Angeles to San Diego corridor in 1971. When Amtrak began operations, the passenger trains were already capable of maximum speeds of 90 miles per hour on segments between Santa Ana in Orange County and the Sorrento Valley because of an automatic track signaling system that was already in place. Average speed along the 130-mile corridor is approximately 55 miles per hour. Passenger rail operations are under way with top speeds of 90 miles per hour in certain segments; however, continuing capital improvements are occurring along the corridor to increase total average speed.

From 1977 to 1997, the New York State Department of Transportation made a series of incremental improvements to existing passenger rail service between New York City and Albany/Schenectady along the Empire Corridor, which stretches to Buffalo. Doing so has allowed passenger rail service to operate at a top speed of 110 miles per hour and an average speed of between 80 and 90 miles per hour along the 158-mile corridor. Cost: $97.2 million (actual). Funding: $97.2 million (100 percent from state funds). Intercity passenger rail operations are currently under way with a top speed of 110 miles per hour. The New York State Department of Transportation plans to make $22 million in additional incremental corridor investments and also anticipates new federal funding to make further improvements.
The Keystone Corridor Improvement Program consisted of making incremental improvements (e.g., track work, bridge repairs, communication and signaling improvements, and enhanced power generation) along the Harrisburg to Philadelphia corridor to allow for speeds of up to 110 miles per hour. Cost: $145.5 million (actual). Funding: $145.5 million (50 percent from Amtrak, 40 percent from FTA, and 10 percent from state funds). Intercity passenger operations are currently under way with a top speed of 110 miles per hour. There are currently discussions under way to plan for a second phase of improvements for the corridor.

This project involves implementation of a positive train control system on 55 miles of Amtrak-owned right-of-way (Kalamazoo, Michigan, to about the Indiana state line) along the Chicago, Illinois, to Detroit, Michigan, corridor. Improvements to signaling and communication systems will allow Amtrak to operate up to a top speed of 110 miles per hour along the 55-mile stretch. Cost: $39 million (actual). Funding: $39 million (49 percent from FRA, 27 percent from the state, and 24 percent from Amtrak and General Electric). From Kalamazoo, Michigan, to Niles, Michigan, trains operate at 95 miles per hour. From Niles, Michigan, to a point 20 miles west, positive train control equipment is installed but is currently in the process of getting approval from FRA for its use. Amtrak is currently testing a new radio system with different frequencies. When testing is complete and the radio system is installed, passenger rail operations would be able to operate at 110 miles per hour along the 55-mile test bed.

Appendix III: Description of Current U.S. High Speed Rail Projects

The project will connect Atlanta, Georgia, to Chattanooga, Tennessee, along a combination of new right-of-way, rail right-of-way, and highway right-of-way with a new high speed rail system. The length is approximately 110 miles between the two cities.
The envisioned system is expected to operate at a top speed of 200 miles per hour and an average speed of 180 miles per hour. Cost: to be determined. Project sponsors are considering both magnetic levitation and electrified steel-wheel on steel-rail technology. The preferred technology will be recommended as a result of the program-level environmental impact statement. Funding for the feasibility study, which was conducted by the Atlanta Regional Commission, was provided through the Transportation Equity Act for the 21st Century (TEA-21). Additional funding was authorized by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), to study various transportation technologies, as well as through the SAFETEA-LU Technical Corrections Act of 2008. Georgia Department of Transportation officials noted they were halfway through the 36-month program-level environmental impact statement. The department plans to have a record of decision on the program-level environmental impact statement by 2010.

The Baltimore, Maryland, to Washington, D.C., project is a magnetic levitation project that plans to connect the two cities, with a planned stop at Baltimore-Washington International Airport. The length is 40 miles between the two cities. The system is expected to operate at a top speed of 250 miles per hour and an average speed of 125 miles per hour. Cost: $5.15 billion (projected, 2007). The project completed a preliminary feasibility study in 1994 in response to the Maglev Prototype Development Program created by the Intermodal Surface Transportation Efficiency Act of 1991. In 1998, the project was one of the seven projects selected and funded for study by the FRA as part of the Maglev Deployment Program. In 2001, FRA selected this project to receive funds for a draft environmental impact statement as part of the TEA-21 Maglev Deployment Program. In 2003, a draft environmental impact statement was completed and accepted by FRA.
In October 2007, a draft of the final environmental impact statement was submitted to FRA. FRA has requested additional information as part of its review of this statement. Project sponsors are pursuing funding under the SAFETEA-LU Technical Corrections Act of 2008 to complete the final environmental impact statement.

This project is planned to connect Las Vegas, Nevada, to Anaheim, California, with stops in Ontario, Victorville, and Barstow (California) and Primm (Nevada) with a magnetic levitation system. The length is 269 miles between Anaheim, California, and Las Vegas, Nevada. The initial segment to be developed is 40 miles from Las Vegas to Primm, Nevada. The system is expected to operate at a top speed of 311 miles per hour and an average speed of between 150 and 200 miles per hour. Federal agency: FRA. Project sponsor: California Nevada Super Speed Train Commission (created by the California and Nevada legislatures). Cost: $12 billion (projected, 2005). The commission recently received $45 million from the SAFETEA-LU Technical Corrections Act of 2008, for which the commission will need to provide 20 percent in matching funds. Work on the environmental impact statement is continuing, as is design/engineering work and preparation of cost estimates. Project sponsors expect to issue a fixed-price contract to construct this project. The commission continues to have legislative authority in Nevada, but its authorizing legislation in California was allowed to lapse. However, according to the commission, the project enjoys strong support in California, and is supported by the California Department of Transportation in preparation of the environmental impact statement.
The Desert Xpress is a high speed rail project intended to connect Las Vegas, Nevada, with Southern California through a station in Victorville, California, a city that is less than 50 miles east of Palmdale, where an intermodal station is planned on the California High Speed Rail system; 35 miles northeast of Ontario International Airport; and 80 miles northeast of downtown Los Angeles. The system is planned to operate on a new dedicated right-of-way. The distance between Victorville, California, and Las Vegas, Nevada, is approximately 183 miles. Project sponsors expect to operate at a top speed of 150 miles per hour and an average speed of 125 miles per hour. Project sponsors also expect to construct the project using existing highway right-of-way and public lands owned by the Bureau of Land Management. Desert Xpress is being implemented by a private sector entity without public funding. Cost: $3.5 billion (projected, 2003). No public funding has been expended; all funding to date has come from Desert Xpress Enterprises. The draft environmental impact statement is currently being developed and is scheduled for publication in early 2009. Desert Xpress officials expect the final environmental impact statement to be completed in July 2009, with a final record of decision issued by the federal government shortly thereafter.

The California High Speed Rail Authority is pursuing a statewide high speed rail system in California. Phase 1 of the system will run from Anaheim, California, to Los Angeles, California, then through California’s Central Valley, and through the Pacheco Pass to the San Francisco Bay Area. Phase 2 will include extensions to Sacramento, California, and San Diego, California. Phase 1 of the system is 520 miles, and the authority expects the service to operate at a top speed of 220 miles per hour. Authority officials did not provide an average speed. Technology: a predominantly dedicated right-of-way, electrified steel-wheel on steel-rail system.
According to the authority, about 10 percent of the line will be shared with other rail services. Federal agency: FRA. Project sponsor: California High Speed Rail Authority (created by the California legislature). Cost: $32.8 billion to $33.6 billion for Phase 1 of the project (projected, 2008). Funding: $9.95 billion in state bond funding (in addition to state support provided for administration of the California High Speed Rail Authority). As of July 2008, all program-level environmental review work has been completed. The authority is now undertaking the project-level review and approval process. In addition, on November 4, 2008, California voters approved a ballot initiative that allows the state to issue $9.95 billion in bonds for transit and other projects, $9.0 billion of which will go toward development of the statewide high speed rail system. Authority officials said they plan to seek additional funding from the federal government and private sector, as well as from local governments, for the construction of the system.

The Virginia Department of Rail and Public Transportation is pursuing improved passenger rail service between Richmond, Virginia, and the Hampton Roads area of Virginia (Norfolk, Newport News, and other cities). This service will ultimately connect to the Northeast Corridor in conjunction with development of the Southeast High Speed Rail Corridor. This project will use existing right-of-way. Depending on the preferred alignment, the length of the corridor could be 74 miles or 93 miles, with a planned top speed of 90 miles per hour. On December 14, 1995, FRA administratively extended the Southeast High Speed Rail Corridor from Richmond, Virginia, to Hampton Roads, Virginia. Portions of the draft environmental impact statement were sent to FRA for review in spring 2008. The project sponsor is currently awaiting FRA’s response.

The federally designated Pacific Northwest Rail Corridor stretches from Vancouver, British Columbia, Canada, to Eugene, Oregon, a distance of 466 miles.
The Washington State Department of Transportation is pursuing incremental improvements to intercity passenger rail service between Portland, Oregon, and Vancouver, British Columbia, Canada, a distance of 341 miles. Improvements include upgrading grade crossings, improving tracks and facilities, enhancing the signaling system, purchasing passenger train equipment, and improving stations, which would allow the top speed to be 110 miles per hour. Technology: nonelectric locomotives on existing freight railroad right-of-way, with minor alignment changes as needed. Cost: $6.5 billion to $6.8 billion (projected, 2006). Current intercity passenger rail operating speeds are at or below 79 miles per hour and, according to the department, increases in speed will require a new signaling system along the corridor, although improvements in frequencies and travel times have occurred because of capital investments in the corridor.

This project is an extension of New Jersey Transit service to Scranton, Pennsylvania, via existing railroad right-of-way. The corridor is 133 miles, and work will include refurbishing 28 miles of abandoned railroad right-of-way. The top speed is expected to be 110 miles per hour, with an average speed of just under 80 miles per hour. Cost: $551 million (projected). According to project sponsors, $21 million in federal funding has been received, primarily through earmarks in legislation. The project received a Finding of No Significant Impact from the FTA and, according to one of the project sponsors, is ready to begin construction upon availability of funding. The project sponsors are currently working on a bistate funding agreement to allocate Pennsylvania’s and New Jersey’s shares of funding.

This project includes making track, station, bridge, and culvert improvements along the Chicago, Illinois, to Minneapolis/St. Paul, Minnesota, corridor, with stops in Milwaukee and Madison, Wisconsin.
Enhanced passenger rail service, along existing railroad right-of-way, is being pursued for a top speed of 110 miles per hour and average speeds of between 66 and 70 miles per hour. Cost: $1.5 billion (projected, 2002). The environmental review for the Madison to Milwaukee segment is complete, and FRA has issued a Finding of No Significant Impact. Engineering design work is complete for the Madison to Milwaukee segment. In addition, updates to ridership and cost estimates were recently completed for the full project. A grant of $5 million from FRA’s Capital Assistance to States—Intercity Passenger Rail Service Program will be used to complete track work between Milwaukee and the Illinois state line. The Wisconsin Department of Transportation has also applied for federal funds to improve highway-rail grade crossings between Madison and Watertown.

The Illinois Department of Transportation said numerous incremental improvements have been made along this corridor to allow for increased speeds. This includes track work and grade crossings on 118 miles of track between Mazonia, Illinois, and Springfield, Illinois, completed in 2004. In addition, the department is currently pursuing three phases of improvements: a new cab signaling system (similar to the signaling system used by the private freight carrier that owns this corridor); track work that has been completed in Springfield, Illinois; and a centralized traffic control system for the Joliet, Illinois, to Mazonia, Illinois, segment of the corridor. Cost: $125 million (actual). According to project sponsors, $125 million in funding has been received to date (28 percent from FRA, 56 percent from the states (Illinois and Missouri), and 16 percent from private entities). A $1.55 million Capital Assistance to States—Intercity Passenger Rail Service Program grant was received that will be used to continue work on the project. Planned top speed is 110 miles per hour between Joliet, Illinois, and Mazonia, Illinois.
The Washington, D.C., to Charlotte, North Carolina, corridor, which is 468 miles in length, will connect to the Northeast Corridor. Virginia and North Carolina have established an interstate compact to pursue this project. The project will make incremental improvements to existing infrastructure, including track, route alignment, signaling systems, highway-rail grade crossings, stations, train equipment, and facilities. These improvements will allow a top speed of 110 miles per hour and an average speed of between 85 and 87 miles per hour. In 1992, FRA designated the corridor as a federal high speed rail corridor. Cost: $3.8 billion to $5.3 billion (projected for 2011 to 2016). Over $300 million in state and federal funds have been invested in the Washington to Charlotte portion of the corridor since 1999. The program-level environmental impact statement has been completed for this project. Project sponsors are currently in the process of preparing the project-level environmental impact statement for the Richmond, Virginia, to Raleigh, North Carolina, segment of the corridor. This statement has been in development since 2003 and is expected to be available for public review in the summer of 2010.

The FOX project proposed using an electrified high speed rail system similar to the French Train à Grande Vitesse (TGV) system, capable of operating at a maximum speed of 200 miles per hour. The preliminary cost estimates ranged from $6 billion to $8 billion, depending on the route chosen. In general, the FOX Consortium planned on the system costing about $6 billion (in 1997 dollars). The FOX project would have operated along a 320-mile-long dedicated right-of-way from Miami, Florida, to Tampa, Florida, via Orlando, Florida. The project was planned to serve seven stations: Miami International Airport, Fort Lauderdale, West Palm Beach, Orlando International Airport, Orlando Attractions, Lakeland, and Downtown Tampa.
In total, the FOX Consortium planned to raise $9.3 billion to finance the estimated $6.3 billion needed for construction. The additional $3 billion was intended to account for inflation and to pay for such things as interest on state and system infrastructure bonds during the construction period, the establishment of reserve funds required by bondholders, and the costs of issuing the bonds. According to the FOX Consortium, the following sources were expected to provide the $9.3 billion in funding:
State contributed equity – $256 million (3 percent)
FOX Consortium contributed equity – $349 million (4 percent)
Train equipment financing – $569 million (6 percent)
Interest earnings and balances – $588 million (6 percent)
Federal loan – $2.0 billion (22 percent)
State infrastructure bonds – $2.146 billion (23 percent)
System infrastructure bonds – $3.346 billion (36 percent)
KPMG Peat Marwick projected annual ridership of 8 million passengers by 2010. Systra projected ridership of 8.5 million by 2010. The consensus average of the two ridership studies was approximately 8.3 million passengers by 2010. Table 5 shows the timeline of events in the development of high speed rail in Florida.

The Texas TGV project proposed a new electrified, steel-wheel on steel-rail high speed rail system similar to the French TGV system. The cost estimate was $4 billion. The high speed rail system would have provided service to Dallas, Fort Worth, Dallas/Fort Worth Airport, Houston, Austin, and San Antonio. The initial service between Dallas/Fort Worth and Houston would have begun in 1998, and subsequent service from San Antonio and Austin to Dallas would have begun by 1999. Special or limited service would have been provided to Bryan/College Station and Waco if it were determined to be economically feasible. In addition, service from Houston to San Antonio would have been provided if it were determined to be economically feasible. We were not able to obtain a complete financing plan.
The 1993 security offering was for $200 million in notes, backed by a $225 million letter of credit from the Canadian Imperial Bank of Commerce and a $75 million counter-guarantee to be provided by Morrison Knudsen Corporation (one of the original project developers). The Texas High Speed Rail Authority Act prohibited use of public funds for constructing the system, and, as a result, all construction costs would have been privately financed. Based on the five route alternatives, ridership projections by 2015 ranged from 11.3 million to 18.0 million. Table 6 shows the timeline of events in the development of high speed rail in Texas.

France first developed high speed rail lines with the opening of the TGV Sud Est line from Paris to Lyon in 1981. Since then, France has constructed additional high speed rail lines connecting major cities in France, as well as connecting high speed rail lines to cities in Germany, Belgium, and the United Kingdom. The French railway system has undergone major reforms, most notably in 1997, with the creation of Réseau Ferré de France (RFF), France’s national intercity rail network infrastructure manager. This reform was undertaken to comply with European Union directives, which required the separation of passenger operations and infrastructure management. In addition, the ownership of the rail network, including the high speed rail network, was transferred from the national government to RFF. RFF is also responsible for capacity allocation, contracting, traffic management, and maintenance, although it subcontracts the traffic management and maintenance to the passenger rail operator, Société Nationale des Chemins de Fer Français (SNCF). The Ministry of Ecology, Energy, Sustainable Development, and Spatial Planning sets policy, enforces laws and regulations, and approves and finances projects.
Moving forward, France is pursuing a high speed rail plan on the basis of a recommendation from a national environmental conference (Le Grenelle Environnement), which called for investments in sustainable transportation modes. Specifically, it recommended building about 1,200 miles of additional high speed rail lines before 2020 and studying the viability of another approximately 1,500 miles of high speed rail lines.

Snapshot of the French High Speed Rail System
Date of initiation: 1981
Length of high speed rail system: 1,180 miles
Top commercial speed: 199 miles per hour
High speed rail ridership: Approximately 100 million (2007)

Prior to the creation of RFF in 1997, most of the funding for the construction of high speed rail lines came from the national government (through SNCF). Since then, funding for high speed rail construction has been derived from a variety of sources, including the national government, regional governments, RFF, SNCF, and the European Union. SNCF is the sole provider of domestic high speed rail operations in France. The Eurostar and Thalys TGV, of which SNCF is a shareholder, provide international high speed rail operations to locations in Belgium, the Netherlands, and the United Kingdom. According to European Union directives, international high speed rail lines must be opened for competition starting in 2010. Therefore, France will be required to allow private and public competitors to operate their trains over these lines. In terms of track ownership, RFF owns all intercity railway property in France. RFF is also responsible for allocating capacity for the high speed rail infrastructure and for the maintenance and management of traffic of the high speed rail system. However, these responsibilities have been subcontracted to SNCF. SNCF pays RFF infrastructure fees to use the high speed rail lines.
Japan was the first country in the world to develop high speed rail operations, which occurred in 1964 with the opening of the Shinkansen between Tokyo and Osaka. In addition, in 1970, the Nationwide Shinkansen Railway Development Act was established, which created a master plan for the expansion of high speed rail lines throughout Japan. After this, four high speed rail lines were constructed prior to the 1987 reform of the passenger rail industry in Japan. The 1987 reform broke the fully integrated state railway entity, Japan National Railways, into six private intercity passenger rail operators based on six distinct geographic regions, as well as a freight operator. Since then, three high speed rail lines have been built under the reformed structure, and the high speed rail system continues to expand on the basis of the high speed rail master plan.

Snapshot of the Japanese High Speed Rail System
Date of initiation: 1964
Length of high speed rail system: 1,360 miles
Top commercial speed: 188 miles per hour
High speed rail ridership: Approximately 300 million (fiscal year 2006)

Prior to the 1987 reform, the construction of high speed rail in Japan was funded through debt incurred by the national government and Japan National Railways. After the 1987 reform, the national government funds two-thirds of the construction cost, and local governments fund one-third of the construction cost under the Nationwide Shinkansen Railway Development Act. The national government funding is derived from the revenues from the sale of rail lines to private companies and the national public works budget. Private companies purchased the four constructed high speed rail lines from the national government in 1991, and in turn the companies must pay an annual fee to the national government for 60 years.
For high speed rail lines built after the 1987 reform, the companies pay a lease payment to the Japan Railway Construction, Transportation, and Technology Agency for the use of the high speed rail lines, on the basis of projected ridership. The national government does not provide operating subsidies for high speed rail passenger operations. Prior to the 1987 reform, Japan National Railways was a fully integrated state-owned entity that was the sole high speed passenger rail operator in Japan. After the 1987 reform, Japan National Railways was split into six private operators: three on the mainland (JR East, JR Central, and JR West) and three each serving an island (JR Hokkaido, JR Shikoku, and JR Kyushu). JR East, JR Central, JR West, and JR Kyushu operate high speed rail lines. JR East operates Shinkansen lines between Tokyo and Nagano, Tokyo and Niigata, and Tokyo and Hachinohe; JR Central operates the Shinkansen line between Tokyo and Osaka; JR West operates the Shinkansen line between Osaka and Fukuoka; and JR Kyushu operates the Shinkansen line between Kagoshima and Shin Yatsushiro. The three mainland operators are considered fully privatized entities. High speed rail lines built after the 1987 reform are constructed and owned by the Japan Railway Construction, Transportation, and Technology Agency, and are leased to the JR companies. As a result of the 1991 law, JR East purchased the high speed rail line from Tokyo to Niigata and the track from Tokyo to Morioka. JR Central purchased the high speed rail line from Tokyo to Osaka, and JR West purchased the high speed rail line from Osaka to Hakata.

Spain first developed high speed rail lines with the opening of the Madrid to Seville line in 1992. Since then, Spain has constructed additional high speed rail lines from Madrid to Barcelona and Madrid to Valladolid, in 2007 and 2008, respectively, and from Córdoba to Málaga, with extensions built off these main lines as well (i.e., to Toledo in 2005).
The construction of these lines was based on a national rail plan created in 1987 and national transportation plans created in 1993, 1997, and 2005. In 2005, Spain’s railway system was restructured in accordance with the European Union directive requiring the separation of passenger operations and infrastructure management. In accordance with these directives, Spain passed its own legislation, which split its state railway entity, Renfe, into two entities, Adif and Renfe-Operadora. Adif is responsible for infrastructure management and capacity allocation, and Renfe-Operadora is responsible for passenger operations. The Ministerio de Fomento (Ministry of Public Works) is responsible for setting policy, enforcing laws and regulations, and approving and financing projects. Spain’s most recent national transportation plan calls for $103.9 billion in investment for creating 5,592 miles of high speed rail lines.

Snapshot of the Spanish High Speed Rail System
Date of initiation: 1992
Length of high speed rail system: 981 miles
Top commercial speed: 186 miles per hour
High speed rail ridership: 9 million (2007)

Spanish transportation officials with whom we spoke noted that a majority of funding to construct the Madrid to Seville high speed rail line was provided by the national government. For the high speed rail lines built since then, construction has been funded by the national government, the European Union, and Adif. Going forward, funding for expansion of the existing high speed rail network is planned to come from the national government, local governments, Adif, and loans from the European Investment Bank. Funding for cross-border high speed rail lines is also planned to come from the European Union as part of the Trans-European Transport Network. Renfe-Operadora is the sole provider of high speed rail operations in Spain.
According to European Union directives, international high speed rail lines must be opened to competition starting in 2010. Therefore, Spain will be required to allow private and public competitors to operate their trains over these international lines. In terms of track ownership, Adif owns the current high speed rail lines as well as passenger rail stations, freight terminals, and the telecommunications network. In addition, Adif constructs and maintains high speed rail lines, allocates capacity to passenger rail operators, and manages traffic control operations and safety systems. Renfe-Operadora pays Adif infrastructure fees to use the high speed rail lines.

The benefits of a proposed project depend on the popularity of a new service, that is, high ridership. Thus, a critical factor in determining the net benefits, or viability, of a proposed project is its ridership forecasts. Ridership forecasts are generally conducted by modeling travel demand for the corridor in which the new service is being proposed. Travel demand modeling can be conducted at the macro level or the micro level, depending on the types of available data and the level of information needed from the results of the model. The use of travel demand models in the policy process could be conceived of in terms of the following three activities: data collection, model building, and estimation.

1. Data collection: Aggregate data refers to variables that summarize the characteristics of a group of individual units, such as an average, a total, or a median. Examples include per capita income or vehicle miles traveled. An aggregate model is founded on such data. In the case of travel demand, the analysis applies to those residing or doing business in a region. Data sources are typically official statistics routinely collected by public agencies, including administrative data.
An advantage of such data is that they are inexpensive for the secondary user and have been subjected to some degree of quality control by the originating agency. One disadvantage is that the results of any analysis do not necessarily apply to a specific transportation project. Another limitation is that the model is limited by the available data. Micro models can help inform specific policy changes, such as the option of adding high speed rail service to a transportation system. Micro data refers to individuals’ characteristics and behavior. These data are often gleaned from surveys of travelers or households. Micro data, however, are generally expensive to obtain, their collection may be limited due to privacy considerations, and their quality depends on the sophistication of the survey methodology. A danger in survey data, just as in political polling, is that the design or implementation of a survey could lead to biased results. Survey instruments can be scrutinized by third parties, but the process of data collection is less accessible to outside observers, especially after the fact. Typically, a survey, as well as ensuing analysis, will be commissioned by the public agency that is sponsoring a project, raising conflict-of-interest concerns. Surveys can provide ambiguous results for innocent reasons as well (e.g., such results may be due to differences in methodology).

2. Model building: Constructing a formal travel demand model generally entails a number of choices and professional judgment. For example, a modeler usually makes choices on the theory and assumptions upon which the model is based, the mathematical form of the model, and the variables to be included. Because models entail professional judgment, many models are sufficiently diverse (e.g., include differing assumptions) such that alternative models of the same problem can yield different results. Also, alternative theories of travel demand could imply different models with diverse findings.
Models with conflicting rationales can both claim legitimate empirical support. In predicting future demand for an existing or new transportation facility, two types of data are typically involved: historic and prospective. A model is often initially developed using historic data. The effects and implied outcomes of the model are then compared with actual experience to test the structure of the model (e.g., the theory and assumptions on which the model is based). Details of the model may be adjusted to improve the results—that is, to make the modeled effects more closely match actual experience. Once a model has provided satisfactory results, it may be deployed with data on projected future conditions. Again, forecast modelers may adjust and readjust the structure of the model. The use of statistical methods in testing models is usually a trial-and-error process; thus, rarely is the first result the end of the study.

3. Estimation: There are usually multiple criteria by which to analyze or interpret the results of a model, and the analyst enjoys considerable discretion in determining the direction of the analysis. In addition, the foundation of the analysis is survey data, and the data collected could yield results dramatically at variance with theory, expected empirical impacts, and past experience. For these reasons, the nature of the data and the decisions on how to handle them may enable the analyst to steer the result in the analyst’s preferred direction. For an external, disinterested reviewer, the evolution of such decisions is very difficult to trace. Because circulation of data and models for outside review may be restricted by proprietary considerations and the population of private sector organizations equipped to conduct large-scale projects may be sufficiently limited, evaluation by independent peer reviewers may be difficult.

In the intercity context, the standard framework for estimating travel demand has been the four-step model.
The four steps are as follows:

1. Trip generation: This step refers to the total number of trips, based on the idea of “productions” (households are the most important source of production) and “attractions” (places of employment or retail establishments are obvious attractors). Trips can have the purpose of moving people or freight, either within a region, to or from a region, or through a region. The main purposes for persons to travel include commuting, business travel, and leisure travel. Thus, household and business patterns of commuting and shopping are the most stable source of information used in modeling, while a more variable source of information is trips aimed at recreation and other more episodic decisions. Model inputs (or variables) used to explain trip productions include trip purpose (e.g., commuting and home-to-school), household size, auto ownership, and income. Trip attractions are chiefly workplaces and retail outlets. These data can be obtained through records, such as ticket sales, and supplemented by or derived exclusively from surveys.

2. Trip distribution: This step pertains to trips in terms of connected origins and destinations. The standard approach to estimating trip distribution is what is known as a “gravity model.” Gravity models have also been used in models of trade and migration. In the context of this discussion, trips from point A to point B are positively affected by measures of mutual attraction (i.e., “productions” at point A and “attractions” at point B). The analogy is to Newton’s law of the gravitational force between two bodies: that it increases with the mass of each, and decreases with the distance between them. Trips are negatively affected by some measure of “impedance” or friction affecting the desirability of a trip between the two points, such as distance, travel time, cost of travel, or some combination of such factors.
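The gravity relationship just described can be sketched numerically. The sketch below distributes each origin zone’s productions across destinations in proportion to their attractions and in inverse proportion to a power of distance; the zones, numbers, and impedance exponent are illustrative assumptions, not values from any actual model discussed in this report.

```python
# Illustrative gravity model for trip distribution (all numbers hypothetical).
# Trips from i to j are proportional to attractions[j] / distance[i, j] ** beta.

productions = {"A": 1000, "B": 500}    # trips produced in each origin zone
attractions = {"X": 800, "Y": 200}     # relative pull of each destination zone
distance = {("A", "X"): 10, ("A", "Y"): 20,
            ("B", "X"): 15, ("B", "Y"): 5}
beta = 2.0                             # impedance exponent (assumed)

def gravity_trips(origin):
    """Distribute one origin zone's productions across destinations."""
    weights = {d: attractions[d] / distance[(origin, d)] ** beta
               for d in attractions}
    total = sum(weights.values())
    return {d: productions[origin] * w / total for d, w in weights.items()}

for o in productions:
    print(o, gravity_trips(o))
```

Because the weights are normalized, each zone’s productions are conserved; raising `beta` makes distant destinations less attractive, which is the “friction” idea in the text.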
The inputs to a gravity model could be the number of trips originating and ending at a given number of zones. A problem in distribution is the feedback implied by possible congestion or crowding of a transit facility. The more people decide to move from point A to point B, the greater the impedance factor could be, depending on how the factor is represented. This in turn could influence the number of trips. Allowing for feedback requires additional complexity in a trip distribution model, though more complex models can be implemented. An alternative approach to the gravity model—where time series data are available—is a model that combines steps one and two. In such a model, the number of trips between a given origin and destination is explained by population levels at each end, travelers’ incomes, and the level of service available for the mode (e.g., rail and automobile) in question. The apparent simplicity of such an approach may obscure the difficulty of implementing such a model for the full gamut of trip purposes, in each mode, for each origin-destination pair.

3. Mode choice: This step pertains to the decision on how to travel, such as driving alone, carpooling, or taking some type of public transportation. The probability of choosing among modes is modeled as a function of the characteristics of individuals, trip purpose, and the relative costs of alternative modes, among other possible factors. The estimated probability for a population is the share estimated for a given transportation mode. Obvious factors in the choice of a travel mode include the relative costs, travel time, convenience, and comfort of the travel alternatives in question. The choice of a travel mode interacts with personal decisions on whether to own an automobile, and, if so, how many, and where to reside. This chicken-and-egg interaction complicates the analysis of causality in mode choice.
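One standard functional form for the mode-choice probabilities just described is the multinomial logit model; the report does not name a specific functional form, so the choice of logit and the utility coefficients below are assumptions for illustration only.

```python
import math

# Illustrative multinomial logit mode choice. Each mode's utility falls with
# its cost and travel time; coefficients are made-up, not estimated values.
def mode_shares(modes):
    """modes: {name: (cost_dollars, time_hours)} -> choice probabilities."""
    beta_cost, beta_time = -0.02, -0.8   # assumed sensitivity to cost and time
    utility = {m: beta_cost * c + beta_time * t for m, (c, t) in modes.items()}
    denom = sum(math.exp(u) for u in utility.values())
    return {m: math.exp(u) / denom for m, u in utility.items()}

# Hypothetical intercity alternatives: (one-way cost, door-to-door hours).
shares = mode_shares({"auto": (60, 4.0), "rail": (90, 2.0), "air": (120, 1.5)})
print(shares)
```

The probabilities sum to one, and the estimated share for a population is simply these probabilities aggregated over travelers, which is the “mode split” the text refers to.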
Mode choice models are founded on microeconomic consumer theory that depends on a bevy of controversial, technical economic assumptions about human behavior. In general, the theory assumes a high degree of rational and consistent behavior on the part of individuals, including foresight, self-discipline, an aversion to risk, and the capacity to process information. Over the past two decades, a growing literature has developed providing empirical evidence against such notions of rationality. The purpose of a mode split analysis is to predict the shares of trips over existing and prospective modes. In principle, the factors that distinguish choices in Europe from those in the United States would be accounted for in the model. For example, if motor fuel in the United States, factoring in the relevant taxes, is cheaper than in Europe, the impact of differences in the costs of trips under different modes would be reflected in the overall explanation of the extent to which travelers might choose high speed rail over automobile and air. A good mode split model will indicate the strengths of the assorted factors, including the preference for one transportation mode over another, assuming all other factors are equal. Government policies that influence the choice, such as the impact of motor fuel taxes on travel costs, can then be abstracted from to assess underlying preferences.

4. Route assignment: The final step of travel demand modeling is to determine the distribution of trips between two given points for all modes over the possible routes between the points. Assuming travelers prefer the route that takes the least time (given decisions about destination and mode), a regional setting with many zones and a multitude of paths between a multitude of points presents a mathematical programming problem of considerable complexity.
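In its simplest, congestion-free form, the assignment problem reduces to a shortest-path calculation: every trip between an origin and destination is assigned to the minimum-time route (so-called all-or-nothing assignment). The network and travel times below are hypothetical.

```python
import heapq

# Hypothetical network: directed links with travel times in minutes.
links = {"A": {"B": 10, "C": 15}, "B": {"D": 12}, "C": {"D": 5}, "D": {}}

def shortest_time(network, origin, dest):
    """Dijkstra's algorithm: minimum travel time from origin to dest."""
    dist = {origin: 0}
    heap = [(0, origin)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in network[node].items():
            if t + cost < dist.get(nxt, float("inf")):
                dist[nxt] = t + cost
                heapq.heappush(heap, (t + cost, nxt))
    return float("inf")

# All-or-nothing assignment: every A-to-D trip takes the faster path via C.
print(shortest_time(links, "A", "D"))  # 20
```

Real networks involve many zones and link times that rise with assigned volume; this sketch omits that congestion feedback.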
Even so, such a problem glosses over the extent of congestion and resulting changes in travel time to which particular routes can be subjected. Reckoning with the associated feedback—travelers on congested routes choose alternatives—adds complexity to the exercise. When travel times are minimized on all routes and no traveler has an incentive to choose yet another alternative, the system is said to be in equilibrium. Uncovering such an equilibrium is a goal of route assignment modeling. Pressure on particular route segments provides information to the policymaker on the possible expansion of the network or the use of tolls to reduce congestion.

Three separate high speed rail proposals connecting the Los Angeles, California, and Las Vegas, Nevada, metropolitan areas illustrate the ridership and cost trade-offs that are associated with selecting, among other things, a particular route or train technology. The three options being explored include an incremental improvement to an existing conventional rail line, a high speed electrified (or diesel) steel-wheel on steel-rail line on dedicated track (project sponsor – Desert Xpress), and a magnetic levitation (maglev) proposal on dedicated guideway (project sponsor – California-Nevada Super Speed Train Commission). (See fig. 7.) One selection of route or train technology may maximize ridership and increase construction costs, while another option may draw lower ridership but at a substantially lower cost. Las Vegas is one of the most visited cities in the United States, and, according to project proponents, the mostly flat and desert terrain between Los Angeles and Las Vegas makes high speed rail development relatively straightforward, although some portions of the corridor are mountainous and have steep grades.
Project sponsors for each option indicated that a transportation need exists between the two regions, due to capacity constraints on existing transportation modes, significant growth in population and employment, and projections for future growth in the long term. One-third of all visitors to Las Vegas are from California, and more than 10 million visitors are estimated to come from the Southern California area. This travel is estimated to grow substantially by 2030, although the Las Vegas economy has been hit particularly hard by the recent economic crisis, as reflected in the recent decreases in visitor volume. However, according to one project sponsor, travel from Southern California to Las Vegas has not been as severely affected as visitation from elsewhere, as reflected by traffic counts on Interstate 15 (I-15) at the Nevada state line, which show only about a 5 percent reduction in automobile traffic. High speed rail stakeholders with whom we spoke said ridership on any high speed rail line will be affected by the location of the rail stations in relation to where potential riders live, for all stations along the line but especially at the ends of the line. Desert Xpress will most likely forgo some ridership by terminating service outside of the Los Angeles area (in Victorville). Because riders must first drive their personal vehicles to Victorville—typically the most congested portion of the automobile trip between the Los Angeles area and Las Vegas—and then board a train, stakeholders have expressed concern regarding the level of risk related to the ridership estimates. Similarly, the maglev project is designed to terminate in Anaheim, which may also result in fewer riders than connecting directly to the more populous Los Angeles area, and similar concerns over risks associated with overly optimistic ridership estimates have been expressed.
The conventional rail proposal, while connecting directly into downtown Los Angeles, is plagued by slow speeds and travel times that are less competitive with automobile or air travel. As such, the conventional rail proposal is likely to attract far fewer riders than the other proposed services. The Regional Transportation Commission of Southern Nevada, the metropolitan planning organization for Southern Nevada, which encompasses Las Vegas, has been focusing on reestablishing conventional rail passenger service between Los Angeles and Las Vegas. Amtrak’s Desert Wind service was discontinued in 1997 as part of a broader restructuring of intercity passenger rail service that included the discontinuation, truncation, or restructuring of service on a number of Amtrak’s routes. The conventional rail option would make incremental improvements to existing rail track (using diesel equipment) and operate in a shared-use environment with commuter and freight trains, and, as such, would require negotiations with the private freight railroads that own the tracks. With the incremental improvements, train speeds would be increased to up to 90 miles per hour. The line would most likely begin in Los Angeles and terminate in Las Vegas—a total length of over 300 miles and an estimated travel time of over 5 hours. Prior passenger rail service on Amtrak’s Desert Wind took approximately 7 hours and 15 minutes between Los Angeles and Las Vegas. The conventional rail option is projected to draw approximately 300,000 riders per year, and the estimated construction costs to implement these upgrades would be between $1.1 billion and $3.5 billion, which would be less than either of the following two options (see table 7 for a comparison of trip times, riders, and costs for all three proposals).
The Desert Xpress option would operate on dedicated right-of-way (all new tracks, not shared with other rail service, and with no grade crossings), using steel-wheel on steel-rail electrified (or diesel) equipment, with maximum speeds of up to 150 miles per hour, between Victorville and Las Vegas—a distance of a little less than 200 miles. Travel time between the two cities would be about 84 minutes. Victorville, California—located in San Bernardino County—is the first population center beyond the Cajon Pass from the Los Angeles basin. Traffic from the Los Angeles area funnels onto I-15 south of Victorville. Passengers from the Los Angeles area would need to drive to Victorville to catch the train. According to project sponsors, Victorville is generally within ½ to 1½ hours for many of the more than 20 million residents of the four-county area (Los Angeles, San Bernardino, Riverside, and Orange). However, according to transportation officials, this segment of the trip can be significantly delayed depending on traffic conditions, in some cases resulting in travel times to Victorville of up to 3 hours. Therefore, the overall envisioned trip time for a traveler using the Desert Xpress is expected to be between 2 and 3 hours, with the potential to exceed 4 hours depending on traffic conditions between Los Angeles and Victorville. According to ridership forecasts prepared for Desert Xpress and reviewed by a third-party contractor, the service is expected to attract up to 16.2 million riders per year by 2030 (8.1 million round trips), and Desert Xpress estimates the total project to cost approximately $3.5 billion.
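The door-to-door trip times cited above can be checked with simple arithmetic, combining the 84-minute train segment with the drive-time ranges reported by project sponsors and transportation officials.

```python
# Desert Xpress door-to-door time from the Los Angeles area, in hours,
# using the figures cited in the report text.
train_time = 84 / 60        # Victorville to Las Vegas: 84 minutes
drive_typical = (0.5, 1.5)  # drive to Victorville, per project sponsors
drive_congested = 3.0       # worst case, per transportation officials

low, high = (d + train_time for d in drive_typical)
worst = drive_congested + train_time
print(f"typical: {low:.1f}-{high:.1f} hours, worst case: {worst:.1f} hours")
# -> typical: 1.9-2.9 hours, worst case: 4.4 hours
```

This reproduces the report’s overall envisioned trip time of roughly 2 to 3 hours, with heavy traffic pushing the total past 4 hours.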
Desert Xpress officials indicate that the project costs are significantly less than most dedicated high speed rail projects, primarily because, by terminating service in Victorville, they would avoid the construction challenges and high costs of building through both the densely populated and developed areas in Los Angeles and Orange counties and the mountainous Cajon Pass. The planned route would also help reduce project costs by mostly using existing right-of-way, running either within or adjacent to the I-15 right-of-way and using adjacent federal lands where the use of highway right-of-way is not possible. The project sponsor is a private entity and would not be seeking any public funding to finance the costs of this project.

The California-Nevada Super Speed Train option would operate on dedicated right-of-way, using maglev technology, with maximum speeds of up to 300 miles per hour. The line would begin in Anaheim, California, and terminate in Las Vegas—covering a distance of 269 miles in approximately 1 hour and 20 minutes. Project sponsors indicate that connecting Anaheim (where Disneyland is located) and Las Vegas, two popular tourist destinations, will help them draw significant ridership. The project is also being designed to connect to a new intermodal facility that is planned to be the Anaheim station terminus and would house transit connections to the Los Angeles area, including the proposed Los Angeles to San Francisco high speed rail line. In addition, project sponsors are considering a stop at the Ontario Airport that would allow for a 15-minute trip from Anaheim and, thus, make possible some diversion of air travelers from Los Angeles International and Orange County airports, which are soon to be at capacity. The estimated project cost of over $12 billion is the highest among the three high speed rail options, mostly due to the higher costs of constructing a maglev system.
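One way to summarize the cost trade-off among the three options is construction cost per route-mile, using the rounded distances and cost estimates cited above; as an assumption for illustration, the conventional option uses the midpoint of its $1.1 billion to $3.5 billion range.

```python
# Approximate construction cost per route-mile, from figures in the report text.
options = {
    "conventional": ((1.1e9 + 3.5e9) / 2, 300),  # midpoint of range; ~300 miles
    "desert_xpress": (3.5e9, 200),               # ~$3.5 billion; just under 200 miles
    "maglev": (12e9, 269),                       # over $12 billion; 269 miles
}
for name, (cost, miles) in options.items():
    print(f"{name}: ${cost / miles / 1e6:.1f} million per mile")
```

On these rough figures, the incremental conventional option costs the least per mile, the dedicated steel-wheel line several times more, and the maglev line the most, which mirrors the ridership-versus-cost trade-off the case study describes.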
However, project sponsors highlighted some advantages unique to maglev technology, such as lower projected ongoing operation and maintenance costs and its ability to handle steeper grades and curves as compared with steel-wheel on steel-rail technologies. It is estimated that 90 percent of the visitors to Las Vegas from the Southern California region drive on I-15, which is the major highway and the only available driving route connecting Las Vegas and Southern California. According to stakeholders, congestion on I-15 has worsened over the years, with a major choke point occurring in Victorville, where the eight-lane highway narrows to three through lanes in each direction for 30 miles to Barstow, and then to only two through lanes in each direction through the desert to Las Vegas. Travel times between the Los Angeles area and Las Vegas can increase 2 hours or more (from approximately 4 to 6 hours) during weekend and holiday peak travel times (reflective of the recreational nature of most travelers). I-15 is also a heavily traveled freight route between the two regions. Both Desert Xpress and the California-Nevada Super Speed Train Commission anticipate that their high speed rail service will help relieve congestion along the I-15 corridor during peak periods. For example, Desert Xpress anticipates that 87 percent of its riders will be diverted from automobiles. However, other stakeholders indicated that none of the current proposals looks holistically at the transportation problems endemic to the corridor, such as how to most effectively relieve the main drivers of traffic congestion in the Southern California area. Moreover, as we discussed earlier in this report, high speed rail’s ability to have an impact on highway congestion may be limited by the properties of induced demand and the preferences of drivers.
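Applying the 87 percent automobile-diversion share cited above to the 16.2 million rider forecast mentioned earlier gives a rough sense of scale for the implied diversion; this is simple arithmetic on the report’s figures, not an independent estimate.

```python
# Implied automobile diversion, using figures cited in the report.
riders = 16.2e6         # Desert Xpress forecast annual riders by 2030
share_from_auto = 0.87  # share of riders expected to be diverted from cars

diverted = riders * share_from_auto
print(f"{diverted / 1e6:.2f} million riders diverted from automobiles")
# -> 14.09 million riders diverted from automobiles
```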
The single largest air market to Las Vegas is from Southern California, and airports in Los Angeles and Las Vegas anticipate reaching and exceeding capacity by 2025. Clark County Department of Aviation officials estimate that in 2007, approximately 3.6 million passengers (15 percent of all passengers) flew in from one of the five Southern California airports (Los Angeles International Airport, Bob Hope Airport in Burbank, Long Beach Airport, John Wayne Airport in Orange County, and Ontario International Airport) serving Las Vegas’s McCarran International Airport (McCarran). Both Desert Xpress and the California-Nevada Super Speed Train Commission anticipate that their service will draw a significant number of travelers off of planes and into trains. Desert Xpress estimates that just over 12 percent of its passengers will be diverted from air, while the California-Nevada Super Speed Train Commission estimates attracting 20 percent of its passengers from air. In addition, as we have previously mentioned, the commission is planning a potential connection to Ontario International Airport to relieve capacity constraints at other Southern California airports. Current airport and highway expansion projects in the corridor also complicate the decision of whether to invest in high speed rail and how to design the system, and highlight the importance of comparing high speed rail proposals with investment alternatives in other modes. However, no single institutional entity exists to consider these investments in comparison with one another to determine how the transportation needs in the corridor can best be served. For example, two airport projects are currently being developed that will significantly expand airport capacity in the Las Vegas area. To address future projected growth, Clark County Department of Aviation officials said they are preparing to add a third terminal to expand McCarran’s capacity by an additional 8 million passengers.
In addition, the department has plans to build a new airport in the Ivanpah Valley, which is 6 miles north of the California state line and 30 miles south of downtown Las Vegas (approximately a 45-minute drive from Las Vegas). McCarran would then handle most of the domestic air travel, while the Ivanpah Airport would handle primarily international air travel. The planned opening of Ivanpah is in 2018, and, at full build-out, the airport is expected to accommodate 30 to 35 million annual passengers. The officials were incorporating the planned maglev line or Desert Xpress line into plans for the new Ivanpah Airport, but primarily as a means to transport international travelers from Ivanpah to the Las Vegas city center. Officials indicated that the existence of a high speed rail line could provide capacity that could delay the need to build the Ivanpah Airport for several years, although eventually they anticipate enough demand to support an additional airport. However, airport expansion proposals do not consider the effects of a potential new high speed rail line, nor are airport expansions evaluated comparatively with high speed rail or highway expansion proposals. Similarly, capacity improvements are also planned on I-15 between Los Angeles and Las Vegas, and as with planned airport expansions, highway expansion proposals do not consider the potential effects of either rail or air travel alternatives and are not considered comparatively with such investments.

In addition to the individual named above, Andrew Von Ah, Assistant Director; Jay Cherlow; Colin Fallon; Greg Hanna; David Hooper; Delwen Jones; Richard Jorgenson; Catherine Kim; Max Sawicky; Gretchen Snoey; Jason Vassilicos; and Mindi Weisenbloom made key contributions to this report.

Federal and other decision makers have had a renewed interest in how high speed rail might fit into the national transportation system and address increasing mobility constraints on highways and at airports due to congestion.
GAO was asked to review (1) the factors affecting the economic viability--meaning whether total social benefits offset or justify total social costs--of high speed rail projects, including difficulties in determining the economic viability of proposed projects; (2) the challenges in developing and financing high speed rail systems; and (3) the federal role in the potential development of U.S. high speed rail systems. GAO reviewed federal legislation; interviewed federal, state, local, and private sector officials, as well as U.S. project sponsors; and reviewed high speed rail development in France, Japan, and Spain. Factors affecting the economic viability of high speed rail lines include the level of expected riders, costs, and public benefits (i.e., benefits to non-riders and the nation as a whole from such things as reduced congestion), which are influenced by a line's corridor and service characteristics. High speed rail tends to attract riders in dense, highly populated corridors, especially where there is congestion on existing transportation modes. Costs largely hinge on the availability of rail right-of-way and on a corridor's terrain. To stay within financial or other constraints, project sponsors typically make trade-offs between cost and service characteristics. While some U.S. corridors have characteristics that suggest economic viability, uncertainty associated with rider and cost estimates and the valuation of public benefits makes it difficult to make such determinations on individual proposals. Research on rider and cost forecasts has shown they are often optimistic, and the extent that U.S. sponsors quantify and value public benefits varies. Once projects are deemed economically viable, project sponsors face the challenging tasks of securing the up-front investment for construction costs and sustaining public and political support and stakeholder consensus. 
In the three countries GAO visited, the central government generally funded the majority of the up-front costs of high speed rail lines. By contrast, in the United States, federal involvement with high speed rail to date has been limited, and federal funding for high speed rail has been derived from general revenues, not from trust funds or other dedicated funding sources. Consequently, high speed rail projects must compete with other nontransportation demands on federal funds (e.g., national defense or health care) rather than being compared with alternative transportation investments in a corridor. Available federal loan programs can support only a fraction of potential high speed rail project costs. Without substantial public sector commitment, private sector participation is difficult to secure. The challenge of sustaining public support and stakeholder consensus is compounded by long project lead times, by numerous stakeholders, and by the absence of an established institutional framework. The recently enacted Passenger Rail Investment and Improvement Act of 2008 will likely increase the federal role in the development of high speed rail, as will the newly enacted American Recovery and Reinvestment Act of 2009. The national rail plan required by the Passenger Rail Investment and Improvement Act of 2008 is an opportunity to identify the vision and goals for U.S. high speed rail and how it fits into the national transportation system, an exercise that has largely remained incomplete. Accountability can be enhanced by tying the specific, measurable goals required by the act to performance and accountability measures. In developing analytical tools to apply to the act's project selection criteria, it will be important to address optimistic rider and cost forecasts and varied public benefits analyses. |
Shortages of chemical and biological defense equipment are a long-standing problem. After the Persian Gulf Conflict, the Army changed its regulations in an attempt to ensure that early-deploying units would have sufficient equipment on hand upon deployment. This direction, contained in U.S. Forces Command Regulation 700-2, has not been universally implemented. Neither the Army’s more than five active divisions composing the crisis response force nor the early-deploying Army reserve units we visited had complied with the new stocking level requirements. All had shortages of critical equipment; three of the more than five active divisions had 50 percent or greater shortages of protective suits, and shortages of other critical items were as high as 84 percent, depending on the unit and the item. This equipment is normally procured with operation and maintenance funds. These shortages occurred primarily because unit commanders consistently diverted operation and maintenance funds to meet what they considered higher priority requirements, such as base operating costs, quality-of-life considerations, and costs associated with other-than-war deployments such as those to Haiti and Somalia. Relative to the DOD budget, the cost of purchasing this protective equipment is low. Early-deploying active divisions in the continental United States could meet current stocking requirements for an additional cost of about $15 million. However, unless funds are specifically designated for chemical and biological defense equipment, we do not believe unit commanders will spend operation and maintenance funds for this purpose. The shortages of on-hand stock are exacerbated by inadequate installation warehouse space for equipment storage, poor inventorying and reordering techniques, shelf-life limitations, and difficulty in maintaining appropriate protective clothing sizes. The Army is presently considering several actions to improve these conditions. 
New and improved equipment for chemical and biological defense is needed to overcome some shortfalls, and DOD is having difficulty meeting all of its planned chemical and biological defense research goals. Efforts to improve the management of the materiel development and acquisition process have so far had limited results and will not attain their full effect until at least fiscal year 1998. In response to lessons learned in the Persian Gulf Conflict, Congress directed DOD to improve the coordination of chemical and biological doctrine, requirements, research, development, and acquisition among DOD and the military services. DOD has acted. During 1994 and 1995, it established the Joint Service Integration Group to prioritize chemical and biological defense research efforts and develop a modernization plan; and the Joint Service Materiel Group to develop research, development, acquisition, and logistics support plans. The activities of these two groups are overseen by a single DOD office—the Assistant Secretary of Defense for Nuclear, Biological, and Chemical Warfare Defense. While these groups have begun to implement the congressional requirements of P.L. 103-160, progress has been slower than expected. At the time of our review, the Joint Service Integration Group expected to produce during 1996 its proposed (1) list of chemical and biological defense research priorities and (2) joint service modernization plan and operational strategy. The Joint Service Materiel Group expects to deliver its proposed plan to guide chemical and biological defense research, development, and acquisition in October 1996. Consolidated research and modernization plans are important for avoiding duplication among the services and otherwise achieving the most effective use of limited resources. It is unclear whether or when DOD will approve these plans. 
However, DOD officials acknowledged that it will be fiscal year 1998 at the earliest, about 5 years after the law was passed, before DOD can begin formal budgetary implementation of these plans. DOD officials told us progress by these groups has been adversely affected by personnel shortages and collateral duties assigned to the staff. DOD efforts to field specific equipment and conduct research to address chemical and biological defense deficiencies have produced mixed results. On the positive side, DOD began to field the Biological Integrated Detection System in January 1996 and expects to complete the initial purchase of 38 systems by September 1996. However, DOD has not succeeded in fielding other needed equipment and systems designed to address critical battlefield deficiencies identified during the Persian Gulf Conflict and earlier. For example, work initiated in 1978 to develop an Automatic Chemical Agent Alarm to provide visual, audio, and command-communicated warnings of chemical agents remains incomplete. Due to budget constraints, DOD has approved and acquired only 103 of the more than 200 FOX mobile reconnaissance systems originally planned. Of the 11 chemical and biological defense research goals listed in DOD’s 1995 Annual Report to the Congress, DOD met 5 by their expected completion date of January 1996. The remaining six were not met. For example, a DOD attempt to develop a less corrosive, less labor-intensive decontaminant solution is now not expected to be completed until 2002. Chemical and biological defense training at all levels has been a constant problem for many years. For example, in 1986, DOD studies found that its forces were inadequately trained to conduct critical tasks. It took 6 months during the Persian Gulf Conflict to prepare forces in theater to defend against chemical and biological agents. However, these skills declined again after this conflict.
A 1993 Army Chemical School study found that a combined arms force of infantry, artillery, and support units would have extreme difficulty performing its mission and suffer needless casualties if forced to operate in a chemical or biological environment because the force was only marginally trained. Army studies conducted from 1991 to 1995 showed serious weaknesses at all levels in chemical and biological defense skills. Our analysis of Army readiness evaluations, trend data, and lessons learned reports from this period also showed individuals, units, and commanders alike had problems performing basic tasks critical to surviving and operating in a chemical or biological environment. Despite DOD efforts—such as doctrinal changes and command directives—designed to improve training in defense against chemical and biological warfare since the Persian Gulf Conflict, U.S. forces continue to experience serious weaknesses in (1) donning protective masks, (2) deploying detection equipment, (3) providing medical care, (4) planning for the evacuation of casualties, and (5) including chemical and biological issues in operational plans. The Marine Corps also continues to experience similar problems. In addition to individual service training problems, the ability of joint forces to operate in a contaminated environment is questionable. In 1995, only 10 percent of the joint exercises conducted by four major commanders in chief (CINC) included training to defend against chemical and biological agents. None of this training included all 23 required chemical/biological training tasks, and the majority included fewer than half of these tasks. Furthermore, these CINCs plan to include chemical/biological training in only 15 percent of the joint exercises for 1996. This clearly demonstrates the lack of chemical and biological warfare training at the joint service level. There are two fundamental reasons for this.
First, CINCs generally consider chemical and biological training and preparedness to be the responsibility of the individual services. Second, CINCs believe that chemical and biological defense training is a low priority relative to their other needs. We examined the ability of U.S. Army medical units that support early-deploying Army divisions to treat casualties in a chemically and biologically contaminated environment. We found that these units often lacked needed equipment and training. Had Iraq actually employed chemical and/or biological agents during the Persian Gulf Conflict, the military’s ability to deal with subsequent casualties would have been severely impaired at best. Medical units supporting early-deploying Army divisions we visited often lacked critical equipment needed to treat casualties in a chemically or biologically contaminated environment. For example, these units had only about 50 to 60 percent of their authorized patient treatment and decontamination kits. Some of the patient treatment kits on hand were missing critical items such as drugs used to treat casualties. Also, none of the units had any type of collective shelter in which to treat casualties in a contaminated environment. Army officials acknowledged that the inability to provide treatment in the forward area of battle would result in greater rates of injury and death. Old versions of collective shelters are unsuitable, unserviceable, and no longer in use; new shelters are not expected to be available until fiscal year 1997 at the earliest. Few Army physicians in the units we visited had received formal training on chemical and biological patient treatment beyond that provided by the Basic Medical Officer course. Further instruction on chemical and biological patient treatment is provided by the medical advanced course and the chemical and biological casualty management course. 
The latter course provides 6-1/2 days of classroom and field instruction needed to save lives, minimize injury, and conserve fighting strength in a chemical or biological warfare environment. During the Persian Gulf Conflict, this course was provided on an emergency basis to medical units already deployed to the Gulf. In 1995, 47 to 81 percent of Army physicians assigned to early-deploying units had not attended the medical advanced course, and 70 to 97 percent had not attended the casualty management course. Both the advanced and casualty management courses are optional, and according to Army medical officials, peacetime demands to provide care to service members and their dependents often prevented attendance. Also, the Army does not monitor those who attend the casualty management course, nor does it target this course toward those who need it most, such as those assigned to early-deploying units. Today, DOD still has inadequate stocks of vaccines for known threat agents, and so far has chosen not to implement existing immunization policy and procedures. DOD’s program to vaccinate U.S. forces to protect them against biological agents will not be fully effective until these problems are resolved. Though DOD has identified which biological agents are critical threats and determined the amount of vaccines that should be stocked, we found that the amount of vaccines stocked remains insufficient to protect U.S. forces, as it was during the Persian Gulf Conflict. Problems also exist with regard to the vaccines available to DOD. Only a few biological agent vaccines have been approved by the Food and Drug Administration (FDA). Many remain in Investigational New Drug (IND) status. Although IND vaccines have long been safely administered to personnel working in DOD vaccine research and development programs, the FDA usually requires large-scale field trials in humans to demonstrate new drug safety and effectiveness before approval. 
DOD has not performed such field trials due to ethical and legal considerations. DOD officials said that they hoped to acquire a prime contractor during 1996 to subcontract vaccine production and do what is needed to obtain FDA approval of vaccines currently under investigation. Since the Persian Gulf Conflict, DOD has consolidated the funding and management of several biological warfare defense activities, including vaccines, under the new Joint Program Office for Biological Defense. A 1993 DOD Directive established the policy, procedures, and responsibilities for stockpiling biological agent vaccines and inoculating service members assigned to high-threat areas or to early-deploying units before deployment. The Joint Chiefs of Staff (JCS) and other high-ranking DOD officials have not yet approved implementation of this immunization policy. The draft policy implementation plan is completed and is currently under review within DOD. However, this issue is highly controversial within DOD, and whether the implementation plan will be approved and carried out is unclear. Until that happens, service members in high-threat areas or designated for early deployment in a crisis will not be protected by approved vaccines against biological agents. The primary cause for the deficiencies in chemical and biological defense preparedness is a lack of emphasis up and down the line of command in DOD. In the final analysis, it is a matter of commanders’ military judgment to decide the relative significance of risks and to apply resources to counter those risks that the commander finds most compelling. DOD has decided to concentrate on other priorities and consequently to accept a greater risk regarding preparedness for operations on a contaminated battlefield. Chemical and biological defense funding allocations are being targeted by the Joint Staff and DOD for reduction in attempts to fund other, higher priority programs.
DOD allocates less than 1 percent of its total budget to chemical and biological defense. Annual funding for this area has decreased by over 30 percent in constant dollars since fiscal year 1992, from approximately $750 million in that fiscal year to $504 million in fiscal year 1995. This reduction has occurred despite the current U.S. intelligence assessment that the chemical and biological warfare threat to U.S. forces is increasing and despite the importance of defending against the use of such agents in the changing worldwide military environment. Funding could decrease even further. On October 26, 1995, the Joint Requirements Oversight Council and the JCS Chairman proposed to the Office of the Secretary of Defense (OSD) a cut of $200 million for each of the next 5 years ($1 billion total) to the counterproliferation budget. The counterproliferation program element in the DOD budget includes funding for the joint nuclear, chemical, and biological defense program as well as vaccine procurement and other related counterproliferation support activities. If implemented, this cut would severely impair planned chemical and biological defense research and development efforts and reverse the progress that has been made in several areas, according to DOD sources. A final $800 million cut over 5 years was recommended to the Secretary of Defense. On March 7, 1996, we were told that DOD was now considering a proposed funding reduction of $33 million. In January 1996, the Deputy Secretary of Defense requested a DOD Program Analysis and Evaluation study on counterproliferation support programs. The study is expected to be completed by the end of June 1996. The battle staff chemical officer and chemical noncommissioned officers are a commander’s principal trainers and advisers on chemical and biological defense operations and on equipment operation and maintenance.
We found that chemical and biological officer staff positions are being eliminated and that, when the positions are filled, the officers occupying them are frequently assigned collateral tasks that reduce the time available to manage chemical and biological defense activities. At U.S. Army Forces Command and U.S. Army III Corps headquarters, for example, chemical staff positions are being reduced. Also, DOD officials told us that the Joint Service Integration and Joint Service Materiel Groups have made limited progress largely because not enough personnel are assigned to them and collateral duties are assigned to the staff. We also found that chemical officers assigned to a CINC’s staff were frequently tasked with duties not related to chemical and biological defense. The lower emphasis given to chemical and biological matters is also demonstrated by weaknesses in the methods used to monitor their status. DOD’s current system for reporting readiness to the Joint Staff is the Status of Resources and Training System (SORTS). We found that the effectiveness of SORTS for evaluating unit chemical and biological defense readiness is limited largely because (1) it allows commanders to be subjective in their evaluations, (2) it allows commanders to determine for themselves which equipment is critical, and (3) reporting remains optional at the division level. We also found that after-action and lessons learned reports and operational readiness evaluations were limited in their effectiveness for accurately assessing unit chemical and biological defense status. At the U.S. Army Reserve Command there is no chemical or biological defense staff position. Consequently, the U.S. Army Reserve Command does not effectively monitor the chemical and biological defense status of reserve forces. The priority given to chemical and biological defense varied widely. Most CINCs assign chemical and biological defense a lower priority than other threats.
Even though the Joint Staff has tasked CINCs to ensure that their forces are trained in certain joint chemical and biological defense tasks, the CINCs we visited considered such training a service responsibility. Several DOD officials said that U.S. forces still face a generally limited, although increasing, threat of chemical and biological warfare. At Army corps, division, and unit levels, the priority given to this area depended on the commander’s opinion of its relative importance. At one early-deploying division we visited, the commander had an aggressive system for chemical and biological training, monitoring, and reporting. At another, the commander had made a conscious decision to emphasize other areas, such as other-than-war deployments and quality-of-life considerations. As this unit was increasingly being asked to conduct operations other than war, the commander’s emphasis on the chemical and biological warfare threat declined. Officials at all levels said training in chemical and biological preparedness was not emphasized because of higher priority taskings, low levels of interest by higher headquarters, difficulty working in cumbersome and uncomfortable protective clothing and masks, the time-consuming nature of the training, and a heavy reliance on post-mobilization training and preparation. We have no means to determine whether increased emphasis on chemical and biological warfare defense is warranted at the expense of other priorities. This is a matter of military judgment by DOD and of funding priorities by DOD and Congress. However, in view of the increasing chemical and biological threat and the continuing U.S. chemical and biological defense weaknesses identified in our report, we recommended that the Secretary of Defense reevaluate the priority and emphasis given this area throughout DOD. 
We further recommended that if the Secretary’s reevaluation determines that more emphasis is needed, the Secretary should consider (1) elevating the single office responsible for program oversight to the Assistant Secretary level rather than leaving it in its current position as part of the Office of the Assistant Secretary for Nuclear, Biological, and Chemical Warfare Defense and (2) adopting more of a single manager approach for executing the chemical and biological defense program. We made eight other recommendations concerning opportunities to improve the effectiveness of existing DOD chemical and biological activities. DOD, in its official response to our report, generally agreed with our findings and concurred with 9 of our 10 recommendations. We would be pleased to respond to any questions you may have. | GAO discussed the capability of U.S. forces to fight and survive chemical and biological warfare.
GAO noted that: (1) none of the Army's crisis-response or early-deployment units have complied with requirements for stocking equipment critical for fighting under chemical or biological warfare; (2) the Department of Defense (DOD) has established two joint service groups to prioritize chemical and biological defense research efforts, develop a modernization plan, and develop support plans; (3) although DOD has begun to field a biological agent detection system, it has not successfully fielded other needed equipment and systems to address critical battlefield deficiencies; (4) ground forces are inadequately trained to conduct critical tasks related to biological and chemical warfare, and there are serious weaknesses at all levels in chemical and biological defense skills; (5) medical units often lack the equipment and training needed to treat casualties resulting from chemical or biological contamination; (6) DOD has inadequate stocks of vaccines for known threat agents and has not implemented the immunization policy established in 1993; and (7) the primary cause of these deficiencies is a lack of emphasis along the DOD command chain, with DOD focusing its efforts and resources on other priorities. |
CBP is the lead federal agency charged with interdicting terrorists, criminals, and inadmissible travelers at ports of entry while facilitating the flow of legitimate travel and commerce at the nation’s borders. In March 2003, inspectors from the three legacy agencies—the Department of Justice’s U.S. Immigration and Naturalization Service, the Department of the Treasury’s U.S. Customs Service, and the Department of Agriculture’s Animal and Plant Health Inspection Service—were merged to form CBP. As part of the merger, CBP cross-trained CBP officers to simultaneously perform immigration and customs inspection functions as well as identify and refer possible agricultural violations for further inspection. DHS stated that the ability to use inspectors interchangeably for immigration and customs inspection functions would allow the agency to more effectively use its personnel and accelerate the processing of legitimate travelers, thereby enhancing efforts to secure the border. The Office of Field Operations (OFO)—one of the CBP component offices—manages and deploys CBP officers who operate within 20 field offices and 329 ports of entry composed of airports, seaports, and designated land ports of entry throughout the United States plus selected locations overseas. As of July 2011, nearly 20,000 CBP officers operated at U.S. ports of entry and other locations overseas. The total number of onboard CBP officers peaked in fiscal year 2009 at 21,339 but declined in fiscal years 2010 and 2011 to 20,431. According to OFO, the decline in the onboard number of CBP officers is due, in part, to a decline in traveler volume resulting in a decline in collected user fees that fund CBP officers located at airports and seaports. At the end of fiscal year 2004, there were about 18,000 CBP officers, the majority of whom were legacy officers from the Department of the Treasury’s U.S. Customs Service, followed by legacy officers from the Department of Justice’s U.S.
Immigration and Naturalization Service and the Department of Agriculture’s Animal and Plant Health Inspection Service. Since fiscal year 2007, the total number of legacy officers has declined. As of July 2011, 45 percent of the CBP officer workforce was composed of legacy officers. The annual attrition rate of legacy officers declined from 6.5 percent in fiscal year 2007 to 2.4 percent in fiscal year 2010. Figure 1 illustrates the percentage of legacy CBP officers compared to the total CBP officer workforce over time. CBP officer responsibilities for passenger inspection are primarily focused at the primary and secondary inspection areas at ports of entry. In the primary inspection area, CBP officers are expected to rapidly analyze passenger admissibility by sufficiently questioning the passenger, examining the passenger’s travel documents, and using appropriate technology to identify those passengers who can be immediately admitted into the United States or who need to be referred to a secondary inspection area for a more thorough inspection, if necessary. Specifically, CBP officers in the primary inspection area are expected to first examine travel documents by comparing the document to the passenger and then ask questions to confirm the identity of the traveler. They also may inspect travelers’ luggage. CBP officers who serve in the secondary inspection area conduct closer inspection of travel documents and possessions and can use multiple law enforcement databases to verify the traveler’s identity, background, and purpose for entering the country. CBP officers may also serve in specialized teams to support the inspection functions at the ports of entry. For example, CBP established the Passenger Analysis Unit team, which is responsible for cross-checking passenger data in automated systems to identify high-risk passengers before they enter the country. Appendix I provides more detail on CBP officer staffing policy and OFO specialized teams.
OFO is to coordinate with CBP’s Office of Training and Development (OTD) to ensure that component training complies with OTD training standards. In the case of CBP officers, OFO and OTD share responsibility for ensuring that newly hired and incumbent CBP officers are sufficiently trained. OTD is responsible for designing, developing, delivering, and evaluating CBP-wide training courses and establishing training standards and policies for the program, while OFO is responsible for identifying the training requirements of CBP officers, providing subject-matter experts to assist in the development and instruction of some training courses, and reviewing training that is developed. OFO established a training branch in 2003 to serve as a liaison between OTD and OFO. OFO also established the Fraudulent Document Analysis Unit (FDAU) in 2005, which performs analyses of fraudulent documents that have been seized to identify global patterns and trends. FDAU is to provide training and training materials to enhance CBP officers’ abilities to detect fraudulent documents and thereby increase the number of interceptions through the sharing of information within CBP and DHS and with other U.S. and foreign government agencies. OTD is also responsible for overseeing and managing the CBP training budget, known as the National Training Plan (NTP), and prioritizing training development and delivery via the Training Advisory Board (TAB). Also, OTD developed and manages the Virtual Learning Center (VLC), where CBP officers can take self-paced courses on a variety of topics. Further, OTD operates and manages basic and advanced training schools for CBP officers. Specifically, OTD operates the Field Operations Academy, which trains and prepares newly hired CBP officers for deployment to U.S. ports of entry.
OTD is responsible for managing and overseeing CBP’s official training records system, the Training Records and Enrollment Network (TRAEN), and the Academy Course Management System, a training scheduling and tracking system that OFO uses to monitor newly hired CBP officers’ successful completion of basic training. Each year, OTD is to request that the CBP offices provide a list of specific training courses and the approximate number of participants they would like to send to training. OTD is to compile these requests and present them to the TAB, which is to review and prioritize the courses to be delivered that year. In addition, the TAB approves the total NTP budget amount for each fiscal year. On the basis of the Board’s priorities, OTD is to develop the NTP budget for the fiscal year and is also responsible for monitoring the delivery of the training and managing the NTP budget during the year. The NTP budget funds the delivery of training for all CBP offices—including training for OFO—for a single fiscal year. In fiscal year 2009, CBP training expenditures peaked due, in part, to receipt of supplemental funding to hire and train CBP officers and Border Patrol agents. Since fiscal year 2009, CBP’s NTP budget expenditures have declined due to increasing budget constraints. Figure 2 displays CBP’s actual NTP budget expenditures from fiscal years 2008 through 2010 and its projected end-of-year expenditures for fiscal year 2011. OFO has also funded the development and delivery of its own training courses for CBP officers when they have not received funding from the NTP. CBP’s Office of Internal Affairs (IA) has oversight authority for all aspects of CBP operations, personnel, and facilities. IA is responsible for ensuring compliance with all CBP-wide programs and policies and operates a covert test program to ensure compliance. Following the issuance of the results of our covert tests of border security in May 2008, CBP initiated covert tests to evaluate CBP’s capabilities to detect document fraud.
Specifically, CBP focused its tests on evaluating CBP’s detection of impostors, or individuals who attempt to enter the United States fraudulently by using a genuine, unaltered travel document that belongs to another person. CBP also continued covert tests to detect cargo containing illicit radioactive material, among others. OFO uses these ongoing test results to identify potential training needs. All newly hired CBP officers are required to complete a basic training program and demonstrate proficiency in CBP officer duties. Incumbent CBP officers are required to take mandatory courses such as information technology security, occupational safety, and human trafficking awareness, among others. CBP provides most mandatory courses on a one-time or annual basis via the VLC. OFO has mandatory course requirements, such as fraudulent document detection, and has also developed specialized courses for incumbent officers assigned to specialized teams. However, CBP does not require that all CBP officers assigned to specialized teams complete the specialized training developed for that team. According to OFO, management must balance the operational needs of the port with the availability of the training. Appendix II lists examples of mandatory and specialized courses for CBP officers for fiscal year 2011. As we previously reported, in 2003, CBP initiated a multiyear cross-training program to equip new and legacy officers with the tools necessary to perform primary immigration and customs inspections, and sufficient knowledge to identify agricultural inspections in need of further examination. CBP required all legacy customs officers to complete three new courses—covering immigration fundamentals, immigration law, and agriculture fundamentals—regardless of where they were assigned. All legacy immigration officers were required to complete three courses—customs fundamentals, customs law, and agriculture fundamentals—regardless of where they were assigned.
Further, based on their assignment, legacy officers were required to complete additional courses specific to their assignment and port environment. In June 2011, CBP officially retired the cross-training courses and replaced them with revised modules. OFO instructed managers, supervisors, and training officers to use these new materials as refresher training for officers who transfer to a new assignment or environment or who return to inspection duties after an extended absence. CBP has prepared training development standards for all CBP training programs and courses to ensure that training delivered to CBP employees meets established quality standards of instruction and evaluation. OTD standards are based, in part, on federal laws and regulations, which require agencies to establish training programs that support their mission and meet specified standards, including identifying training needs, prioritizing these needs, and evaluating the results of training programs and plans. Also, CBP develops and revises its basic training for new CBP officers to meet Federal Law Enforcement Training Accreditation (FLETA) standards, which provide law enforcement agencies with an opportunity to voluntarily demonstrate that they meet an established set of professional standards and receive appropriate recognition. Finally, Standards for Internal Control in the Federal Government provide criteria for the management and oversight of agency operations, including training programs. In 2009, CBP revised its training program for new officers in accordance with its training development standards. These standards are based on legal standards that guide the development of training in the federal government and standards that guide federal law enforcement training.
OTD standards also contain specific guidance related to the following phases: (1) planning, (2) analysis, (3) design, (4) development, (5) evaluation, and (6) delivery of the course curriculum, which CBP adhered to in revising its training program. Table 1 provides an overview of the OTD training development phases and related standards and our assessment of how CBP efforts met these standards in revising its training for newly hired officers. OTD standards state that the training curriculum should be current, valid, and updated once every 3 to 5 years. CBP began the process of revising its training for new officers in 2009 after the initial launch of the CBP Officer curriculum in 2004, consistent with OTD standards for updating the training curriculum every 5 years. OTD, as well as other federal law enforcement training standards, state that programs should first identify the critical tasks that the individual is expected to perform in order to determine what training is needed. Consistent with these standards, OTD convened a team of subject matter experts (SMEs) to identify and rank the tasks that new CBP officers are expected to perform. The team identified a total of 138 critical tasks that newly hired CBP officers are expected to perform within the first 2 years of employment. These included conducting thorough and accurate research to support inspections and investigations, and preparing thorough and accurate reports covering significant incidents and intelligence. Once the tasks were identified, the panel of SMEs compared the identified tasks to the tasks addressed in the existing curriculum to identify any skill gaps. OTD then developed specific courses with appropriate lessons and topics to ensure that these tasks were addressed in the new curriculum. For example, new modules on evidence preservation and secondary report writing were incorporated in the revised curriculum to address identified officer skill gaps in handling evidence and writing. 
Consistent with the OTD standard that requires a test run of a complete and approved course in a controlled environment by selected individuals representing the course’s learning audience, the new CBP officer curriculum was piloted to test its content and delivery prior to its launch in February 2011. As a result, the new officer training program course was expanded from about 15 to 18 weeks, and approximately 30 to 35 percent of the new officer curriculum is new or updated and expanded. Thus, the new officer training program complies with OTD’s standards that state the training curriculum should be current and valid. OTD internal training standards also state that the training should be aligned with the current agency mission and current threats. According to OFO and OTD officials, the previous CBP officer curriculum focused primarily on preparing the officer to serve in the primary inspection function at a port. The SME panel recommended that the new officer curriculum be revised to produce a law enforcement officer capable of supporting CBP’s expanding antiterror mission. As a result, the new curriculum is designed to produce a professional law enforcement officer capable of protecting the homeland from terrorist, criminal, biological, and agricultural threats. Specifically, the new curriculum states that the CBP officer is expected, among other tasks, to draw appropriate conclusions and take appropriate action to identify behavioral indicators displayed by criminals and terrorists, effectively interview and analyze travelers to identify potential threats, expertly identify altered and counterfeit documents and impostors, and use technology in support of the inspection process. Upon completion of training, the newly hired CBP officer is expected to be able to perform the primary inspection function, as well as some aspects of the secondary inspection function. OTD standards also state that it is important to identify the appropriate delivery method and location.
In accordance with these standards, CBP determined that the training for new CBP officers would be divided into three components as shown in figure 3. Pre-academy—According to OFO officials, the pre-academy component helps educate incoming CBP officers about job responsibilities before the agency commits the funds to send them to the Field Operations Academy in Glynco, Georgia, for basic training. OFO officials also stated that the pre-academy curriculum is structured, in that it recommends that a fixed curriculum be completed in a specific amount of time. Also, it contains a mix of classroom instruction and web-based courses to familiarize incoming CBP officers with the specific requirements of the law enforcement and inspections job, thereby helping to ensure that the pre-academy training is consistent throughout the nation. Basic academy—The SME panel recommended that the curriculum include intensified training to enhance officer vigilance and awareness through interview training, behavior analysis training to discern passenger behavior, report writing training, and training to detect fraudulent documents, among others. In addition, CBP increased the amount of time devoted to practical exercises in response to comments made by newly hired officers during pilot testing. For example, exercises in the passenger processing module increased from 11 hours in the old curriculum to 33 hours in the revised curriculum. The revised curriculum was designed to enhance an officer’s ability to effectively interview and analyze travelers to identify potential threats; expertly identify altered and counterfeit documents and impostors; identify behavioral indicators displayed by terrorists and criminals; and use technology (including computers and other resources) in support of the inspection process.
Postacademy—In 2007, we reported that although CBP had issued guidance for on-the-job training of new CBP officers, CBP had difficulty in providing the training in accordance with the guidance. We recommended that CBP incorporate the following into its on-the-job training program: (1) specific tasks that CBP officers must experience during on-the-job training and (2) requirements for measuring officer proficiency in performing those tasks. In response to our recommendations, CBP revised its postacademy training program by identifying specific tasks and developing a plan for measuring officer proficiency in those tasks. The revised postacademy training program combines classroom and on-the-job training and incorporates ongoing testing and evaluation of officer proficiency. The evaluations are tied to the critical tasks and competencies that a new officer must perform. Under the new postacademy training program, before a new officer may perform primary inspections independently, a training officer must certify that the officer is proficient in performing those tasks. The revised postacademy training began in June 2011. Consistent with OTD training standards that call for measuring the effectiveness of training, CBP plans to ask new officers to evaluate their basic academy and postacademy training. In addition, CBP plans to survey both new officers and their supervisors several months after a new officer completes on-the-job training to determine the effectiveness of the training. Consistent with OTD standards requiring that its training meet federal law enforcement standards, OTD officials stated that the curriculum received its federal law enforcement training accreditation in November 2011. CBP has taken steps to identify the training needs of its incumbent officers by, for example, conducting covert tests to assess vulnerabilities and systemic weaknesses at ports of entry and identifying possible officer training needs, but could do more to analyze the tests’ results.
In response to its covert tests, CBP has delivered two required training courses for incumbent officers, but it has not evaluated the effectiveness of these courses. Also, OFO officials stated that supervisors identify CBP officer training needs. However, CBP faces challenges in establishing policies and procedures to guide its component offices’ efforts to implement and oversee training, and ensuring that it has reliable training data. Moreover, CBP has not conducted an analysis of possible skill gaps that may exist between identified critical skills all incumbent officers should possess and incumbent officers’ current skills. To identify vulnerabilities and weaknesses at U.S. ports of entry, CBP IA conducts covert tests in which undercover inspectors attempt to enter the United States with genuine documents used fraudulently. The tests are designed to provide a snapshot of the level of a port’s performance related to the testing objectives on a particular day. We examined CBP’s results of covert tests conducted over more than 2 years and found significant weaknesses in the CBP inspection process at the ports of entry that were tested. Although the results are not fully generalizable to all ports, OFO officials stated that the tests are useful to identify possible weaknesses and vulnerabilities. Following each covert test, IA prepared a written post-test summary of the tests and outcomes, debriefed with senior port management and headquarters officials, and provided data to OFO on the test outcomes. In some of the summaries, IA inspectors identified what they observed to be the key factors that contributed to successful outcomes, as well as potential vulnerabilities. In response to initial test results, OFO developed and mandated an updated annual fraudulent document course in August 2009.
Also, in March 2010, OFO developed and mandated a “Back to Basics” course that emphasized the basic inspection duties that all CBP officers are required to perform during a primary inspection. In July 2011, OFO began implementing a follow-on course which includes more specific instruction. CBP administers postcourse evaluations to CBP officer trainees to obtain their feedback on the “Back to Basics” course but does not have plans to fully evaluate the effectiveness of this course by checking the extent to which the officers have retained the information over time. OTD officials stated that they conduct these types of evaluations for the newly hired CBP officer basic training but do not do so for this course for incumbent officers, due to time and cost constraints. However, we have previously reported that agencies should assess the extent to which training and development efforts contribute to improved performance and results to help ensure that the agency is not devoting resources to training that may be ineffective. An evaluation of the impact of these training courses on CBP officer performance could help CBP know the extent to which such training is a sufficient response to the covert test results or whether adjustments to the training or other management actions are needed. CBP has not conducted an analysis of all the possible causes or systemic issues that may be contributing to the test results. The protocols for covert tests state that IA will provide a comprehensive report at the conclusion of all covert tests that will summarize test results and identify systemic issues. As of August 2011, neither IA nor OFO has conducted such an analysis due to staffing and time constraints, according to IA and OFO officials. However, this type of analysis would help CBP identify any patterns or trends that indicate the extent to which CBP officer training, performance, or other systemic issues may contribute to the issues identified in the covert tests.
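The kind of pattern-and-trend analysis described above can be approached as a simple aggregation of per-test outcomes. The sketch below is purely illustrative: the record fields, port labels, and contributing-factor names are hypothetical, not CBP's actual covert-test data schema. It tallies the factors cited in failed tests so recurring factors surface first.

```python
from collections import Counter

# Hypothetical covert-test records; field names and factor labels are
# illustrative only, not CBP's actual data.
tests = [
    {"port": "A", "passed": False, "factors": ["document_check", "interview"]},
    {"port": "A", "passed": True,  "factors": []},
    {"port": "B", "passed": False, "factors": ["document_check"]},
    {"port": "C", "passed": False, "factors": ["interview"]},
]

def recurring_factors(test_results):
    """Tally the factors cited in failed tests to surface recurring patterns."""
    counts = Counter()
    for t in test_results:
        if not t["passed"]:
            counts.update(t["factors"])
    return counts

print(recurring_factors(tests).most_common())
```

Ranking factors across all tests, rather than reviewing each post-test summary in isolation, is what distinguishes a comprehensive assessment from the per-test debriefs the report describes.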
Without a comprehensive assessment, it is difficult for CBP to identify the systemic issues underlying the test results. In December 2008, CBP issued a directive assigning general roles and responsibilities for training to OTD and other CBP offices, such as identifying OTD as the centralized leader for all CBP training. However, CBP has not established policies or procedures to guide component offices’ efforts to implement and oversee training. Figure 4 illustrates the key offices and positions that OFO identified as responsible for incumbent CBP officer training. OFO does not have a policy that specifies the roles and responsibilities of each of these offices and positions for training implementation and oversight. Federal regulations require that agencies establish policies governing employee training, including a statement of the alignment of employee training and development with agency strategic plans, the assignment of responsibility to ensure the training goals are achieved, and the delegation of training approval authority to the lowest appropriate level. In addition, internal control standards state that, in a good control environment, areas of authority and responsibility are clearly defined and appropriate lines of reporting are established. Internal control standards also require that responsibilities be communicated within an organization. According to OFO, the OFO Programs and Policy branch is responsible for overseeing and coordinating the development of policy to govern incumbent officer training, including policy that assigns roles and responsibilities. According to OFO officials, a policy would be useful because it helps clearly define the responsibilities of all offices involved in incumbent CBP officer training.
Specifically, a policy outlining the roles and responsibilities of offices and positions for training would help clarify which offices and positions are responsible for ensuring incumbent officer training needs are identified and addressed. However, the acting branch chief of the OFO Programs and Policy branch stated that staffing constraints have limited the branch’s ability to initiate the process of developing a policy that clearly assigns responsibility to all offices involved in CBP officer training. According to OFO officials, supervisors are responsible for identifying officer training needs and requesting training to meet these needs. For example, in June 2011, CBP instructed supervisors to identify training needs and use the postacademy modules to address those needs. However, OFO could not provide a policy document outlining how supervisors would identify training needs and coordinate training. Also, according to OFO officials, port management and field office directors are responsible for ensuring that CBP officers complete mandatory and other training related to their job duties. However, OFO officials in headquarters and at the ports stated that no policy exists that assigns these responsibilities to port management or field offices. In addition, senior CBP officials stated that Field Training Officers help ensure that CBP officers are receiving the training they need to perform their assigned duties, and that internal measures are in place to assess training needs and accomplishments nationwide. However, officials from the OFO Training branch stated that Field Training Officers are assigned to help deliver training to the ports but are not required to oversee the completion of required training by CBP officers at their respective ports. Further, OFO could not provide documentation confirming the roles and responsibilities of the Field Training Officer.
A policy outlining the roles and responsibilities of offices and positions for training could help eliminate such confusion, clarify which offices and positions are responsible for identifying and addressing training needs, and hold these offices and individuals accountable for their responsibilities. CBP currently lacks reliable training completion records to ensure CBP officers received required training or other training relevant to their assigned duties. According to OTD and OFO officials, the training completion records maintained in TRAEN, CBP’s official record of training, are incomplete or contain inaccurate information, such as the dates of training completion. As a result, officials from OTD’s Office of Operations, which plans and manages the annual National Training Plan (NTP) budget, stated they developed their own records of training completions that consist of TRAEN records supplemented with data they gather from e-mail archives. Also, officials from two of the three ports we visited stated they rely on locally developed databases or data sources other than TRAEN to track CBP officers’ training records. We found, based on our analysis of TRAEN records, that more than 4,000 onboard legacy customs officers have not completed the immigration fundamentals, immigration law, and agricultural fundamentals courses, although they were required to complete them during the cross-training program. According to OFO officials, the training completion records maintained in TRAEN are incomplete, and it is unlikely that legacy officers did not complete the required cross-training. Nevertheless, without reliable training records, CBP cannot provide reasonable assurance that all legacy customs officers completed required cross-training courses. OTD stated that CBP offices are responsible for recording their employees’ training records in TRAEN.
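A record reconciliation of the sort underlying the TRAEN analysis reduces to a set comparison between required courses and recorded completions. The sketch below is a hypothetical illustration only: the course identifiers, officer IDs, and record layout are invented and do not reflect the actual TRAEN schema.

```python
# Courses every legacy customs officer was required to complete during
# cross-training; identifiers abbreviated for illustration.
REQUIRED = {"immigration_fundamentals", "immigration_law",
            "agriculture_fundamentals"}

# Hypothetical completion records (officer ID -> recorded courses).
completions = {
    "officer_001": {"immigration_fundamentals", "immigration_law",
                    "agriculture_fundamentals"},
    "officer_002": {"immigration_fundamentals"},
    "officer_003": set(),
}

def missing_courses(records, required=REQUIRED):
    """Return, per officer, any required courses absent from the records."""
    return {officer: sorted(required - done)
            for officer, done in records.items()
            if required - done}

print(missing_courses(completions))
```

A caveat the report itself raises applies to any such reconciliation: if completions are under-recorded, officers flagged as missing courses may in fact have completed them, which is why record completeness and accuracy matter before drawing conclusions.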
However, CBP does not have a policy that assigns the responsibility for entering records to its offices or that assigns oversight responsibility to port management to ensure that their staff enter data into TRAEN completely and accurately. CBP is currently in the process of transferring the TRAEN system to the VLC and training CBP officials on how to properly enter training records in the new system. However, OFO and OTD officials stated that even trained employees sometimes do not enter training records completely or in a timely manner. Internal control standards state that control activities—such as policies, procedures, and management supervision—help to ensure that all transactions are completely and accurately recorded. Further, having reliable data could enable agency managers to compare actual performance to planned or expected results throughout the organization and analyze differences. Moreover, having reliable data to measure the degree to which CBP officers have completed required or recommended training for their assigned positions would put CBP in a better position to gauge the results of its cross-training program and other CBP officer training and measure its progress towards achieving CBP officer training goals. CBP has taken steps to identify training needs among incumbent CBP officers but has not conducted a comprehensive training needs assessment to identify and address potential gaps in incumbent officers’ current skills and competencies. Under executive order and federal regulations, agencies are to review, not less than annually, programs to identify training needs, establish priorities for training, and allocate resources in accordance with those priorities. OTD training development standards state that a training needs assessment is needed to identify knowledge or skill gaps and suggest material for new or follow-on training.
Specifically, the analysis stage of training development includes conducting a training needs assessment to identify skill or knowledge gaps, conducting a job task analysis to identify critical competencies required for the target audience, and analyzing the target audience to develop appropriate training, among other steps. CBP has taken some steps to identify and analyze incumbent officer training needs. In 2008, CBP initiated the first job task analysis for the CBP officer position since 2003 by identifying nearly 300 job tasks and about 100 competencies that all incumbent CBP officers are expected to perform regardless of their currently assigned duties or port environment. In 2011, CBP also completed a curriculum gap analysis, which compared the newly revised basic academy curriculum with the previous basic academy training to identify new skills or material that incumbent CBP officers may not have learned in their basic academy training. For example, the previous curriculum trained CBP officers to perform 75 critical tasks while the newly revised curriculum trains newly hired officers to perform 138 critical tasks. According to OTD officials, the curriculum gap analysis identified a shift in the training philosophy and delivery methods and some changes in course content. For example, the revised training for newly hired officers aims to instill a law enforcement mindset by adding new courses in weapons training, evidence preservation, and courtroom testimony, among others, and by expanding courses that are designed to increase situational awareness and anti-terrorism vigilance among the CBP officers. It also increases physical conditioning and the number of hours of practical exercises related to conducting primary inspections and examining documents as well as course content to reflect new laws related to immigration processing and operating new equipment. 
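A curriculum gap analysis like the one described above amounts to comparing the task coverage of two curricula. The sketch below uses made-up task identifiers for illustration (the actual curricula covered 75 and 138 critical tasks, and their contents are not reproduced in this report).

```python
# Hypothetical task identifiers; the real old and revised curricula
# covered 75 and 138 critical tasks, respectively.
old_curriculum = {"primary_inspection", "document_exam",
                  "passenger_interview"}
new_curriculum = {"primary_inspection", "document_exam",
                  "passenger_interview", "evidence_preservation",
                  "secondary_report_writing", "courtroom_testimony"}

# Tasks taught only in the revised curriculum are candidate training
# gaps for incumbent officers trained under the old one.
gap = sorted(new_curriculum - old_curriculum)
print(gap)
```

The set difference identifies what the new curriculum teaches that the old one did not; as the following paragraphs note, establishing whether incumbent officers actually lack those skills would still require assessing the officers themselves, not just the curricula.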
The revised curriculum also adds more training in skills that CBP officers need to perform in secondary processing, including using appropriate computer systems and following procedures to verify passenger admittance. However, CBP has not conducted a training needs assessment or analyzed the target audience to determine what training is needed. For example, it has not evaluated the skills and competencies of current incumbent officers to identify appropriate training needs, or possible gaps between (1) the nearly 300 job tasks and 100 competencies that all incumbent officers are expected to perform and (2) any additional skills and knowledge that are currently taught in the revised basic academy curriculum. In addition, CBP has not reviewed incumbent CBP officers’ previous training and experience, including cross-training or any on-the-job training, to identify what training they may have completed. OTD training standards state that it is important to review previous experience and training to better identify the training needs of particular audiences. In 2007, we recommended that CBP develop data on cross-training programs to determine whether officers received required training so the agency may measure progress toward achieving its training goals. As of September 2011, CBP has not developed these data or measured the extent to which officers completed required cross-training. In June 2011, CBP retired the cross-training courses and replaced the courses with new postacademy modules that contain updated content. However, CBP could review the previous training records of its legacy and other incumbent officers to help identify what training they have completed and to identify which postacademy modules or other training they may need to take to perform their assigned duties.
Conducting a comprehensive training needs assessment could help CBP analyze and identify potential skill gaps and training needs for incumbent officers—including legacy officers—and better position it to develop training to meet these needs, thus ensuring its officers are equipped to meet the operational demands at the border. OTD criteria state that CBP training managers may use a variety of techniques during a training needs assessment to gather and analyze information about the necessary training content for proposed training, including: interviews with SMEs; focus groups (moderated group interviews) involving SMEs and representatives of the learning audience; observation of and interviews with those performing a particular job or task in the field; review of course critiques, test results, and performance evaluations; instructional review, course audit, or content review of existing training; and review of field incident reports, critical factors identified, and lessons learned. OFO officials stated that a training needs assessment would be useful, but they have been unable to conduct one due to budget constraints and may not be able to undertake a comprehensive training needs assessment until fiscal year 2013, at the earliest. However, CBP could begin the initial steps of planning for a training needs assessment for incumbent officers in fiscal year 2012. Office of Personnel Management (OPM) guidance states that a training needs assessment should include a plan that sets goals or objectives for the needs assessment; evaluates the agency’s readiness and identifies key roles; evaluates prior or other relevant needs assessments; prepares a project plan; and clarifies success measures and program milestones. Such a plan could take the form of a project plan.
Specifically, elements of a project plan include (1) establishing clear and achievable training goals; (2) balancing the competing demands for quality, scope, time, and cost; (3) adapting the specifications, plans, and approach to the different concerns and expectations of the various stakeholders involved in the project; and (4) developing milestone dates to identify points throughout the project to reassess efforts under way to determine whether project changes are necessary. OFO officials stated that such a plan could be helpful in initiating the process for conducting a training needs assessment. Project management standards also call for assigning responsibility and accountability for ensuring the results of program activities are carried out. Developing a project plan could also help CBP ensure that it is well positioned to conduct a comprehensive training needs assessment in 2013 for its incumbent officers—while allowing for monitoring and oversight of staff efforts through the completion of interim milestones to ensure progress is being made as intended. CBP has designed its training program for newly hired CBP officers to comply with its standards. Such compliance can contribute to ensuring that newly hired officers are prepared to accomplish CBP’s mission of securing the border and simultaneously facilitating the cross-border movement of millions of legitimate travelers and billions of dollars in international trade. However, CBP faces challenges in ensuring that the training needs of its nearly 20,000 incumbent CBP officers are properly identified and addressed. The results of its covert tests are not generalizable to the entire CBP officer population. However, they reveal a consistent pattern of weaknesses among the officers tested in their ability to perform basic tasks, and these weaknesses have not been corrected.
CBP has no plans for assessing the effectiveness of its “Back to Basics” course and subsequent follow-on training developed in response to the covert tests. Assessing the effectiveness of this training in improving incumbent officer performance could help CBP management know if the training is a sufficient response to the weaknesses identified by the covert tests or if additional adjustments are needed. In addition, CBP has not established policies and procedures to guide OFO’s implementation and oversight of incumbent officer training, including entry of complete and accurate data into TRAEN. Having policies and procedures to ensure that managers are fulfilling their oversight responsibilities, including maintaining accurate and complete training records, could help improve CBP’s knowledge of whether incumbent CBP officers have been properly trained. Given CBP’s commitment to reinforcing the law enforcement mindset among all CBP officers, evaluating the training needs of current CBP officers so that they can be addressed in a timely and cost-efficient manner is important. In addition, given budget constraints on training resources throughout the government, planning to assess the skill needs of incumbent CBP officers could help ensure that a road map is in place for conducting such an assessment, thereby ensuring that CBP’s officer workforce is equipped to meet the operational demands at the border.
To improve CBP training efforts, we recommend that the CBP Commissioner take the following four actions: (1) conduct an evaluation of the effectiveness of the “Back to Basics” and subsequent follow-on training; (2) conduct a comprehensive assessment of its covert test results to identify the causes of and systemic issues underlying the results; (3) establish a policy that specifies roles and responsibilities for CBP officer training implementation and related oversight, including oversight responsibilities to ensure that training records are entered in TRAEN completely and accurately; and (4) develop a plan for conducting a training needs assessment to address any skill gaps for incumbent CBP officers and then implement that plan. We provided a draft of the sensitive version of this report to DHS for comment. DHS provided written comments, which are reprinted in appendix III. In commenting on the sensitive version of this report, DHS, including CBP, agreed with the recommendations. Specifically, DHS stated that CBP is taking action or has taken action to address each recommendation. DHS agreed with the first recommendation that CBP conduct an evaluation of the effectiveness of the training course and subsequent follow-on training, and stated that the Office of Field Operations and the Office of Training and Development will work in partnership to determine if the “Back to Basics” and follow-on training had an effect on overall CBP officer performance by conducting a study and obtaining the results of any further covert tests by March 30, 2012. Regarding the second recommendation that CBP conduct a comprehensive assessment of its covert test results, DHS agreed and stated that the Office of Internal Affairs plans to conduct a comprehensive assessment of its covert test results for fiscal year 2011 by December 30, 2011.
DHS agreed with the third recommendation that CBP establish a policy that specifies roles and responsibilities for CBP officer training implementation and related oversight, and stated that a policy will be developed by March 30, 2012, to clarify the training roles and responsibilities at the national and local levels, including the responsibility for maintaining accurate training records. Regarding the fourth recommendation that CBP develop a plan for conducting a training needs assessment to address any skill gaps for incumbent CBP officers and then implement that plan, DHS stated that OFO is coordinating with OTD to evaluate current training to identify any existing training gaps and plans to address any identified needs through formal training by December 31, 2012. If effectively implemented, these actions should address the intent of the recommendations. DHS raised an issue regarding the report’s characterization of the DHS covert test results. Specifically, DHS stated that the covert tests were deliberately designed to test only a specific aspect of the overall primary inspection process within the wide range of inspectional duties that CBP officers perform and are not a valid measure of overall officer performance and capabilities or of the reliability of the entire admissibility process. The report noted that the test results are not generalizable to all ports of entry. However, OFO officials emphasized that the tests are informative in that they can help management identify possible weaknesses and vulnerabilities in the inspection process and in CBP officers’ ability to perform basic tasks. Specifically, the tests are designed to provide a snapshot of the level of a port’s performance related to the testing objectives.
Further, according to the protocols, the tests are designed to test and challenge CBP officers on their abilities, adherence to policies and procedures, and use of technologies to detect and prevent individuals attempting to enter the United States through the use of document fraud. OFO also developed and mandated an annual fraudulent document course based on the initial response to the covert test results. Nevertheless, we incorporated language throughout the report to clarify the objectives and the scope of the covert tests. We believe that this report presents a valid characterization of the covert test results and their potential uses. DHS also provided technical comments, which we incorporated into the report as appropriate. We will send copies of this report to the Department of Homeland Security, the Commissioner of U.S. Customs and Border Protection, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge at GAO’s website, https://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8816 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other GAO contacts and staff acknowledgments are listed in appendix IV. In February 2009, Customs and Border Protection (CBP) management and the union that represents CBP officers reached an agreement to allow CBP officers the opportunity to bid and rotate to a new work unit, specialized team, or port location every 1 or 2 years. This bid-and-rotation system for CBP officers is based on seniority and is designed to increase officer morale and retain CBP officers. The system also provides port management the ability to assign CBP officers to units based on immediate, changing workload demands. The following list includes specialized teams that operate at all U.S. air, land, and sea ports of entry.
Ports may have additional CBP officer specialized teams depending on their size, environment, and mission demands. These are the passenger work units charged with the interdiction of high-risk passengers attempting to facilitate surreptitious entry of contraband or who may be associated with terrorist activities. These are the work units charged with using automated systems to target high-risk passengers, conducting threat analysis, or using after-action reports to identify threats. This work unit is charged with processing violations of the Immigration and Nationality Act (INA), which may result in adverse actions, such as a determination of inadmissibility to the United States. This work unit is charged with targeting and examining outbound commercial cargo for violations of laws, rules, or regulations. This work unit is charged with using automated systems to target high-risk commercial shipments, conducting analysis, or using after-action reports to identify threats. This work unit is charged with the inbound and/or outbound interdiction of narcotics and other contraband, including currency, arms and ammunition, as well as terrorist-related materials, in the cargo and/or passenger environments. This work unit is charged with providing nationally mandated and locally designed training, including, but not limited to, pre-academy, post-academy, virtual learning center, and unification training. This work unit is charged with the scheduling of all regular day and overtime assignments, as well as the administration of the Customs Officers Pay Reform Act, including, but not limited to, overtime cap compliance and annuity integrity. This work unit is charged with developing and conducting local firearms programs, including, but not limited to, conducting required firearms qualifications and use-of-force training, maintaining quantities of firearms-related supplies and equipment, and conducting annual firearms inventory.
The following mandatory courses are required either on a one-time basis or annually for all nonsupervisory Customs and Border Protection (CBP) officers. The specialized courses were developed to enhance incumbent nonsupervisory CBP officer skills in specific areas and are not mandatory. In addition to the contact named above, Michael Dino, Assistant Director, Kathryn Bernet, Assistant Director, and Nanette J. Barton, Analyst-in-Charge, managed this assignment. Jennifer Bryant and Edith Sohna made significant contributions to the work. Stanley Kostyla assisted with design and methodology. Frances Cook provided legal support. Katherine Davis provided assistance in report preparation.

Recent incidents involving potential terrorists attempting to enter the country highlight the need for a vigilant and well-trained workforce at the border. U.S. Customs and Border Protection (CBP), within the Department of Homeland Security, is the lead federal agency in charge of inspecting travelers and goods for admission into the United States. About 20,000 CBP officers play a central role in ensuring that CBP accomplishes its mission of securing the border while also facilitating the movement of millions of legitimate travelers and billions of dollars in international trade. GAO was asked to assess the extent to which CBP has (1) revised its training program for newly hired CBP officers in accordance with training standards and (2) identified and addressed the training needs of incumbent CBP officers. GAO analyzed data and documentation related to the agency’s training efforts, such as its covert test program and its training records. GAO also interviewed CBP officials and CBP officers. This is a public version of a sensitive report that GAO issued in October 2011. Information CBP deemed sensitive has been redacted. CBP revised its training program for newly hired CBP officers in accordance with its own training development standards.
Consistent with these standards, CBP convened a team of subject-matter experts to identify and rank the tasks that new CBP officers are expected to perform. As a result, the new curriculum was designed to produce a professional law enforcement officer capable of protecting the homeland from terrorist, criminal, biological, and agricultural threats. In addition, the curriculum stated that the CBP officer is to identify behavioral indicators displayed by criminals and draw conclusions and take appropriate action, effectively interview travelers to identify potential threats, identify fraudulent documents, and use technology in support of the inspection process. CBP has taken some steps to identify and address the training needs of its incumbent CBP officers, but could do more to ensure that these officers are fully trained. GAO examined CBP’s results of covert tests conducted over more than 2 years and found significant weaknesses in the CBP inspection process at the ports of entry that were tested. In response to these tests, CBP developed a “Back to Basics” course in March 2010 for incumbent officers but has no plans to evaluate the effectiveness of the training. Moreover, CBP has not conducted an analysis of all the possible causes or systemic issues that may be contributing to the test results. Further evaluation of the training and the causes underlying covert test results could help inform CBP about whether the training is sufficient to address the weaknesses identified by the covert tests or whether adjustments are needed. In addition, CBP offices are responsible for recording their employees’ training records; however, CBP does not have a policy that assigns responsibility to port management to ensure that their staff enter data into its training records system completely and accurately.
A policy outlining the roles and responsibilities of offices and positions for training could help clarify which offices and positions are responsible for identifying and addressing training needs and could help hold these offices accountable for their responsibilities. Moreover, CBP currently does not have reliable training completion records to ensure that CBP officers received required training or other training relevant to their assigned duties. Based on GAO’s analysis of training records, more than 4,000 customs officers have not completed the immigration fundamentals, immigration law, and agricultural fundamentals courses, although they were required to complete them during a cross-training program. According to CBP, the training completion records are incomplete, and it is unlikely that the officers did not complete the required cross-training. Nevertheless, without reliable training records, CBP cannot provide reasonable assurance that all customs officers completed the required cross-training. Further, CBP has not conducted a needs assessment that would identify any gaps between identified critical skills and incumbent officers’ current skills and competencies. A needs assessment could enhance CBP’s ability to ensure its workforce is trained to meet its mission. To improve CBP training efforts, GAO recommends that the CBP Commissioner evaluate the “Back to Basics” training course; analyze covert test results; establish a policy for training responsibilities, including oversight of training records; and conduct a training needs assessment. CBP concurred with the recommendations and is taking steps to address them.
WIA repealed, after 16 years, the Job Training Partnership Act, and in doing so, introduced various reforms to the coordination and delivery of federally funded employment and training services. Program year 2000 was the first year in which states and localities operated programs under WIA. WIA’s reforms affected youth as well as adult and dislocated worker services. Among the most significant changes to youth services was the consolidation of JTPA’s two separately funded youth programs—the Title II-B Summer Employment and Training Program and the Title II-C year-round training program—into a single year-round program under Title I-B of WIA with a fiscal year 2001 funding level of $1.4 billion. DOL estimated 721,000 youth participants would be served in program year 2001. JTPA’s summer employment program, with a 1999 funding level of $870 million, was significantly larger than the JTPA year-round program funded at $130 million. As a result of consolidating JTPA’s two youth programs, summer employment became one of the many youth services under WIA. Youth services under WIA are intended to be more comprehensive and longer-term than under JTPA, while offering local areas the flexibility to tailor services to meet the needs of individual youth. While both JTPA and WIA required that youth receive appropriate services based on an assessment of their service needs, WIA mandated that 10 youth services, referred to as program elements, be made available to all eligible youth. (See table 1.) Under JTPA, several of these program elements were either optional or not present. For example, leadership development and the 12 months of follow-up services upon program completion are new under WIA. In addition to merging JTPA’s summer and year-round programs, WIA targets services to a youth population that is potentially lower income than that targeted under JTPA.
While both programs included the same income eligibility ceiling, JTPA also granted eligibility to youth who participated in the free- and reduced-price school lunch program, which had a higher income eligibility ceiling than that under WIA. JTPA also allowed a greater percentage of non-low-income youth than WIA, 10 percent compared to 5 percent. Furthermore, by requiring that 30 percent of WIA youth funds be spent on out-of-school youth, WIA targets young people who are no longer attending any school, including an alternative school. While JTPA’s year-round program also required serving some out-of-school youth, the summer employment program did not. For more information on how WIA and JTPA differ in their key youth provisions, see appendix I. The WIA youth appropriation consists of formula funds, which states receive and allocate to their local workforce investment areas, and Youth Opportunity Grants, which DOL awards to local areas on a competitive basis. States are required to allot at least 85 percent of the youth formula funds to local areas based on criteria that include the number of disadvantaged youth in each local area compared to the total number of disadvantaged youth in the state. In addition, states may set aside up to 15 percent of the youth funds for statewide youth activities, which include disseminating a list of eligible youth service providers. WIA permits states to combine the set-aside from the youth allotment with similar set-asides from their adult and dislocated worker allotments. However, local boards are prohibited from transferring formula funds from the WIA adult and dislocated worker programs to the youth program or vice versa. In addition, new under WIA is the requirement that youth services be made available through the one-stop system. One-stop centers can serve as the entry point for all youth in the local area, providing universal access to information and services.
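The 85/15 split of the youth formula funds described above can be expressed as simple arithmetic. A minimal sketch, assuming (for illustration only) that the local share is distributed purely in proportion to each area’s disadvantaged-youth count, which is one of several statutory criteria, not the full allotment formula:

```python
def allocate_youth_funds(state_grant, disadvantaged_youth_by_area,
                         local_share=0.85):
    """Split a state's WIA youth formula grant: at least 85 percent to
    local areas (here, proportional to each area's disadvantaged-youth
    count, one of the statutory criteria) and the remainder, up to 15
    percent, set aside for statewide youth activities. Illustrative only;
    not the full statutory formula."""
    local_pool = state_grant * local_share
    set_aside = state_grant - local_pool
    total_youth = sum(disadvantaged_youth_by_area.values())
    allocations = {
        area: local_pool * count / total_youth
        for area, count in disadvantaged_youth_by_area.items()
    }
    return allocations, set_aside

areas = {"Area A": 6000, "Area B": 4000}
allocations, set_aside = allocate_youth_funds(1_000_000, areas)
# On a hypothetical $1,000,000 grant: local pool $850,000, of which
# Area A receives $510,000 and Area B $340,000; statewide set-aside $150,000.
```

The hard split also reflects the prohibition noted above: once allotted, youth formula funds cannot be transferred to or from the adult and dislocated worker programs.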
These centers are gateways to services for WIA-eligible youth; non-eligible youth may also receive services at one-stop centers, such as job searches, career exploration, use of career center resources, and information on and referrals to other youth providers. WIA also strengthens accountability by establishing younger and older youth performance indicators for all youth receiving WIA services, including those receiving summer employment services, and by establishing customer satisfaction indicators for participants and employers. In contrast, JTPA did not establish any performance indicators for the summer employment and training program. States must negotiate and reach agreement on their expected levels of performance with the Secretary of Labor. Similarly, local areas must negotiate and reach agreement with the governor on local levels of performance. Furthermore, WIA holds states accountable for achieving their performance levels by linking them with financial incentives or sanctions. Lastly, WIA youth activities are coordinated through newly created state and local workforce investment boards. The state board is established by the governor to carry out statewide youth activities and to develop the state strategic plan. The 5-year plan must describe the state’s strategy for providing comprehensive services to eligible youth, identify the criteria local boards use to award grants and select providers, and describe coordination with other youth programs. The majority of state board members, including the board chair, must come from private business. The governor also certifies local boards to, among other duties, develop the local plan and select one-stop operators and youth service providers. Like state boards, the majority of local board members and the chair must come from private business.
Among WIA’s most significant reforms is the requirement that local boards establish a youth council as a subgroup of the board, to coordinate and oversee the local WIA youth program (see table 2). While the youth council’s membership must reflect a broad cross-section of community representatives, youth councils are not required to include members from educational entities. Although they are not mandatory members of workforce investment boards and youth councils, state and local school-based career programs, including School-to-Work (STW) programs, complement the youth development system envisioned under WIA by linking education with workforce development and by engaging a broad range of community representatives in designing and implementing a comprehensive, integrated system of education and workforce preparation that reflects local labor market needs. Like WIA, STW promotes classroom teaching that is more closely linked with the workplace to help both in-school and out-of-school young people prepare for postsecondary education, advanced training, and careers. Three components form the core of STW programs: school-based learning, work-based learning, and connecting activities. First, school-based learning refers to instruction and curricula that integrate academic and vocational learning. Second, work-based learning includes job training and work experiences that coordinate with classroom learning, workplace mentoring, and instruction in general workplace competencies as well as all aspects of an industry, leading to the awarding of a skill certificate. Third, connecting activities refer to the range of activities that integrate school and work, including matching students with employers and mentors, linking participants with community services, providing technical assistance to schools and employers, and connecting youth development strategies with employers’ strategies for upgrading workers’ skills.
As the entity responsible for implementing WIA, DOL has issued guidance and provided assistance on various technical aspects of WIA’s implementation. For example, through its Training and Employment Guidance letters, DOL has provided guidance to state and local boards on a number of topics, including how to integrate the summer and year-round youth programs, provide comprehensive youth services, and identify sources of funding for youth services. In addition, DOL has sponsored national and regional conferences that serve as a forum to educate local boards and youth councils on implementing WIA’s youth provisions and to share information on promising practices. In emphasizing state and local flexibility, DOL guidance has been very broad, and the establishment of specific policies has been delegated to states and local areas. With few exceptions, local workforce investment boards implemented WIA’s required youth provisions by establishing youth councils and a network of youth service providers, despite some implementation challenges. We found that nearly all of the youth councils were active by the start of program year 2000—the first WIA program year—and a majority of councils included the WIA-required members. However, a number of the local boards reported difficulty in recruiting youth and parents to serve on the council. To establish a network of youth service providers, youth councils recommended service providers to their local boards through the competitive selection process and developed strategies for connecting youth to the one-stop service delivery system, although officials in some local areas we visited described difficulties in doing so. Most local boards reported that their contracted youth service providers served youth directly rather than through the one-stop centers. Moreover, many boards used WIA’s flexibility to expand their services and move toward a comprehensive youth development system. 
These efforts included appointing optional representatives to the youth council, such as those from private industry, establishing youth-exclusive one-stop centers, and securing additional non-WIA funding to increase their capacity to serve a broader group of youth, some of whom would not be WIA-eligible. Nationally, virtually every local workforce investment board established a youth council, and 78 percent had done so by July 1, 2000, when the first WIA program year began. In fact, 72 percent of the boards implemented the youth council requirement in the year preceding July 1, 2000, in anticipation of WIA. Also, by the end of the first program year, nearly all youth councils had held at least one meeting since their inception, averaging eight meetings. Most youth councils (70 percent) had between 11 and 25 members, and the councils as a whole averaged about 20 members. (See fig. 1.) In addition, more than half of the local boards reported that most or all of the youth council members typically attended the youth council meetings, and 36 percent said that about half of the members attended. Finally, 56 percent of all local boards reported that their youth council membership included all four categories of WIA-required members asked about in our survey. Among the WIA-required members, personnel experienced in youth activities were represented on the greatest proportion—93 percent—of youth councils. In contrast, parents of WIA-eligible youth were represented on the lowest proportion (about 71 percent) of youth councils. Board officials and service providers in many local areas we visited stated that WIA boards and youth councils were important to coordinating a broad array of youth services in the community and leveraging resources.
Board officials in Sonoma County, California, for instance, told us that the youth council brought key stakeholders to the table for the first time, including representatives that had seldom collaborated with each other, such as those from the juvenile justice and school systems. Service providers in San Jose, California, and Cheyenne, Wyoming, stated that the youth council meetings were a good forum for sharing information and learning how providers could complement one another’s youth services to eliminate service gaps or duplication. In addition, board officials in Madison, Wisconsin, told us that the large membership size of the local board and youth council offered the potential to leverage additional community resources. Establishing youth councils, however, was not without its challenges. Nationwide, 65 percent of local boards reported difficulty in getting youth members, and 54 percent found it difficult to get parents of eligible youth to participate on the council. One local board official we visited told us that securing youth participation on the council was challenging, in part because youth lacked transportation to youth council meetings, found it intimidating to attend large meetings dominated by adults, or had class and work schedules that conflicted with council meetings. A state board official said that parents of WIA-eligible youth, often low-income themselves, were also difficult to recruit onto councils because they could not attend council meetings without taking unpaid time off from work. To establish a network of WIA youth service providers, local boards competitively selected youth service providers based on youth council recommendations, but some boards reported that their youth councils found it difficult to obtain multiple responses to the requests for proposals (RFPs).
Nationwide, 80 percent of youth councils issued competitive RFPs in program year 2000, and most of those that issued the RFPs identified between 2 and 12 eligible service providers. About 10 percent of the councils that issued RFPs reported that they identified only one eligible provider. While youth councils received responses to their RFPs, generally there was little competition for service provider contracts in many local areas. We found that 63 percent of the councils recommended to the local board for its approval the same number of service providers as they had identified through the RFP selection process. In addition, 95 percent of local boards that received recommendations from their youth councils selected all of the recommended providers. A local board official in Milwaukee, Wisconsin, told us that, while the board selected the same providers that had served youth under JTPA, the youth council wanted to encourage new providers to apply for WIA service contracts, including private sector providers. Most local boards reported that contracted service providers generally served youth directly at the providers’ facilities rather than at the one-stop centers in their local areas. In most of the one-stop centers we visited, youth were served alongside adults. In general, the centers featured a self-service resource room equipped with personal computers, phones, or other job search aids, as well as office space for one-stop staff and agency partners to offer a variety of employment, training, and social services. Employers conducted job interviews at some one-stop centers, and officials at the rural New Jersey one-stop we visited told us that the state offered employers financial incentives for hiring one-stop clients.
A few of the one-stops offered a child playroom or an adaptable computer workstation for disabled users, and in two of the centers, staff members of the various partner agencies were dispersed throughout the office space to promote their interaction and seamless service delivery. Most local boards at the sites we visited required contracted service providers to make available all 10 required program elements to youth enrolled in WIA programs. For example, one WIA provider in rural Wisconsin delivered all 10 elements in a long-term, year-round program for out-of-school youth. In the program, 16- to 24-year-olds worked in teams to build or refurbish low-income housing. At the building sites, the participants received paid employment, occupational skills training, leadership opportunities, and mentoring from an adult supervisor. When not at the sites, they received classroom instruction to prepare for their high school equivalency credential, career counseling, and a variety of support services, such as health care, meals, and mental health counseling. Upon exiting the program, selected participants received monthly follow-up services for at least two years. Even though most youth councils reported that they issued RFPs, one of the challenges local areas—often rural ones—faced was in obtaining multiple responses to their RFPs. For example, state board officials in North Dakota said that the limited number of service providers in the state’s sparsely populated and spread-out rural areas necessitated the use of the one-stop center to serve WIA youth and prompted state officials to seek a waiver from DOL to the competitive selection requirement for those local areas. Other state and local WIA officials in both rural and urban areas stated it was difficult to identify qualified service providers due to providers’ lack of experience in delivering WIA’s broader range of mandatory services and greater emphasis on serving out-of-school youth compared to JTPA.
To develop providers’ qualifications, the local boards in Middlesex County, New Jersey, and Miami, Florida, conducted regular workshops to educate providers on their new expectations under WIA. In addition, some state and local WIA officials told us that some of the 10 program elements, such as mentoring and the 12-month follow-up, were difficult or costly to deliver and discouraged service providers from responding to the RFPs. To mitigate potential disincentives for service providers, local board officials in Orange Park, Florida, said that they planned to have one-stop staff rather than service providers conduct follow-up, which would also help link youth to the one-stop system, and local board officials in Madison, Wisconsin, told us they planned to coordinate some WIA follow-up services with those of non-WIA programs, such as Temporary Assistance for Needy Families (TANF). Even though one-stop centers offered WIA youth services, another challenge faced by most local areas we visited was attracting youth to the one-stop centers, and these areas had developed outreach strategies to bring youth into the centers. Unless referred to or brought into the one-stop centers by schools and other service providers, youth typically did not come into the centers on their own. In some areas, such as rural Wisconsin, public transportation to the one-stop center was not available. One service provider we interviewed was reluctant to send youth to the one-stop because the services were geared primarily toward adult clients or because youth might have felt uncomfortable mingling with the adult clientele. Nationally, local boards were engaged in efforts to link youth to one-stops, and nearly three-quarters of boards did so by recruiting youth to the centers. (See fig. 2.) Local areas we visited had also developed various strategies to link youth to one-stop centers.
For example, the one-stop center in rural Wisconsin we visited conducted job fairs and was authorized to hand out work permits—a prerequisite for younger youth to obtain employment. The local board in Middlesex County convened focus groups with youth to identify ideal locations for a one-stop center and youth services that should be provided there. The one-stop center we visited in rural Florida was located inside a shopping mall and was considering advertising its services in the mall’s movie theater because it was frequented by youth. Recognizing one-stop systems’ adult focus, DOL announced in September 2001 that it had awarded competitive grants to 15 local boards and youth councils to develop and implement strategies to improve youth connections to the one-stop system, which DOL plans to disseminate in a technical assistance guidebook after the project’s completion sometime this year. Most youth councils exercised the flexibility provided by WIA by expanding their membership to include optional representation. For example, 80 percent of youth councils include one or more members from the private sector—the most frequent group (36 percent) to chair the youth council. (See fig. 3.) Other optional members included organized labor and vocational rehabilitation representatives. Local board officials in Cumberland/Salem County, New Jersey, told us that having co-chairs from private industry helped them connect with area employers, leverage additional youth funding, and have greater knowledge of the local labor market. Board officials in several local areas noted, however, that getting business to fully participate on youth councils was still a challenge, in part because business members were reluctant to contribute resources or were accustomed to making policy decisions, not merely serving in an advisory capacity to the local board. A few local boards—nearly 5 percent nationally—reported having established one or more one-stop centers that served only youth. 
In Miami, for example, the youth one-stop centers we visited were either co-located with or adjacent to the comprehensive one-stop centers. The one-stop operators told us that this arrangement gave them the flexibility of referring youth who were otherwise ineligible for WIA youth services to the comprehensive center. The youth one-stop centers were also electronically linked with other service providers and one-stop centers in the community. Milwaukee, Wisconsin, opened a new youth one-stop center in February 2002, featuring a lounge area, recreation, and childcare, as well as youth-specialist staff cross-trained in all the one-stop partner programs and services in order to promote more seamless service delivery. Local boards also exercised their flexibility under WIA to expand their capacity to serve both WIA and non-WIA at-risk youth by leveraging additional resources to supplement their WIA formula grant. Nationally, 50 percent of local boards reported having non-WIA funding available in program year 2000 for youth activities. The extent to which non-WIA funding supplemented WIA Title I-B youth funding varied by type of local workforce investment area. Rural areas were less likely than nonrural areas to receive non-WIA funds. For example, in local areas that described their workforce investment area as a portion of a rural area, non-WIA funding represented, on average, an additional $375,000 or 50 percent of the WIA Title I-B grant, compared to an additional $941,000 or 83 percent in nonrural areas. To supplement WIA youth funds, several state and local board officials told us that they were combining WIA with funds from TANF or other programs. For example, Pennsylvania used state TANF dollars to award competitive grants to local boards to serve both WIA and non-WIA youth.
While built around WIA’s 10 program elements, the grants encouraged local areas to design innovative approaches to serving all youth but also required them to identify ways of sustaining the programs given that the availability of grant funding was uncertain. Furthermore, the youth council in Orange Park, Florida, encouraged the leveraging of non-WIA resources in its requests for proposals to service providers. To establish linkages with the education community, most youth councils included local educators and school-to-work (STW) representatives as either members or chair of the council, even though these members were not mandated under WIA. Moreover, secondary and postsecondary schools were contracted to provide youth services, typically delivering services at the schools, or partnered with the one-stop centers to deliver them. However, some youth councils found it difficult to partner with the education community due to the absence of a shared vision of youth development. In these communities, some school personnel were reluctant to incorporate workforce development activities into classroom learning because they did not want to broaden their role in youth development beyond education. Both youth council officials and educators expressed a need for additional technical assistance to strengthen linkages between the education and workforce communities. Nationwide, most youth councils established linkages with the education community by including educators on their youth councils, even though they were not mandated youth council members. For example, 94 percent of local workforce investment boards reported that school district personnel served on their youth council, while 79 percent reported that STW representatives were on the youth council. In addition, we found school district representatives chaired 20 percent of the youth councils, and 13 percent were chaired by an STW representative. 
A majority of the local workforce investment boards we surveyed reported it was easy to get educators to participate on the youth council. In some of the local areas we visited, educators who were members of their local STW committee easily transitioned to the WIA youth council. In Miami, for example, many members of the STW committee served as members of the youth council even though additional youth council members were appointed to meet WIA’s membership requirements. Furthermore, in Sonoma County, the youth council established linkages with the education community by serving as a committee to both the local workforce investment board and the STW board. In both of these communities, the local boards and youth councils credited their partnership with STW for strengthening their relationship with the schools. In all the sites we visited, youth councils developed various strategies to link with the education community, including contracting with schools as service providers and partnering schools with the one-stop centers to deliver youth services. (See table 3.) Most of the local workforce investment boards we visited awarded service contracts to secondary or postsecondary schools that provided youth services either directly or in collaboration with other education providers or community-based organizations. For example, an education provider we visited in Cumberland/Salem County, New Jersey, collaborated with local school districts, universities, and private businesses to operate a program designed to help youth explore careers in the food industry. During the summer portion of the program, 30 in-school youth between the ages of 14 and 16 learned basic job skills in the classroom, took organized field trips to farms and food businesses, and acquired work experience at participating local food businesses and restaurants. 
During the remainder of the school year, students were placed in paid internships within the food industry and received mentoring services from employers as well as ongoing career counseling from their school. In Milwaukee, the local board contracted with the University of Wisconsin-Milwaukee to provide a 6-week computer technology program for in-school youth between the ages of 15 and 19. On Saturday mornings, participants attended classes in word processing, slide presentation, and web page development at the college campus. Upon completion of the computer courses, participants were then enrolled in a 6-week program in life skills and learned how to balance school with work, prepare for the workforce, and manage interpersonal working relationships on the job. Most of the one-stop centers we visited established linkages with the education community by partnering with schools to provide services to youth. For example, some local high schools brought students into the one-stop centers to learn about available services or to participate in career fairs. To link schools with one-stop centers, staff from the Milwaukee youth-only one-stop center traveled to high schools to conduct computerized assessments of WIA participants and help them develop career plans. Some educators remained cautious about increasing their involvement in providing WIA youth services. First, some educators believed that WIA’s vision for providing comprehensive youth development services to at-risk youth was inconsistent with the traditional mission that schools generally embraced, which was to provide academic services to all youth. In Milwaukee, for example, some schools were reluctant to allow WIA youth services to be provided at the schools because of the perceived stigma associated with WIA services being targeted to low-income, at-risk youth. 
In California and Florida, some educators said schools in their areas emphasized increasing student academic achievement and standardized test scores, rather than promoting students’ exposure to career exploration. Consequently, some educators were reluctant to incorporate workforce development activities into classroom learning, particularly where student academic achievement was tied to sanction and incentive policies. Second, some education providers we visited stated that the costs of providing WIA youth services outweighed the benefits of education’s participation. For example, a school official in rural Wisconsin told us that meeting WIA’s requirement to conduct a minimum 12-month follow-up and reporting on participant outcomes was resource-intensive for the school and demanded administrative time that could be better spent on direct service delivery. Furthermore, workforce investment officials in Delaware and Illinois stated that colleges were required to report performance data on all enrolled students, in addition to WIA students. According to these officials, this reporting requirement increased the colleges’ paperwork burden and costs relative to the amount of WIA funding they received, creating a financial disincentive for colleges to provide WIA services. Many local board, youth council, and education officials we interviewed said having more formal technical assistance on how to create successful partnerships with one another would improve the linkages that WIA has helped to create between the workforce and education communities. For example, youth council members in Middlesex County expressed a need for strategies to help the council effectively communicate to the education community that schools could play an important role in preparing youth for the workforce. 
In addition, some workforce and education officials we visited expressed a need for examples of promising practices used by others to strengthen the links between the one-stop centers and schools. Two factors facilitated implementation of WIA’s youth provisions, while some WIA requirements impeded implementation or service delivery. Experience in collaboration among youth-serving agencies and a high priority placed on youth development activities by state officials facilitated implementation. Workforce officials told us that these factors enabled them to work more cooperatively and with a wider range of community providers in coordinating and delivering youth services. However, workforce officials also stated that implementation progress and service delivery were inhibited by requirements to document eligibility and to spend 30 percent of WIA youth funds on out-of-school youth services and by unclear youth performance indicators. Two factors enabled state and local workforce officials to work collaboratively with community representatives and improve coordination and delivery of youth services—experience in collaboration and the priority placed on youth development. Many state workforce officials we interviewed were already experienced in collaborating with state and local agencies, local boards, and youth-serving organizations. In New Jersey, for example, state officials told us that WIA’s requirements to establish partnerships did not represent a significant shift because many state and local youth-serving agencies were already working together to share information and provide services. Officials in most of the local areas we visited characterized the collaboration among the service providers, local board and youth council, and youth-serving agencies as strong due primarily to their longstanding relationships. Likewise, some organizational structures facilitated WIA implementation by encouraging collaboration. 
A number of state officials we interviewed told us they consolidated some state workforce, education, or human service functions prior to WIA’s implementation in order to streamline and improve coordination and delivery of youth services. For example, Michigan began consolidating its state workforce development programs in the early 1990s. A single department now administers WIA as well as a variety of other workforce and education programs such as TANF, Welfare-to-Work, Wagner-Peyser employment services, vocational rehabilitation, secondary and postsecondary career and technical education, and adult education. According to state WIA officials, this consolidated structure helped them to sidestep potential turf struggles and maximize service resources available to help many populations, including youth, by coordinating diverse programs. Second, we found the high priority placed on youth development activities also facilitated implementation. For instance, 15 states had established state-level youth councils, in part, to assist local boards in implementing the youth provisions. In Colorado and Illinois, state youth council members mentored local youth council members, provided technical assistance, and helped local youth councils leverage resources. In addition, we found that 34 states had allocated a portion of the Governor’s 15 percent set-aside to WIA youth activities in program year 2000. In California, for example, the state used part of its 15 percent set-aside on a youth development and crime prevention initiative that offers alcohol and drug treatment, mental health counseling, job training and employment opportunities, and mentoring to at-risk youth. Oregon state board officials told us they spent some of their youth set-aside to help service providers deliver mentoring, summer employment, and follow-up youth services. 
While WIA encouraged state and local areas to implement new approaches, it also included some requirements that made implementation difficult and impeded service delivery. State and local board officials were concerned with collecting documentation needed to verify eligibility for WIA youth services, spending at least 30 percent of WIA youth funds on out-of-school youth, understanding and measuring youth performance indicators, and meeting partnering requirements. The challenge of meeting these requirements often hindered implementation, excluded potentially eligible youth from participating in WIA services, and diverted resources away from direct service delivery, according to local officials. A majority of state and local officials we interviewed or visited told us that documenting low-income eligibility was difficult to accomplish and resource-intensive. The law specifies that youth must be low-income and face one or more barriers to employment to be eligible for WIA youth services. (For more information on the barriers, see app. I.) State and local officials told us that many at-risk youth were unable or unwilling to provide pertinent documentation of their income eligibility, such as their parents’ paycheck stub or tax return. In Orange Park, local board officials stated that obtaining documentation from at-risk youth was difficult, particularly for homeless youth or for youth being raised by a single parent or grandparents. Service providers in Middlesex County, New Jersey, said that at-risk youth did not necessarily have a good relationship with their parents, compounding the difficulty of obtaining documentation. They added that getting documentation was also difficult in cases in which parents mistrusted service providers whom they perceived as prying into their financial affairs. Consequently, the most at-risk youth were the least likely to be able to provide documentation to verify their eligibility for needed services, according to local board officials. 
In addition, local board officials said obtaining necessary documentation was time-consuming and diverted financial and staff resources away from direct service delivery. One local board in Florida terminated a youth program because of the high administrative costs of documenting eligibility. Officials at this local board estimated that, with the change in eligibility requirements from JTPA, the number of documents increased from 1 to 21 and the processing time increased from less than 2 hours to between 10 and 20 hours per participant. These additional hours could have been better spent in delivering services rather than processing paperwork, according to the officials. Some state and local board officials told us that they preferred using the free-and-reduced-school-lunch program’s income criterion under JTPA because it was more efficient and cost effective to use existing documentation, usually a single list compiled by the schools. Some states, however, had developed strategies for addressing the concern over documentation. California, Pennsylvania, and Texas, for example, developed technical assistance guides listing procedures for documenting and verifying participant eligibility. To document that a youth met the deficient-in-basic-literacy-skills eligibility requirement, for instance, the Texas guide identified acceptable forms of documentation, which included results of a generally accepted standardized test, school records, and verification by telephone. DOL is in the process of finalizing guidance concerning eligibility documentation and projects that policy guidance will be issued later this year. WIA requires 30 percent of local WIA youth funds be spent on out-of-school youth, but many local officials said that recruiting and retaining sufficient numbers of these youth was challenging for a variety of reasons and hindered implementation efforts. 
For example, in Madison, Wisconsin, and Cumberland/Salem County, New Jersey, officials said it was more difficult to locate and follow up on this “hidden population” in contrast to in-school youth, who could be tracked through the education system. Additionally, DOL officials told us that many out-of-school youth get employment, which may make them ineligible for WIA programs because their income is too high. Finally, WIA officials in one local area told us that it was difficult to retain out-of-school youth in WIA programs because they were typically more motivated to get a job than to acquire the academic skills needed to prepare them for further education or careers. Some local areas had developed innovative ways of recruiting and retaining out-of-school youth. In Miami and Milwaukee, for example, the local boards established youth-only one-stop centers so that out-of-school youth could come into a youth-friendly facility. In addition, local officials in Miami told us that youth caseworkers went to malls and other areas frequented by out-of-school youth to recruit program participants. Service providers in Cheyenne described a youth-friendly facility, which served youth who were already in or were transitioning from foster care or who had been in an out-of-home placement. The facility also provided a job preparation program for WIA participants. Milwaukee board officials told us they planned to staff their new youth-only one-stop center with out-of-school youth specialists. Finally, a service provider in rural Wisconsin collaborated with the juvenile justice and school systems to help recruit out-of-school youth. DOL plans to issue guidance on recruiting and retaining out-of-school youth in April 2002. Another challenging WIA requirement identified by state and local officials was measuring youth performance indicators and setting performance goals. 
State and local WIA officials reported difficulties in measuring some of the performance indicators because of ambiguous definitions and problems with data availability. For example, Illinois state board officials said that unclear definitions of the credential and skill attainment indicators could lead to inconsistent reporting of outcomes among states. While DOL officials told us they developed the definition of some youth indicators in collaboration with the Department of Education, they added that some measures were defined very broadly to give states flexibility in implementing performance accountability systems. For example, DOL allows state and local areas to determine what constitutes a credential and to develop—with employer input—a statewide list of approved credentials. DOL officials acknowledged that some states defined credentialing and skill attainment more broadly than others. Additionally, several WIA officials said that, because some of the measures are based on Unemployment Insurance wage records, there was typically a 6- to 9-month lag before the data were available, making it difficult for boards to use the indicators to plan strategically or evaluate service provider performance. According to state and local officials, ambiguous definitions and lags in data availability complicated the measurement and reporting of some WIA youth performance indicators and resulted in inconsistencies in reporting and comparing outcomes within and across states. Furthermore, state and local officials reported that the youth performance goals were set at unrealistic levels—usually too high—because they were established without input from state and local officials and were derived from unreliable baseline data. Officials in several state and local areas we visited or contacted said they had little or no input into their performance goals during the negotiation process. 
DOL officials acknowledged that input was limited because some youth measures were new under WIA or new to DOL, and the agency lacked adequate time to negotiate goals from the local level up to the state level as it had intended. In addition, performance goals reflected baseline data from JTPA and the experience of a limited number of early implementation states. Also, some state officials we interviewed reported that the performance goals did not take into account states’ individual circumstances. DOL issued guidance in February 2002 on renegotiating performance levels. In the guidance, DOL noted that limitations in JTPA baseline data used to project performance levels for program years 2001 and 2002 satisfied one of the conditions for requesting revisions to earlier negotiated performance levels. WIA requires state and local workforce boards and youth councils to collaborate with a host of other partners such as public youth-service agencies, labor organizations, and community-based organizations. The law envisions these entities becoming board and council members, one-stop partners and operators, and service providers. While we found that many of these agencies did indeed participate on youth councils and deliver WIA services, state and local WIA officials said that collaborating among the different agencies was difficult and frustrating, and they lacked strategies to effectively partner with these agencies. For example, officials from one local board we visited told us that they were having difficulty finding other agencies to partner with in their efforts to implement WIA. These officials said that, while some agencies were active partners on the youth council, in the one-stop center, or as service providers, they believed the legislation did not make it easy to collaborate because it did not require other agencies to contribute resources, nor did it provide local areas with the tools to enforce collaboration. 
Officials from another local board said that different administrative rules, definitions, and reauthorization timeframes among programs administered by the different federal agencies undermined the collaboration with which local workforce investment boards are charged. Yet we found some state and local initiatives that attempted to address these concerns and facilitate greater collaboration. A state WIA official in New York, for example, told us that the state workforce board was finalizing its plan to blend the performance measures for WIA and several non-WIA programs to promote collaboration and consistency. WIA aims to significantly reconfigure the way services for at-risk youth are structured and delivered. With its mandated requirements to form youth councils reflecting broad community representation, WIA presents a unique opportunity to make fundamental changes in the way youth services are provided—but implementation challenges remain. Establishing new governance structures, building and sustaining diverse new partnerships, designing comprehensive, coordinated programs, and delivering services seamlessly will take considerable effort from state and local workforce boards and their youth councils. State and local areas must meet implementation challenges such as getting youth, parents, and businesses to participate on youth councils, promoting competition in the service provider selection process, and serving new and difficult populations. The new performance measurement system under WIA also poses challenges for states and local areas that are concerned that ambiguous definitions of skill attainment, for example, and use of unreliable baseline data to set performance goals would result in inconsistencies in reporting and comparing outcomes. Although states and local boards welcome the enhanced flexibility WIA affords them, many are only now acclimating to their new roles and relationships in the workforce development system. 
However, the lack of information and technical support on a number of these new responsibilities has hindered state and local boards in fully realizing WIA’s potential. If progress is to continue, state and local workforce investment boards and their youth councils will need additional help in building a comprehensive youth development system. State and local workforce board officials, youth council members, and youth service providers have—for the most part—embraced both WIA’s broad workforce development vision and the specifics of the youth provisions. Given the scope of youth program reforms legislated in WIA and the extent of implementation to date, significant progress has been made. Federal agencies, however, need to continue to monitor progress and assess state and local needs for additional support and guidance to further facilitate implementation. The building of a comprehensive youth development system as envisioned by WIA requires active and sustained leadership—especially at the national level—and strong working relationships between the workforce development and education systems at all levels of government. While forging strong linkages between these two systems is critical in preparing youth not only for success in the classroom but also for their future careers, some local educators remain hesitant to participate in WIA youth programs. Workforce and education officials acknowledge the need for more assistance to help strengthen the partnership between these two systems. 
To improve the availability of information on WIA youth programmatic, administrative, and other implementation issues and to enhance implementation of state and local workforce investment systems, we recommend that the Secretary of Labor issue guidance and provide assistance to state and local boards and youth councils by developing and disseminating strategies to effectively recruit and engage parents, youth, and business community representatives on the youth council; to increase the number of responses to competitive requests for proposals by encouraging youth-serving organizations new to WIA to participate in the youth program and promoting new ways of collaboration among new and existing service providers; to obtain and verify applicant eligibility information by sharing client information among agencies or using existing electronic databases (for example, DOL should consider exploring methods to extend eligibility automatically for WIA based on an applicant’s participation in other programs); to recruit and retain out-of-school youth to the WIA youth program and all youth into the one-stops; and to facilitate linkages between the board and youth council and their required youth-serving partners. Through collaboration with the Department of Education, state education agencies, and other experts, we recommend the Secretary of Labor develop and disseminate strategies to effectively link workforce and education activities, such as exploring workplace learning principles in the classroom and connecting schools to the one-stop centers. To more objectively assess state and local area performance and youth progress, we recommend that the Secretary of Labor clarify the definition of skill attainment for younger youth to ensure consistency in reporting. We provided a draft of this report to DOL for its review and comment. DOL’s comments are in appendix II. 
In its written comments, DOL agreed with all our findings and recommendations, noting that they are consistent with information it has collected from state and local implementers. DOL also found the report to be instructive in assessing local implementation efforts nationwide and highlighting best practices to improve youth services. In its comments, DOL cited its efforts to work closely with state and local partners to provide guidance and best practices on the issues identified in our recommendations, including issuing a tool kit on effective youth councils, reaching out to community-based and faith-based organizations for competitive selection of providers, simplifying eligibility documentation procedures, developing a best practices website on serving out-of-school youth, integrating school-to-work lessons learned, and clarifying the definition of skill attainment. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 5 days after the date of this report. At that time, we will send copies to the Secretary of Labor, relevant congressional committees, and others who are interested. Copies will be made available to others upon request. The report is also available on GAO’s home page at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix III. In addition to the individuals mentioned above, Karyn Angulo, Bill Bates, Jessica Botsford, Patrick DiBattista, Julian Fogle, Joel Grossman, Jeff Rueckhaus, Rebecca Woiwode, James Wright, and Michelle Zapata made key contributions to this report. 
The Workforce Investment Act of 1998 substantially changed the way youth workforce development services are configured and delivered. The act requires states and localities to create a more comprehensive system to meet workforce development needs. The act promotes partnerships among diverse programs and community representatives through participation on newly created state and local workforce investment boards and youth councils. GAO found that most youth councils nationwide included the required members and nearly all councils were active by July 2000. Local boards competitively chose youth service providers and developed strategies for one-stop centers. Most boards reported that services were provided through contracted service providers rather than one-stop centers. However, local boards had difficulty getting parents and youth to participate on youth councils. Some local areas found it difficult to identify and select youth service providers because of low response to requests for proposals. 
Getting youth to visit the typically adult-focused one-stop centers was also difficult. Youth councils linked with the education community by including representatives of local school districts and existing school-to-work programs in their membership or as youth service providers. Moreover, secondary and postsecondary schools contracted to deliver mentoring and occupational skills training. Some educators, however, were hesitant to broaden their role in youth development beyond traditional academics and saw few financial incentives to partner with the youth council. GAO found three legislative requirements that impeded service delivery. First, eligibility documentation requirements may have excluded eligible at-risk youth from services. Second, local areas had difficulty recruiting and retaining enough out-of-school youth to meet the requirement that 30 percent of local youth funds be spent on these youth. Third, ambiguous definitions and lags in data availability resulted in inconsistent reporting and comparison of outcomes within and across states.
The Joint Commission, a nonprofit organization founded in 1951, was created to provide voluntary health care accreditation for hospitals. All but one of the Joint Commission’s founding members continued to serve on its Board of Commissioners as of October 2006, including the American Hospital Association and the American College of Surgeons. The standards established by the Joint Commission address a facility’s level of performance in areas such as patient rights, patient treatment, and infection control. To determine whether a facility is in compliance with those standards, the Joint Commission conducts on-site evaluations of facilities, called accreditation surveys. The Joint Commission recognizes a facility’s compliance with its standards by issuing a certificate of accreditation, which is valid for a 3-year period. In 2004, the Joint Commission implemented a new accreditation process in an effort to encourage hospitals to focus on continuous quality improvement, rather than survey preparation. Previously, facilities were told in advance when Joint Commission surveyors would conduct their evaluations. As a part of the new process, the Joint Commission began conducting unannounced surveys. The Joint Commission employs over 900 staff members, including approximately 200 hospital surveyors from a range of disciplines—such as physicians, nurses, and hospital administrators—who conduct the accreditation surveys. In 2005, the Joint Commission accredited approximately 4,300 hospitals. The Joint Commission established Joint Commission Resources (JCR) to provide consultative technical assistance to health care organizations seeking Joint Commission accreditation. (See fig. 1.) JCR is governed by a Board of Directors and employs approximately 180 staff members, including consultants located throughout the country. 
In 2000, the Joint Commission expanded JCR’s role beyond consulting to include all educational services, such as seminars and audio conferences, which the Joint Commission previously provided. (See app. II for a timeline of key developments in the Joint Commission and JCR relationship.) JCR also became the official publisher of the Joint Commission’s accreditation manuals and support materials. JCR offers consulting services either independently to health care facilities or through a subscription-based service called the Continuous Service Readiness (CSR) program, which is typically offered in partnership with state hospital associations. The CSR program provides ongoing technical assistance and education to subscribers through a variety of means, including meetings, e-mails, telephone calls, and conferences. In 2004, we reported that CMS’s oversight of the Joint Commission hospital accreditation process was limited. Although CMS conducts on-site validation surveys of a sample of Joint Commission-accredited hospitals, the agency cannot restrict or remove the Joint Commission’s accreditation authority if it detects problems. CMS reported that the agency and the Joint Commission engage in ongoing dialogue to identify potential hospital accreditation performance issues. In addition, CMS provides an annual report of its findings to Congress. Unlike the Joint Commission, JCR is not subject to any oversight by CMS. When developing policies regarding its relationship with JCR, the Joint Commission has been affected by the increased focus in both the public and private sectors on governance issues. The Sarbanes-Oxley Act of 2002, passed in response to corporate and accounting scandals, required publicly traded companies to follow new governance standards, including those designed to ensure auditors’ independence from their clients.
Even though most provisions of the Sarbanes-Oxley Act are not applicable to nonprofit organizations, activities that have occurred in the wake of the act have affected nonprofits. For example, several state legislatures are considering legislation that applies standards similar to the Sarbanes- Oxley requirements to nonprofit organizations. In addition, some nonprofit organizations, such as the Joint Commission, have voluntarily adopted policies and altered governance practices based upon the act. Organizations in the public and private sectors have also begun to institute compliance programs and those that provide accreditation or certification services have developed standards to ensure the independence of these services. Compliance programs for health care organizations—such as hospitals, home health agencies, and medical supply companies—have used provisions of the federal Sentencing Guidelines, developed in 1991, as a program model. These guidelines lay out two common principles of adequate compliance programs—to prevent and detect criminal conduct, and to promote an organizational culture of ethics and compliance with the law. In 1998, the HHS Office of Inspector General developed a model compliance program for hospitals. Regarding independence standards, organizations that provide accreditation or certification, or recognize accreditation bodies, have begun to impose certain criteria to demonstrate independence. For example, the Department of Education developed criteria for educational accrediting bodies that are designed to ensure that those organizations granting accreditation are not improperly influenced by related trade or membership associations. 
The mission statements of the Joint Commission and JCR share the same phrase, seeking “to continuously improve the safety and quality of care.” While each organization differs in the activities it engages in to achieve that mission, they maintain a close relationship through both their governance structure and operations. The Joint Commission has substantial control over the governance of JCR through the powers retained by the Joint Commission in JCR’s bylaws as well as through the Joint Commission’s representation on JCR’s Board of Directors. In addition, JCR manages all Joint Commission publications and educational activities, while the Joint Commission provides various support services and some management oversight to JCR. The Joint Commission has substantial control over the governance of its affiliate, JCR. In 2003, the Joint Commission undertook a major review of the structural, operational, and legal aspects of its relationship with JCR in an effort to address any real or perceived conflict-of-interest issues. This review led to the restructuring of JCR through revisions to JCR’s bylaws, which govern the internal affairs of the organization, and resulted in changes to the composition of JCR’s board and the appointment of board officers. In particular, after the restructuring the Joint Commission no longer retained a majority on the JCR board through board members who served on the boards of both organizations. However, through changes to JCR’s bylaws, the Joint Commission maintained control over JCR by reserving powers that would otherwise have been exercised by JCR. The 2003 restructuring of JCR allowed the Joint Commission to effectively maintain control over JCR by implementing a change in the “corporate membership” of JCR. Similar to for-profit entities that may have stockholders, nonprofit corporations may have corporate members who, in general, are responsible for major organizational decisions, such as electing the corporation’s board.
If a nonprofit corporation does not have any members, the corporation’s board of directors holds decision-making authority. With the restructuring of JCR, the Joint Commission became the “sole member” of JCR. The sole member has the ability to exercise substantial control over the affiliate through its “reserved powers”—powers that would otherwise be exercised by the affiliate board, if the sole member did not reserve them for itself. When the Joint Commission became the sole member of JCR, its reserved powers included those previously held and a number of additional powers, as shown in table 1. A practicing attorney with expertise in transactions involving nonprofit health care organizations and who has served as external counsel for the Joint Commission considers this structure necessary to enable the parent to protect itself from the possibility of the affiliate acting against the parent’s interests. However, an article published in a law journal cautions that this structure allows the parent to make decisions solely in its own interest without considering the impact on the affiliate. As part of the 2003 restructuring, the Joint Commission took steps to reduce the proportion of persons serving on the JCR board who also served as board members on the Joint Commission board. Prior to the 2003 restructuring, JCR’s board had 13 directors with a majority—7 directors—from the Joint Commission, including the President of the Joint Commission as an ex officio director with voting rights. The other 6 directors were from outside the Joint Commission, and included the CEO of JCR as an ex officio director with voting rights. After the 2003 restructuring, directors from the Joint Commission no longer comprised the majority of members on JCR’s board. 
There are 17 directors on JCR’s board, consisting of 7 Joint Commission directors—including the President of the Joint Commission as an ex officio director with voting rights—and 9 external directors who cannot be, either concurrently or within the prior 3 years, Joint Commission commissioners or employees. The President/CEO of JCR also serves on the JCR board as a voting ex officio director. (See fig. 2.) Directors we interviewed who serve on both the Joint Commission and JCR boards said that serving on the two boards has not been problematic because both organizations share the same mission. However, they also recognized the potential for overlapping board members to be faced with competing organizational interests if differences between the Joint Commission and JCR arise. These directors noted that, if competing organizational interests were to occur, the Joint Commission’s reserved powers would dictate the final decision. The restructuring also affected the appointment of JCR officers. Prior to the restructuring, the President and the Chief Financial Officer (CFO) of the Joint Commission also served in those same positions for JCR. The CEO of JCR was appointed by, and reported to, the President of the Joint Commission, and could only appoint other JCR officers after consulting with the Joint Commission’s President. Changes to JCR’s bylaws through the 2003 restructuring removed the requirement that the Joint Commission’s President and CFO serve in those positions for JCR. Rather, the Joint Commission appoints and has the power to remove the President/CEO of JCR. The President/CEO of JCR also now has the authority to appoint officers, such as the CFO, without consulting with the Joint Commission’s President. In addition, the Joint Commission, rather than JCR’s board, now appoints the vice chairman of JCR’s board.
One other noteworthy change as a result of the 2003 restructuring dealt with the role of two Joint Commission board committees in relation to JCR and the creation of a new JCR board committee. The Joint Commission created a Governance Committee, which has a number of responsibilities involving JCR, such as nominating JCR board directors and certain officers. This committee also has oversight responsibility for JCR governance issues and JCR conflict-of-interest policies, and reviews the bylaws and other documents of JCR. Further, the Joint Commission expanded the responsibilities of an existing committee—the Finance and Audit Committee—to include reviews of annual financial audits and other matters related to oversight of the firewall between the Joint Commission and JCR. Within the JCR board, a Firewall Oversight Committee was created as a result of the restructuring. This committee is charged with monitoring compliance with the firewall and related policies. The structure of the Joint Commission and JCR allows the two organizations to provide certain operational assistance to one another. The Joint Commission provides support and management services to JCR. Through a January 2001 service agreement, the Joint Commission provides JCR with financial, legal, marketing and public relations, human resources, accounting (bookkeeping and payroll), information technology, and other support services such as office management and mail. JCR pays for these services through a management fee. The methodology used to determine the appropriate allocation of expenses varies by department. For some departments, the allocation is based upon JCR’s percentage of total revenues, whereas in other departments, the estimate is made using the amount of time spent doing work on behalf of JCR. Departments also vary in whether they include overhead costs in the allocation. 
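The two allocation approaches described above can be sketched in code. This is a minimal illustrative sketch only; all dollar figures, hours, and the overhead treatment are hypothetical assumptions, not actual Joint Commission or JCR data, and the function names are invented for illustration.

```python
# Illustrative sketch of the two expense-allocation approaches described
# above. All figures are hypothetical, not actual Joint Commission/JCR data.

def allocate_by_revenue_share(dept_expenses, jcr_revenue, total_revenue):
    """Charge JCR a share of a department's costs based on JCR's
    percentage of total revenues."""
    share = jcr_revenue / total_revenue
    return dept_expenses * share

def allocate_by_time_spent(dept_expenses, hours_for_jcr, total_hours,
                           overhead=0.0):
    """Charge JCR based on the fraction of staff time spent doing work
    on its behalf; as noted above, departments vary in whether overhead
    costs are folded into the allocation."""
    fraction = hours_for_jcr / total_hours
    return (dept_expenses + overhead) * fraction

# Hypothetical example: a support department with $500,000 in expenses.
fee_a = allocate_by_revenue_share(500_000, 30_000_000, 120_000_000)
fee_b = allocate_by_time_spent(500_000, 400, 2_000, overhead=50_000)
print(round(fee_a), round(fee_b))  # 125000 110000
```

Either method yields a department-level charge; summing the charges across departments would produce the overall management fee JCR pays under the service agreement.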
Along with support services, the Joint Commission also provides management services to JCR through its General Counsel and Compliance Officer. For example, all JCR materials, including the publications it produces on behalf of the Joint Commission and materials produced for its own purposes, must be reviewed and approved by the Joint Commission’s General Counsel prior to issuance. The Compliance Officer, a position created by the Joint Commission in 2005, oversees compliance duties for both the Joint Commission and JCR. Among other duties, the Compliance Officer is responsible for implementing, providing training on, and monitoring compliance with the firewall policies. The Compliance Officer reports directly to the President of the Joint Commission, the President/CEO of JCR, the Joint Commission’s Governance Committee, and JCR’s Firewall Oversight Committee, and may also report to the full boards of both organizations. The Compliance Officer is aided by a Compliance Council, which was created in late 2005 and consists of members who represent multiple departments from both the Joint Commission and JCR. The Council works with the Compliance Officer to develop an annual work plan that focuses on areas of greatest risk, recommended training, auditing, and measures of the compliance program’s effectiveness. JCR also provides assistance to the Joint Commission, including publication and educational services. The Joint Commission transferred its publications and educational product lines to JCR in 2000 in order to combine support services within JCR and to allow for organizational separation between the Joint Commission’s evaluation and accreditation function and the consultation and educational services provided by JCR. JCR currently offers a variety of educational programs regarding Joint Commission accreditation, including seminars, e-learning opportunities, and audio, satellite, and video conferences.
These programs cover a range of topics and include information on the Joint Commission standards and changes to those standards. JCR also publishes its own books on health care issues and periodicals on patient safety and quality improvement. The operational services the Joint Commission and JCR provide to one another result in a flow of funds between the two organizations. In exchange for the license to publish Joint Commission materials, JCR pays the Joint Commission a royalty fee that ranges from 4.75 to 9.5 percent on gross sales. JCR also annually transmits assets to the Joint Commission in excess of the amount needed to operate JCR’s business. The amount of the transfer is based on a formula that considers JCR’s cash, investments, and average operating expense. The Joint Commission and JCR have taken steps, primarily since 2003, designed to strengthen the firewall guidance initially developed in 1987, shortly after the creation of JCR. They have also further developed guidance addressing the relationship between the two organizations. In addition, they have made an effort to educate staff at both organizations on these matters and have enhanced monitoring of compliance with the firewall and related policies. The Joint Commission and JCR firewall policies were initially developed as guidelines in 1987. Relatively few changes were made to these guidelines until 2003, when they were extensively modified. In addition, since 2003, the Joint Commission and JCR have developed other policies and guidance designed to further strengthen the firewall between the two organizations. Since 1987, shortly after the creation of JCR, both the Joint Commission and JCR have operated under a set of firewall guidelines designed to prevent conflicts of interest between the Joint Commission’s accreditation activities and JCR’s consultative services.
Between 1987 and 2003, the firewall guidelines were modified twice—once in 1992 and again in 1999— to reflect JCR’s name change and other issues related to JCR services. In 2003, the Joint Commission and JCR made extensive modifications to the guidelines, which were released to staff in the form of policies in 2004. (See app. III for a list of key policies, guidelines, and protocols.) These modifications stemmed from the Joint Commission’s review of its relationship with JCR following the passage of the Sarbanes-Oxley Act in 2002. According to senior staff from the Joint Commission and JCR, the revised firewall policies are not based on any specific model. However, they are a component of the two organizations’ joint compliance program, which was developed in part using the hospital compliance program guidelines issued by HHS’s Office of Inspector General. The stated purpose of both organizations’ firewall policies is “to eliminate any real or perceived conflict of interest” between the Joint Commission’s accreditation activities and JCR’s consulting services. Certain requirements in the firewall policies of the two organizations are very similar, such as a prohibition on accessing confidential facility-specific information from, or sharing any facility-specific information with, staff from the other organization. (See app. IV for more information on the contents of each organization’s firewall policies.) Joint Commission and JCR staff are also prohibited from suggesting that the use of JCR consulting services is necessary for, or will influence, Joint Commission accreditation decisions. In addition, staff and board members of both organizations are required to sign an annual statement signifying that they have read, and agree to comply with, the firewall policies. 
Of the 25 staff members we spoke with from the Joint Commission and JCR, all but 1 reported signing the required annual compliance statement and all but 4—2 from the Joint Commission and 2 from JCR—were aware that the firewall policy required them to sign this statement on an annual basis. While both organizations’ firewall policies share similar requirements, each has certain provisions that focus specifically on the services offered by its own organization. For example, the Joint Commission’s firewall policy stipulates that Joint Commission staff will not seek or solicit information on whether or not a facility has used JCR consulting services. The Joint Commission policy also provides guidance on how Joint Commission staff should respond to requests for consulting services. For example, if a facility asks Joint Commission surveyors for advice on these services, they are required to direct the facility to an appropriate senior staff member in the Joint Commission’s central office. That senior staff member can provide limited information on JCR, including its services and the reason for its creation. JCR’s firewall policy limits, among other things, the language JCR can use to promote its services. It also requires that JCR’s consulting services staff be housed in separate facilities from Joint Commission staff and use separate telephone and computer systems. Most of the state hospital associations and hospitals we interviewed that use JCR’s consulting services were familiar with the firewall between the Joint Commission and JCR. Of the five state hospital associations we interviewed that participate in JCR’s CSR program, four said they were provided with information on the relationship between the Joint Commission and JCR or had been told by JCR staff about the firewall between the two organizations.
Further, all five associations stated that JCR staff have never indicated that participation in the CSR program would affect the accreditation process, other than through the general improvements that are expected when using consulting services. Similarly, staff we interviewed at six hospitals that use JCR’s consulting services stated that there had been no indication from JCR consultants that the use of these services would influence their facility’s Joint Commission accreditation process. In addition to the recent changes to the firewall policies, the Joint Commission and JCR developed other policies and guidance beginning in 2003 that further address possible areas of risk to the firewall. JCR formalized protocols for its consultants in the field, which provide specific guidance related to their interaction with the Joint Commission staff. For example, if Joint Commission staff members arrive at a facility to conduct a survey when a JCR consultant is on site, the JCR consultant must leave the facility immediately. In 2003, JCR also developed a policy—referred to as the “scope limitations policy”—which is designed to clarify what services can be provided to Joint Commission-accredited facilities. The policy specifically prohibits JCR from providing certain consulting services to facilities after they have undergone a Joint Commission survey, including helping facilities challenge the Joint Commission’s accreditation decisions or findings, resolving Joint Commission deficiency findings, or preparing facilities that have been denied Joint Commission accreditation for future surveys. In 2004, the Joint Commission developed an additional policy reiterating the importance of the firewall for those Joint Commission employees— information technology and planning and financial affairs staff—who, through the service agreement between the two organizations, need, and are able, to access JCR financial or operational information. 
In addition to the firewall compliance statement all Joint Commission staff are required to sign, these particular staff members are required to sign a separate compliance statement associated with this specific policy. Also in 2004, JCR approved a formal firewall policy related to JCR marketing materials in an effort to ensure that JCR marketing materials contain no implication that purchasing its products or services will impact the Joint Commission accreditation process. Because JCR markets some products that it develops on the Joint Commission’s behalf—publications and educational services—as well as its consulting services, the marketing policy clarifies the language and logos that can be used on marketing materials for these different products. For example, while marketing materials for the Joint Commission accreditation manuals published by JCR can only carry the Joint Commission logo, JCR’s marketing materials promoting its consulting services carry only the JCR logo. In 2006, the Joint Commission and JCR published posters, which are displayed in Joint Commission and JCR meeting rooms, to govern meetings that involve staff from both organizations. These posters reiterate the organizations’ firewall policy requirements, in place since 1987, that facility-specific information should not be discussed at meetings that include staff from both organizations and such information cannot be included in materials prepared for those joint meetings. The posters also state that, if facility-specific information must be discussed for business purposes by staff from one organization, the staff from the other organization must leave the meeting. There are a number of occasions when Joint Commission and JCR staff interact during which these guidelines may be applicable. For example, both Joint Commission and JCR staff participate on internal interdepartmental teams designed to review Joint Commission programs and ensure they are valuable to health care organizations. 
Because these meetings include reviews of the programs’ publication and education services—services provided by JCR—JCR staff participate on these teams. Another area of interaction is through educational programs offered by JCR. These programs may include training by Joint Commission surveyors and central office staff and may take place at the Joint Commission’s headquarters. The Joint Commission and JCR have also developed a joint code of conduct and organization-specific conflict-of-interest policies that, while not focused exclusively on firewall issues, address aspects of the relationship between the two organizations and the independence of the accreditation process. In particular, the Joint Commission’s conflict-of- interest policy prohibits staff from providing accreditation-related consulting and prohibits survey staff from surveying facilities to which they provided consulting services during the previous 3 years. Similarly, JCR’s conflict-of-interest policy prohibits staff from providing external accreditation-related consulting services and prohibits JCR consultants from providing consulting services to any facility they may have surveyed in the past 3 years. The Joint Commission and JCR report providing ongoing training to ensure that staff understand the firewall and related policies. The organizations have also developed mechanisms, primarily since 2003, that allow staff to report possible firewall violations. Both organizations report monitoring compliance with these policies on an ongoing basis and, in 2005, underwent a joint external review of their implementation. The Joint Commission and JCR reported that both board and staff members receive training on the firewall and related policies—board members are trained when they join the board and staff are trained during new employee orientation. 
In addition, Joint Commission and JCR staff receive annual training on the firewall and related policies and procedures and are further reminded of these policies through periodic presentations at departmental staff meetings. As of June 2006, the organizations’ staff training did not include a testing component to measure how well staff understand the policies. However, most staff members and senior staff we spoke with at both organizations were aware of the firewall policies and were able to accurately describe their purpose. All but 1 of the 25 staff members we spoke with—13 with the Joint Commission and 12 with JCR—reported being familiar with these policies. In addition, all but 1 of the 24 staff members who were familiar with the firewall policies stated that the training and information they received made them sufficiently aware of the firewall and its appropriate implementation. None of the 25 staff members we spoke with were aware of cases in which staff from either organization had suggested that the use of JCR consulting services would influence Joint Commission accreditation. In addition to training sessions, staff members at the Joint Commission and JCR have access to information on the compliance program through an intranet Web site. This site includes copies of the organizations’ respective firewall policies and other compliance-related materials, as well as information on the role of the organizations’ joint Compliance Officer and Compliance Council. The firewall policies for both organizations require employees to report violations to their management, the Compliance Officer, or the Joint Commission General Counsel. In keeping with this requirement, senior Joint Commission and JCR management stated that they encourage employees to contact their supervisors or these other management officers if they are aware of possible violations or have questions on the firewall. 
Of the 24 staff members we interviewed at both organizations who were familiar with the firewall policies, 20 indicated that if they became aware of a violation, they would contact another staff member, such as their direct supervisor, division head, or the Compliance Officer. The Joint Commission and JCR have also developed a compliance hotline that allows staff to anonymously report any concerns related to compliance issues. While the firewall policies require employees to report violations to certain staff, this hotline offers another means of reporting possible firewall violations. From its inception in March 2005 through December 2005, the hotline received three calls, none of which involved a firewall violation. All 24 of the Joint Commission and JCR staff members we spoke with who were familiar with the firewall policies reported being aware of the compliance hotline. Of those staff members, 6 stated that they would contact the hotline if they became aware of a firewall violation. The Joint Commission and JCR staff report taking multiple steps to monitor implementation of, and compliance with, the firewall and related policies. The organizations have created the Compliance Officer position, the Compliance Council, and the JCR Firewall Oversight Committee, all of which have a role in monitoring compliance with the firewall and related policies. According to Joint Commission and JCR staff, the firewall policies have been monitored internally on an ongoing basis and are now subject to external reviews. The Joint Commission conducted an internal review in 2002, which was presented to the Joint Commission and JCR boards in 2003. The 2004 and 2005 firewall policies for both organizations called for an annual audit of the policy by the Joint Commission’s Office of Legal Affairs, but these audits were not conducted. 
According to senior Joint Commission staff, the Joint Commission determined that its legal department could not conduct a sufficient audit and that instead, the audits should be conducted by an external body with experience in this area. In 2005, the Joint Commission and JCR hired a consulting firm to conduct the first external review of the organizations’ firewall policies and related guidance. Following this review, in 2006, the requirement for an annual audit by the Office of Legal Affairs was deleted and was replaced with a requirement for an annual review, the results of which are presented to the appropriate committees of each board. According to Joint Commission staff, the Joint Commission and JCR anticipate continuing to contract for an external review of the firewall on an annual basis. The external review conducted in 2005 did not identify any major violations of either organization’s firewall policy—violations that could potentially breach the integrity of the accreditation process. In its report, the consulting firm stated that the implementation of the firewall policies “represented a reasonable effort to prevent any behavior that could result in a breach of the integrity of the accreditation process.” However, because no guidelines or standards exist for this kind of review, the consulting firm did not certify that the firewall and related policies protected the integrity of the accreditation process. The external review did identify some minor violations of the firewall—defined as violations that resulted from the staff’s failure to completely follow operational procedures required by the policies, but which are not considered to potentially breach the integrity of the accreditation process. For example, at the time of the 2005 review, JCR publications and education staff housed in the Joint Commission offices had access to a Joint Commission shared network folder on a computer drive.
While this shared folder could not be accessed by JCR consulting staff and Joint Commission surveyors used a separate network, the consulting firm recommended eliminating JCR staff access. The Joint Commission and JCR agreed with this and other recommendations made, and report taking steps to address the issues, including eliminating JCR’s access to Joint Commission computer systems. In addition to this external review, the Joint Commission reported that, throughout the year, the Compliance Officer monitors concerns and questions related to the firewall and related policies. Based on this analysis, the organizations review the policies to determine what, if any, changes need to be made to improve their clarity. In 2006, the Compliance Officer developed a list of commonly asked questions and answers, which was approved by the senior management of both organizations and released to staff. According to the Compliance Officer, when minor firewall violations are identified, each instance is reviewed to determine if it had any impact on the accreditation decision process and if it was due to a lack of understanding of the policies or was an intentional violation. She will then either provide clarification, counseling, or, if necessary, initiate disciplinary action, including possible dismissal, through the human resources department. As of July 2006, no Joint Commission or JCR staff had been terminated as a result of violating the firewall policies. However, a senior staff member at the Joint Commission reported that staff have been terminated for violating the Joint Commission’s conflict-of-interest policies. This staff member noted that two of the organization’s surveyors had been fired for providing consulting services, although these services were not provided to facilities they had previously surveyed. Accreditation is a key mechanism to ensure the safety and quality of hospital services provided to Medicare beneficiaries and other members of the public. 
The Joint Commission’s role in accrediting the majority of hospitals participating in Medicare makes the issue of ensuring the independence of the Joint Commission’s accreditation process vitally important. Any threat to the independence of the accreditation process could undermine its ability to ensure the safety and quality of services provided to Medicare beneficiaries and the general public. The Joint Commission and JCR have taken steps to protect the Joint Commission’s accreditation process from influence by JCR’s consulting services by developing mechanisms to protect against the improper sharing of facility-specific information. However, the majority of these mechanisms, including the firewall and firewall-related policies, the compliance hotline, and the annual external review of the firewall, have either been developed or significantly revised within the past few years— primarily since 2003. The next step is for management of both organizations to assure that these mechanisms are sufficient to protect the integrity of the accreditation process. In addition, even with appropriate policies and procedures in place, it will take ongoing monitoring and a concerted effort on the part of the leadership of both organizations to ensure that these policies and procedures are appropriately implemented by both their board and staff members. We provided a draft of this report to the Joint Commission and CMS for comment. In its response, the Joint Commission agreed with our concluding observations, specifically that ensuring the independence of the accreditation process is vitally important. It indicated that the report accurately reflects its relationship with JCR, and emphasized that its highest priority is to preserve the integrity of the Joint Commission’s accreditation process. (The Joint Commission’s written comments are reprinted in app. V.) CMS did not comment on our findings or concluding observations. 
Both the Joint Commission and CMS provided us with technical comments, which we incorporated as appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (312) 220-7600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. We examined the relationship between the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission) and Joint Commission Resources, Inc. (JCR) as it relates to the independence of the Joint Commission's hospital accreditation process from JCR's hospital consulting services. To provide information on the governance structure and operations of the two organizations, we reviewed multiple documents, including organizational charts reflecting the organizations' structure as of 2006, a service agreement signed in 2001 and still in effect as of 2006, Internal Revenue Service tax documents from calendar years 2001 through 2004, and agendas and minutes from board meetings of both organizations from 2003 through September 2006. We also interviewed the President of the Joint Commission and the President/Chief Executive Officer of JCR, as well as officers from the Joint Commission Board of Commissioners and the JCR Board of Directors. 
In addition, we interviewed senior staff at both organizations, including the organizations’ General Counsel, each organization’s Chief Financial Officer, and the Joint Commission’s Vice President for Human Resources. To describe the policies the Joint Commission and JCR have developed to prevent the improper sharing of facility-specific information, we reviewed Joint Commission and JCR documents, including current and past policies and guidance related, either directly or indirectly, to the firewall. We also examined training materials and reports from the compliance hotline contractor. We conducted interviews with senior staff from the Joint Commission and JCR. These senior staff included the shared Corporate Compliance and Privacy Officer, the Joint Commission’s Vice President of Accreditation Services, and the Executive Directors of JCR’s consulting services. In addition to interviews with senior staff, we selected a sample of 15 staff members at each organization to interview. These semistructured interviews were designed to collect information on Joint Commission and JCR staff members’ understanding of the firewall and related guidance, their training on this guidance, and their awareness of possible firewall violations. Our selection of staff members concentrated on those who were JCR consultants and Joint Commission staff conducting surveys or working in the areas of information technology, planning and financial affairs, and marketing. We considered these particular staff members more likely to be in a position to breach the firewall than other employees. We selected staff using random lists of JCR consultants, Joint Commission hospital surveyors, and employees from the information technology, planning and financial affairs, and marketing departments, as well as a random list of employees from all other areas at each organization. Selected staff were contacted by phone and e-mail. 
If, after three attempted phone calls and one e-mail, staff did not respond to our request for an interview, we moved to the next staff member identified in our random selection. We were able to conduct a total of 25 interviews with Joint Commission and JCR staff. We were unable to arrange interviews with 2 Joint Commission surveyors and 3 JCR consultants. We excluded any Joint Commission survey staff who were not hospital surveyors, JCR staff who provided only international services, senior staff at both organizations whom we had already interviewed, and Joint Commission staff acting as liaisons to our work. The information gathered from these interviews reflects the experience of these staff members and cannot be generalized to all Joint Commission or JCR staff. While the interviews provide information on staff awareness of the firewall policies and related guidance, as well as their awareness of possible firewall violations, they are not sufficient to determine whether any firewall violations have occurred. We also conducted interviews with officials from a random sample of 5 of the 14 state hospital associations that participated in JCR's Continuous Service Readiness (CSR) program as of May 2006, and with officials from 5 state hospital associations that do not participate in the CSR program. These interviews were designed to obtain information on the associations' understanding of the relationship between the Joint Commission and JCR and how they perceived that their participation in the CSR program might impact their members' Joint Commission accreditation. To select the sample for these interviews, we sorted the associations by census regions. We then selected a random sample of associations that participate in the CSR program and a random sample of those that do not from within each census region. We conducted semistructured interviews with each of the selected associations. 
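For illustration, the selection approach described above (sorting associations by census region, then drawing a random sample within each region) is a form of stratified random sampling. The sketch below shows the mechanics using hypothetical association names and regions; it is not the actual sampling frame used in this review.

```python
import random

# Hypothetical association list; names, regions, and CSR status are placeholders.
associations = [
    {"name": "Assoc A", "region": "Northeast", "csr": True},
    {"name": "Assoc B", "region": "Northeast", "csr": False},
    {"name": "Assoc C", "region": "South", "csr": True},
    {"name": "Assoc D", "region": "South", "csr": False},
    {"name": "Assoc E", "region": "Midwest", "csr": True},
    {"name": "Assoc F", "region": "West", "csr": False},
]

def stratified_sample(items, strata_key, n_per_stratum, seed=0):
    """Group items by stratum, then draw a simple random sample within each."""
    rng = random.Random(seed)
    strata = {}
    for item in items:
        strata.setdefault(item[strata_key], []).append(item)
    sample = []
    for group in strata.values():
        k = min(n_per_stratum, len(group))
        sample.extend(rng.sample(group, k))
    return sample

# Sample one CSR-participating association per census region that has one.
csr_sample = stratified_sample(
    [a for a in associations if a["csr"]], "region", n_per_stratum=1)
```

Sampling within each region ensures that every region with eligible associations is represented, which a single simple random draw over the whole list would not guarantee.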
One state hospital association did not respond to our request for an interview. In this case, we replaced that association with the next association in the same census region identified in our random selection. We also conducted interviews with officials from 6 hospitals that use JCR’s consulting services to learn more about their understanding of the relationship between JCR and the Joint Commission. To conduct these interviews, we determined the number of hospitals that had contracted with JCR for these services in calendar year 2005. JCR compiled a spreadsheet that contained e-mail addresses for JCR’s 2005 domestic hospital clients. We identified a random sample of JCR’s hospital clients and JCR sent these hospitals an e-mail asking them to contact us if they were willing to be interviewed. We selected our sample of approximately 10 percent of that population—80 facilities—using a randomly generated number list. This selection was done at the JCR offices and the e-mails were sent to hospital facilities under our supervision. Facilities were given 2 weeks to contact us to schedule interviews if they were interested. The information gathered from these interviews with JCR hospital clients and the interviews with state hospital associations reflects the experience of these particular facilities and state hospital associations and cannot be generalized to all JCR consulting clients. As part of our work, we also interviewed staff at the Department of Health and Human Services’ Centers for Medicare & Medicaid Services to obtain information on their oversight of the Joint Commission and other accreditation organizations. In addition, we interviewed officials from multiple organizations and reviewed documents to obtain background information on possible criteria or best practices related to the governance of nonprofit organizations, conflicts of interest, compliance programs, and independence standards. 
Those we interviewed included officials at Independent Sector—a coalition of charities, foundations, and corporate giving programs which focuses on strengthening these particular types of organizations—and the Hauser Center for Nonprofit Organizations—a research center at Harvard University focusing on the nonprofit sector. We also interviewed officials from federal agencies and organizations to obtain information on how they separate accreditation or certification programs from consulting services. Those we interviewed included representatives from the Department of Education, the Council on Higher Education Accreditation, and the National Organization for Competency Assurance. Because the Joint Commission's status related to Medicare applies only to hospitals, our review was limited to information related to its accreditation of hospitals and services provided by JCR to hospitals. We did not conduct a review of the Joint Commission's accreditation decision process. We also did not review information on other activities conducted by the Joint Commission or JCR that were not related to the relationship between the Joint Commission's hospital accreditation process and JCR's hospital consulting services. Further, we excluded Joint Commission International, a division of JCR that provides consulting and accreditation services to foreign health care facilities, from the scope of our work because these facilities are not eligible to participate in the Medicare program. We conducted our work from October 2005 to December 2006 in accordance with generally accepted government auditing standards.

Appendix II: Timeline of Key Developments in the Organizations' Relationship

1999: The Joint Commission operating guideline relating to the activities of Joint Commission Resources, Inc. is revised to reflect JCR's name change.

2000: The Joint Commission transfers its education, publication, and continuous survey readiness departments to JCR.

2001: The Joint Commission and JCR sign a service agreement through which the Joint Commission provides a number of support services to JCR for a management fee. The position of Corporate Compliance and Privacy Officer is created.

Following the Sarbanes-Oxley Act of 2002, the Joint Commission conducts a study of the potential implications of the act for the governance of the Joint Commission and JCR. As a result, the JCR bylaws are amended to: Expand the board from 13 directors to 17, the majority of whom do not serve on the Joint Commission board (i.e., "external directors"). Allow the Joint Commission president to serve only as a voting director on JCR's board and not as the president of JCR. Form the Firewall Oversight Committee of the JCR board, composed only of directors who do not serve on the Joint Commission board. Develop fiduciary requirements related to confidentiality and conflicts of interest for Joint Commission commissioners and JCR directors. The Joint Commission bylaws are also amended to: Create the Joint Commission's Finance and Audit Committee, and expand its responsibilities to include receiving reports from the JCR Firewall Oversight Committee. Create a Governance Committee. Staff sign the first of the annual compliance statements. The Joint Commission develops the firewall policy for planning and financial affairs and information technology staff and a compliance statement for this staff. JCR develops the marketing firewall policy.

2004: Operating guidelines related to the interaction of the Joint Commission and JCR are formalized as firewall policies.

2005: Implementation of the combined Joint Commission and JCR compliance hotline. A consulting firm conducts an external review of the Joint Commission and JCR firewall and firewall-related policies.

2006: The Joint Commission and JCR develop combined meeting guidelines. Additional policies and procedures are developed, including: the Joint Commission code of conduct, which applies to JCR staff; JCR's scope limitation policy; protocols for JCR staff; initial marketing guidelines; and policies and procedures on fiduciary confidentiality agreements. 
Firewall policy for planning and financial affairs and information technology staff: Reinforces the Joint Commission firewall policies and applies specifically to planning and financial affairs and information technology staff who provide support services to JCR. Prohibits involvement in activities that might constitute or be perceived to constitute a conflict of interest with the overall mission of the Joint Commission. Requires staff to abide by the Joint Commission's firewall policy and prohibits the disclosure of confidential or proprietary information.

Joint Commission firewall policy: Prohibits Joint Commission staff from providing accreditation-related consulting services. Prohibits Joint Commission staff from surveying facilities to which they provided consulting or related services during the previous 3 years. Designed to eliminate any real or perceived conflict of interest between the Joint Commission's accreditation activities and JCR's consulting services.

JCR firewall policy: Provides specific direction to JCR staff on their interaction with Joint Commission staff and services; this policy applies to all JCR staff. Prohibits involvement in activities that might constitute or be perceived to constitute a conflict of interest with the mission of JCR and the Joint Commission. Requires staff to abide by JCR's firewall policy and prohibits the disclosure of confidential or proprietary information. Prohibits JCR staff, in most cases, from providing outside consulting services. Prohibits JCR consultants from providing consulting services to facilities they have surveyed in the past 3 years.

Marketing guidelines: Provides requirements for marketing strategies to protect the integrity of the Joint Commission accreditation process and ensure that materials contain no implication that purchasing products or services from JCR will impact accreditation decisions.

Protocols for JCR staff: Provides specific direction to JCR consultants in the field, including their interaction with Joint Commission staff.

Scope limitation policy: Delineates certain consulting services that cannot be provided to Joint Commission-accredited organizations, including assistance in preparing challenges to accreditation decisions, resolving Joint Commission deficiency findings, preparing root-cause analyses for sentinel events, and preparing organizations that have been denied Joint Commission accreditation for future surveys.

Combined Joint Commission and JCR policies and guidelines: Guides conduct in meetings that include both Joint Commission and JCR staff, reiterating that organization-specific or nonpublic accreditation or survey process information should not be discussed and, if business needs dictate that organization-specific information be shared, stating that appropriate staff must excuse themselves. Provides guidance on standards for staff conduct and the confidentiality of information, including mechanisms in place to help staff report violations of the code of conduct.

Selected provisions of these policies include the following:

Staff may not seek or solicit information on whether or not a facility has used JCR and are not provided this information by the Joint Commission or JCR representatives. Staff may not suggest that the use of JCR consulting services is necessary to obtain or influence Joint Commission accreditation. Staff may not access confidential facility-specific information from, or share facility-specific information with, the other organization. Facilities that use JCR's consulting services are informed that the Joint Commission is not told that the facility used JCR's services, and a disclaimer to this effect is included in JCR contracts. Participants in JCR's Continuous Service Readiness (CSR) program are informed that Joint Commission survey teams are told that CSR participation is not considered in the accreditation process.

Joint Commission surveyors are instructed that survey report forms may not include information on whether or not the surveyed organization has used JCR's services. JCR staff may not communicate with surveyors about specific facility accreditation decisions, may not in any way participate in the accreditation process as a representative of the facility, and may not discuss the choice of surveyors for particular facilities with the Joint Commission. JCR staff may not attend Joint Commission surveyor training and may not have access to surveyor educational tools not generally available to outside parties. JCR staff may not receive information about the application of the Joint Commission standards or accreditation procedures that is not already available, or will be made available promptly, to outside parties. Access to the Joint Commission's Historical File Room, a secured space at the Joint Commission offices in Oakbrook Terrace, Illinois, is monitored by Joint Commission Historical File Room staff. All JCR promotional materials related to consulting services are reviewed by the Joint Commission Office of Legal Affairs. Joint Commission staff who handle JCR financial and operational information as part of their role in providing services to JCR may not disclose JCR organization-specific information to other Joint Commission staff.

JCR maintains separate offices, telephone numbers, and computer systems from the Joint Commission. JCR publishes the Joint Commission's accreditation materials and supplies its educational services; these services are promoted in Joint Commission and JCR materials. Any reference in Joint Commission materials to JCR's consulting services is generally limited to acknowledging JCR's existence, its services, and the reason for its creation. JCR promotional materials are limited to identifying JCR as a nonprofit affiliate of the Joint Commission and must note the separateness between accreditation decisions and JCR's services. Inquiries about consulting services are referred to the Joint Commission's central office; staff at the central office will refer to the availability of JCR's services and will also emphasize the separateness of the Joint Commission's accreditation process from JCR's consulting services.

All staff must sign a compliance statement on an annual basis, signifying that they have read, and agree to comply with, both the firewall policy and the conflict-of-interest policy that apply to their specific organization. The firewall policy is sent annually to all staff and is referenced in each organization's conflict-of-interest policies, which staff are also required to sign on an annual basis. The firewall policy is covered during new employee orientation and training. Staff must report any violation of their organization's firewall policy to the Compliance Officer, the Joint Commission General Counsel, or their organization's management. An annual review is conducted to ensure appropriate separation between the Joint Commission's accreditation activities and JCR's consulting services, and the results are presented to the relevant board committees.

In addition to the person named above, Geraldine Redican-Bigott, Assistant Director; Emily Gamble Gardiner, Thomas Han, Kevin Milne, Daniel Ries, Janet Rosenblad, and Jessica Cobert Smith made key contributions to this report.

Hospitals must meet certain conditions of participation established by the Centers for Medicare & Medicaid Services (CMS) in order to receive Medicare payments. In 2003, most hospitals--over 80 percent--demonstrated compliance with most of these conditions through accreditation from the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission). Established in 1986, Joint Commission Resources, Inc. (JCR), a nonprofit affiliate of the Joint Commission, provides consultative technical assistance services to hospitals. 
Both organizations acknowledge the need to ensure that JCR's services do not--and are not perceived to--affect the independence of the Joint Commission's accreditation process. GAO was asked to provide information on the relationship between the Joint Commission and JCR. This report describes (1) their organizational relationship, and (2) the significant steps they have taken to prevent the improper sharing of information, obtained through their accreditation and consulting activities, respectively, since JCR was established. GAO reviewed pertinent documents, including conflict-of-interest policies and information about the organizations' financial relationship, and interviewed staff and board members from both organizations, JCR clients, and CMS officials. The Joint Commission and JCR have a close relationship as demonstrated through their governance structure and operations. The Joint Commission has substantial control over JCR and the two organizations provide operational services to one another. For example, JCR manages all Joint Commission publications, while the Joint Commission provides support services to JCR. Despite the Joint Commission's control over JCR, the two organizations have taken steps designed to protect facility-specific information. In 1987, the organizations created a firewall--policies designed to establish a barrier between the organizations to prevent improper sharing of this information. For example, the firewall is intended to prevent JCR from sharing the names of hospital clients with the Joint Commission. Beginning in 2003, both organizations began taking steps intended to strengthen this firewall, such as enhancing monitoring of compliance. Ensuring the independence of the Joint Commission's accreditation process is vitally important. To prevent the improper sharing of facility-specific information, it would be prudent for the Joint Commission and JCR to continue to assess the firewall and other related mechanisms. 
The Joint Commission agreed with GAO's concluding observations. CMS did not comment on GAO's findings or concluding observations. Both provided technical comments, which GAO incorporated as appropriate.
Congress established FHA in 1934 under the National Housing Act (Pub. L. No. 73-479) to broaden homeownership, protect and shore up lending institutions, and stimulate employment in the building industry. FHA's single-family programs insure private lenders against losses from borrower defaults on mortgages that meet FHA criteria and that are made primarily to low-income, minority, and first-time homebuyers of properties with one to four housing units. In 2004, some 77.5 percent of FHA loans went to first-time homebuyers, and 35 percent of these loans went to minorities. FHA insures most of its single-family mortgages under its Mutual Mortgage Insurance Fund (MMI Fund), which is supported by borrowers' insurance premiums. FHA insures a variety of mortgages that cover initial home purchases, construction and rehabilitation, and refinancing. Its primary program is Section 203(b), the agency's standard product for single-family dwellings. As the mortgage industry has developed products such as adjustable-rate mortgages (ARMs), FHA has followed suit and now insures ARMs on single-family properties. FHA insures a variety of refinancing products, including mortgages designed to promote energy efficiency. Finally, it insures specialty mortgages, such as the Hawaiian Home Lands mortgage, which enables eligible native Hawaiians to obtain insurance for a mortgage on a homestead lease granted by the Department of Hawaiian Home Lands. Despite the variety of products it insures, the number of loans FHA insures each year has fallen dramatically since 2000, largely because lending for conventional mortgage products (i.e., mortgages with no federal insurance or guarantee) has grown much more rapidly since the late 1980s than lending for mortgages insured by government entities such as FHA and the Department of Veterans Affairs. 
As conventional markets have grown, so has the private sector’s use of automated underwriting systems, which has streamlined the application process and allowed lenders to more quickly assess the risk of loans. FHA began approving specific automated underwriting systems for lenders in 1996 in an effort to streamline its manual underwriting process. When it began delegating underwriting tasks to approved lenders in the 1980s, lenders manually underwrote loans before submitting the loan applications and required documentation to an FHA field office for approval. Once automated underwriting systems for FHA lending came into use, “direct endorsement lenders” (i.e., lenders certified by HUD to underwrite loans and determine their eligibility for FHA mortgage insurance without obtaining prior review) could streamline the loan application process by bypassing some documentation requirements. According to FHA officials, automated underwriting has allowed FHA to reduce the amount of time needed to approve insurance for a loan from several days to 1 day. The key to automated underwriting is a mortgage scorecard algorithm that attempts to objectively measure the borrower’s risk of default quickly and efficiently by examining the data that has been entered into the system. To underwrite a loan, lenders first enter into the electronic system data such as application information and credit scores. A scorecard compares these data with specific underwriting criteria (e.g., cash reserves and credit requirements) using a mathematical formula. Because the scorecard electronically analyzes each variable, it can quickly predict the likelihood of default. According to FHA officials, this process not only reduces underwriting time but also decreases the amount of documentation needed to assess the borrower’s credit risk. 
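For illustration, a mortgage scorecard of this kind can be reduced to a weighted sum of application variables passed through a logistic function that yields a default probability. The variables and weights below are invented placeholders for the sketch; they do not reflect TOTAL's actual variables or coefficients.

```python
import math

# Illustrative weights only; a real scorecard's coefficients are estimated
# from historical loan performance data.
WEIGHTS = {
    "intercept": -2.0,
    "ltv": 0.03,               # per percentage point of loan-to-value ratio
    "fico": -0.01,             # per point of credit score (higher score, lower risk)
    "reserves_months": -0.15,  # per month of cash reserves
}

def default_probability(app):
    """Logit-style score: weighted sum of variables through a logistic link."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["ltv"] * app["ltv"]
         + WEIGHTS["fico"] * app["fico"]
         + WEIGHTS["reserves_months"] * app["reserves_months"])
    return 1.0 / (1.0 + math.exp(-z))

p = default_probability({"ltv": 95, "fico": 620, "reserves_months": 2})
```

Because each variable contributes a fixed, pre-estimated weight, the score can be computed instantly from the data the lender enters, which is what allows the underwriting decision to shrink from several days to one.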
Private mortgage insurers, such as United Guaranty and Mortgage Guaranty Insurance Corporation (MGIC), were among the first to develop mortgage scorecards in the early 1990s. Beginning in the mid-1990s, Freddie Mac and Fannie Mae began to create their own automated underwriting systems and scorecards to evaluate conventional loans for purchase. More specifically, Freddie Mac implemented its Loan Prospector automated underwriting and scorecard tool by 1996, and Fannie Mae implemented a similar tool, Desktop Underwriter, in 1997. Experience with these scorecards prompted Freddie Mac in 1998 and Fannie Mae in 1999 to develop versions of these scorecards for FHA that lenders first used to automatically underwrite FHA-insured loans. Both entities used performance data on FHA-insured loans as part of the loan data used to create the FHA versions of their scorecards. However, while FHA cooperated in the development of Freddie Mac’s and Fannie Mae’s scorecards for FHA-insured loans, they were nonetheless proprietary to those entities, and some important details (e.g., the weighting of the variables) were withheld from FHA. In addition, the two scorecards sometimes yielded contradictory results for the same borrower. As a result, FHA decided to replace the Loan Prospector and Desktop Underwriter scorecards and develop its own scorecard that would provide uniform outcomes. Between 1998 and 2004, FHA contracted with Unicon Research Corporation to develop TOTAL. Direct endorsement lenders now use TOTAL in conjunction with automated underwriting systems that meet FHA standards—Loan Prospector, Desktop Underwriter, and Countrywide Loan Underwriting Expert System (CLUES)—to determine the likelihood of default. Although TOTAL can determine the credit risk of a borrower, it does not reject a loan; FHA requires lenders to manually underwrite loans that are not accepted by TOTAL to determine if the loan should be accepted or rejected. 
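The division of labor described above, in which TOTAL recommends but never rejects, can be sketched as a simple decision rule. The cut point value and the downgrade checks shown are hypothetical placeholders, not FHA's actual criteria.

```python
CUT_POINT = 0.05  # hypothetical score threshold, not FHA's actual cut point

def underwrite(application, default_score):
    """Return TOTAL-style guidance: 'accept' or 'refer' (never 'reject')."""
    # Scorecard recommendation: high modeled default risk goes to manual review.
    if default_score >= CUT_POINT:
        return "refer"
    # Hypothetical downgrade checks: certain credit events force manual
    # underwriting even when the score alone would allow an accept.
    if application.get("prior_foreclosure") or application.get("open_bankruptcy"):
        return "refer"
    return "accept"
```

A "refer" outcome hands the file to the direct endorsement lender for manual underwriting, which then makes the final accept-or-reject decision.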
FHA’s automated mortgage underwriting process starts at the time that the borrower meets with and submits information to the direct endorsement lender for loan prequalification (see fig. 1). First, the direct endorsement lender enters the application variables, such as the applicant’s loan-to-value ratio (LTV) and debt, into the automated underwriting system. Second, the automated underwriting system electronically “pulls” the additional credit data required to score the loan, which includes any bankruptcy and foreclosure information and credit scores. Third, the automated underwriting system transmits the data to TOTAL, which evaluates the information and recommends whether the loan should be “referred” or “accepted.” A “refer” recommendation requires that the direct endorsement lender manually underwrite the loan. An “accept” recommendation means that the loan does not have to be manually underwritten to determine the borrower’s creditworthiness and, accordingly, that less documentation will be required to process it. For example, borrowers whose loans are accepted do not have to verify their employment history if they have already met certain conditions, such as providing confirmation of current employment. An accepted application must go through an additional series of credit checks, or overrides, to ensure that it meets all of FHA’s underwriting standards. If the loan does not pass the series of additional credit checks, the application can still be downgraded to a “refer” for manual underwriting. Once the loan is processed through the credit checks, the automated underwriting system then sends the decision in a feedback document that the lender uses to continue processing the loan application. FHA’s approach to developing TOTAL was generally reasonable, but some of the decisions made during the development process could ultimately limit the scorecard’s effectiveness. 
Like the private sector, FHA and its contractor followed an accepted process, using a variety of variables that took into account such items as credit history and economic conditions. As a result, TOTAL is similar to private sector scorecards. But TOTAL’s effectiveness could be limited by some of the choices that were made during the development process, including the fact that (1) the data FHA and its contractor used were 12 years old by the time TOTAL was implemented, (2) FHA has not developed policies and procedures for updating TOTAL, and (3) the benchmark analysis for determining TOTAL’s predictive capability may have been inadequate. Scorecards are typically developed and maintained using data with specific characteristics and an accepted modeling process. The data—such as variables that reflect credit histories and loan information—are typically several years old and are drawn from samples of borrowers whose characteristics resemble those of the borrowers whom the scorecard will assess. The process used in the private sector to develop the scorecard itself typically has four components: identifying the variables that best predict the likelihood of default, choosing a scorecard model by conducting various tests, validating the scorecard to ensure that it is stable (i.e., consistently produces reasonable results), and determining the appropriate cut point for separating loans that will be accepted from those that will be referred for manual underwriting. Once the scorecard is complete, many private sector organizations plan for and conduct ongoing analyses and generate reports to monitor and update their scorecards. Analyses that help in updating scorecards include measuring changes in the population of borrowers, the quality of the portfolio, and the scorecard’s effectiveness. Organizations may conduct these analyses on a monthly or quarterly basis, and they may also supplement these analyses with more in-depth reviews. 
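One widely used metric for measuring changes in the population of borrowers is the population stability index (PSI), which compares the share of applicants falling in each score band at development time with the current share. The band shares below are invented for the sketch, and PSI is offered as a common industry technique, not necessarily the specific analysis any particular organization performs.

```python
import math

def population_stability_index(expected, actual):
    """PSI over score bands: sum of (actual - expected) * ln(actual / expected)."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

# Share of applicants in each score band at development time vs. today (invented).
dev_shares = [0.10, 0.20, 0.40, 0.20, 0.10]
cur_shares = [0.08, 0.18, 0.38, 0.24, 0.12]

psi = population_stability_index(dev_shares, cur_shares)
# A common rule of thumb: below 0.10 is stable, 0.10 to 0.25 shows some shift,
# and above 0.25 signals a population change that may warrant redeveloping
# the scorecard on fresher data.
```

Tracking a metric like this on a monthly or quarterly basis gives an organization an objective trigger for deciding when a scorecard built on older data no longer matches the borrowers it is scoring.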
In developing TOTAL, FHA’s contractor Unicon followed the four-step process. First, it identified variables using data primarily for loans that FHA had endorsed (i.e., approved for mortgage insurance) in 1992. In 1998, when Unicon began developing TOTAL, FHA chose to use 1992 loan data, which would reflect the characteristics of FHA borrowers and be “seasoned,” or old enough, to provide a sufficient number of defaults that could be attributed to a borrower’s poor creditworthiness. The 1992 sample of endorsed loans included 9,867 loans that did not result in a claim default and 4,818 that did. Unicon tested the variables’ ability to predict claim default. Unicon determined that a number of variables, such as credit, LTV ratio, and cash reserves, should be included in TOTAL. To determine the best type of credit variable to include in TOTAL for FHA’s purposes, Unicon and its subcontractor Fair Isaac Corporation used 1994 and 1996 credit data to test various credit models and confirm the results. These models included those that measured borrowers’ credit using only credit scores and more complex models that were based on individual credit characteristics rather than on a credit score. Based on this analysis, FHA decided that the standard FICO credit score was a reasonable credit variable to include in the scorecard. Second, Unicon tested various versions of statistical models suitable for developing scorecards. These were variations on two types of models, “logit” and “hazard.” Both models predict the probability of default based on predictive variables that are weighted according to their statistical importance, although the hazard model can predict default over multiple time periods. FHA officials stated that, based on Unicon’s analyses, both models’ predictive capabilities were about equal. FHA chose the logit model, stating that it was easier to implement and that its estimates were easier to interpret. 
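For illustration, a logit model of the kind FHA selected expresses the probability of default as a logistic function of weighted predictor variables, with the weights estimated from historical loans. The sketch below fits a one-variable model to fabricated data by gradient descent; actual scorecard estimation uses many variables and standard maximum likelihood routines.

```python
import math
import random

def fit_logit(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-variable logistic regression by gradient descent on log loss."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n        # gradient of log loss w.r.t. intercept
            g1 += (p - y) * x / n    # gradient of log loss w.r.t. slope
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Fabricated data: a higher risk factor x (e.g., a scaled LTV ratio) makes
# default (y = 1) more likely.
random.seed(1)
xs = [random.uniform(0, 1) for _ in range(200)]
ys = [1 if x + random.gauss(0, 0.2) > 0.6 else 0 for x in xs]
b0, b1 = fit_logit(xs, ys)
```

The fitted weights are what a scorecard encodes: once estimated, scoring a new applicant is a single weighted sum and logistic transform, which is why the logit form is easy to implement and interpret.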
Third, Unicon tested the stability of the model by testing it against a holdout sample of 1992 loans that had not been included in the original development data. In addition, Unicon tested the model’s stability over time by checking whether the determinants of defaults occurring within 2 years were similar for the 1992 and 1994 application years. Both stability tests, according to documents provided by FHA, suggested that the model did not materially change over the 2-year period. In addition, FHA performed a benchmark analysis by comparing the performance of TOTAL with previously used scorecards—the FHA versions of Freddie Mac’s Loan Prospector and Fannie Mae’s Desktop Underwriter—to determine the model’s precision. According to documents provided by FHA, TOTAL slightly outperformed the other scorecards. Finally, FHA worked with Unicon, Freddie Mac, and Fannie Mae to determine a cut point for TOTAL that would enable the agency to quickly accept the majority of loan applications so that lenders could focus their manual underwriting on the marginal, potentially riskier borrowers. This cut point was based partly on a 1996 analysis that Freddie Mac, in consultation with FHA, conducted on the version of the Loan Prospector scorecard developed for FHA. According to HUD officials, it was also consistent with cut points used before TOTAL was implemented. The current cut point allows the agency to accept 65 to 70 percent of loan applications automatically and refer the remainder. In a 2001 report, a consulting firm—KPMG LLP—that reviewed documents relating to the development of TOTAL concluded that FHA adequately supported most of its development decisions. The report focused on the data used, the type of model selected, the determination of cut points, and FHA’s benchmark analysis. 
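The mechanics of setting a cut point can be sketched as follows. This is a simplified illustration, not FHA's actual procedure: it simply finds the score threshold that automatically accepts a target share of applications (here in the 65 to 70 percent range cited above), with the remainder referred for manual underwriting. Higher scores are assumed to indicate lower predicted default risk.

```python
def choose_cut_point(scores, target_accept_rate=0.675):
    """Return the score threshold that auto-accepts about the target
    share of applications; applications scoring below it are referred
    for manual underwriting. Assumes higher score = lower default risk."""
    ranked = sorted(scores, reverse=True)
    n_accept = max(1, round(len(ranked) * target_accept_rate))
    return ranked[n_accept - 1]

def decide(score, cut_point):
    """Mimic an accept/refer underwriting decision."""
    return "accept" if score >= cut_point else "refer"
```

In practice the threshold would be chosen not only for the acceptance rate it produces but also for the default risk the agency is willing to accept automatically, which is why the report stresses analyzing loans accepted by the scorecard itself.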
Although FHA and its contractor used a reasonable and generally accepted practice for developing TOTAL, some of the choices made during that process could affect FHA’s ability to maximize its use of the scorecard. By the time TOTAL was implemented in 2004, the loans in the development sample were 12 years old. Best practices call for scorecards to be based on data that are representative of the current mortgage market—specifically, relevant data that are no more than several years old. FHA officials told us that the relationship between TOTAL’s predictive variables and FHA borrowers’ tendency to default had not changed significantly since 1992 and that they believed the data were still useful. However, since 1992, significant changes have occurred in the mortgage industry that have affected the characteristics of those applying for FHA-insured loans. These changes include generally lower credit scores, increased use of down payment assistance, and new mortgage products that have allowed borrowers who would previously have needed an FHA-insured loan to seek conventional mortgages. As a result, the relationships between borrower and loan characteristics and the likelihood of default may also have changed. For example, the statistical relationship between the LTV ratio and the likelihood of default may be different for borrowers who receive down payment assistance than for those who do not. As noted earlier, when TOTAL was implemented in 2004, FHA officials believed that the 1992 loan sample used to develop the scorecard still provided an adequate basis for assessing new loan applications. The agency’s subsequent analyses of TOTAL using samples of FHA-insured loans throughout the 1990s indicate that, for the years tested, the scorecard has performed consistently in separating loans that resulted in insurance claims from those that did not. As a result, HUD did not update TOTAL either before it was deployed or subsequently. 
However, best practices implemented by private entities and reflected in guidance from a bank regulator call for having formal policies to ensure that scorecards are routinely updated. Frequent updating of scorecards ensures that they reflect changes in consumer behaviors and thus continue to accurately predict the likelihood of default. In September 2004, FHA awarded another contract to Unicon to, among other things, update TOTAL by 2007. In addition, HUD indicated that, through its contractors, it has the capacity to update TOTAL should the need arise and has contracts for acquiring credit data to support an update of the scorecard. However, FHA has not developed policies and procedures for updating TOTAL on a regular basis. Another potential shortcoming that could affect TOTAL’s effectiveness is the fact that FHA used only endorsed loans to develop TOTAL. Because the data did not cover all of the possible outcomes of applying for a loan (rejection, for example), the results could be biased. Therefore, TOTAL will likely assess a population of applications with generally poorer overall credit quality than the original population used to develop the scorecard and thus may not be as effective in evaluating applicants with poorer credit. In addition, because the sample of loans that was used to develop TOTAL differed from the total population of loan applications, the selection and weighting of the variables in the scorecard could be less than optimal. For the riskier applications, the predictive variables and associated weightings might differ from those TOTAL currently uses. FHA officials stated that, at the time TOTAL was being developed, they did not have another choice in the data used. However, updating TOTAL using information on marginal loans that were referred by the scorecard, but ultimately endorsed for FHA insurance, could help mitigate the bias problem. 
Similarly, using cut points that were based only on endorsed loans at the time TOTAL was developed—in this case, loans that were originated using the Loan Prospector scorecard—could mean that a higher percentage of loans that are likely to default would be accepted rather than referred for manual underwriting. That is, a sample of endorsed loans does not include loans that have been rejected and thus does not represent the total population of loans. As previously noted, the current cut point allows FHA to accept 65 to 70 percent of the total population of loan applications, and that percentage could include riskier loans—loans that the sample did not represent because they were referred by Loan Prospector and ultimately rejected. Furthermore, because FHA’s selection of cut points was not based on analysis of loans accepted by TOTAL, but rather on loans accepted by Loan Prospector, the cut points may prove to be less useful for FHA as it attempts to manage and understand its risk. KPMG LLP—the consulting firm that reviewed TOTAL’s development in 2001—raised similar concerns. We also found that, similar to the sample of loans used to develop TOTAL, the sample FHA used to perform the 1996 benchmark analysis of TOTAL consisted only of endorsed loans, rather than a broader sample that included the riskiest loans. Partly because other loan data were not readily available, Unicon benchmarked TOTAL against a sample of loans originated using the Loan Prospector scorecard. This sample consisted primarily of loans that had been accepted by the scorecard and endorsed for FHA insurance. However, because all models perform slightly differently (i.e., each scorecard will mistakenly accept certain high-risk, or “bad,” loans), using a prescreened sample of loans could limit the accuracy of the benchmark analysis. The potential effect was to overstate TOTAL’s performance relative to Loan Prospector. 
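One standard way to benchmark how well two scorecards separate "good" loans from "bad" ones is the Kolmogorov-Smirnov (KS) statistic, the maximum gap between the cumulative score distributions of the two groups. The sketch below is a generic illustration of that idea, not the actual metric Unicon or KPMG used (the underlying documents do not specify one); the point it illustrates is that any such comparison is only as informative as the sample it is run on, so a prescreened, endorsed-only sample can flatter the scorecards being compared.

```python
def ks_separation(scores_nonclaim, scores_claim):
    """Kolmogorov-Smirnov statistic: the largest gap between the
    cumulative score distributions of loans that did not result in a
    claim and loans that did. Higher values indicate a scorecard that
    separates the two groups more cleanly."""
    thresholds = sorted(set(scores_nonclaim) | set(scores_claim))
    best = 0.0
    for t in thresholds:
        cdf_nonclaim = sum(s <= t for s in scores_nonclaim) / len(scores_nonclaim)
        cdf_claim = sum(s <= t for s in scores_claim) / len(scores_claim)
        best = max(best, abs(cdf_nonclaim - cdf_claim))
    return best
```

A benchmark run on two scorecards would compute this statistic for each over the same sample; if that sample excludes the loans one scorecard would have rejected, both statistics describe only the prescreened population.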
However, using a sample of loans that had not been prescreened by Loan Prospector might have yielded somewhat different results that would have more accurately represented TOTAL’s predictive capabilities. While TOTAL includes many of the variables found in other mortgage scoring systems, it omits several important ones. For example, the systems used by Fannie Mae and Freddie Mac may assign higher risks to adjustable rate loans than to fixed-rate loans. ARMs are generally considered to be higher risk than otherwise comparable fixed-rate mortgages because borrowers are subject to higher payments if interest rates rise. Further, other scoring systems often include indicators for property type (single-family detached, two- to four-unit, or condominiums, for example). FHA indicated that these variables were not included in TOTAL because the risk associated with them did not differ significantly in the 1992 data used to estimate the model. However, the 1992 data set was fairly small—fewer than 15,000 loans—and only about 16 percent of it consisted of ARMs. In addition, condominiums and multiunit properties are a small component of FHA’s business. The modeling effort may have failed to find significant effects for these variables simply because of the small numbers of loans with these characteristics in the development sample. Previous research by FHA contractors on larger samples of FHA loans found that ARMs from this period were riskier than comparable fixed-rate mortgages. The fact that FHA’s scoring system does not consider the extra risk inherent in ARMs or distinguish between different types of properties, while competitors’ systems do, could have important consequences. If marginal applications for ARMs or multiunit properties are rejected by competitors’ systems but accepted by FHA’s, then FHA’s share of these riskier loans may increase. 
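A small simulation can make the statistical-power point concrete. The claim rates, sample size, and ARM share below are illustrative assumptions (loosely patterned on the report's description of a sub-15,000-loan sample that was about 16 percent ARMs), not FHA data. Even when ARMs are assumed to be genuinely riskier, a two-proportion test on a sample of this size frequently fails to reach significance at the conventional 95 percent level.

```python
import math
import random

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two observed claim rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

random.seed(7)
n_fixed, n_arm = 12_600, 2_400    # ~16 percent ARMs in a 15,000-loan sample
p_fixed, p_arm = 0.05, 0.058      # assumed true claim rates (illustrative)

trials, detections = 200, 0
for _ in range(trials):
    obs_fixed = sum(random.random() < p_fixed for _ in range(n_fixed)) / n_fixed
    obs_arm = sum(random.random() < p_arm for _ in range(n_arm)) / n_arm
    if two_prop_z(obs_fixed, n_fixed, obs_arm, n_arm) > 1.96:
        detections += 1
print(f"ARM effect detected in {detections} of {trials} simulated samples")
```

With these assumptions the effect goes undetected in a large share of the simulated samples, illustrating how a real risk difference can look statistically insignificant in a small development subsample even though larger samples would reveal it.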
Finally, FHA does not include the source of the down payment in its scorecard. However, HUD contractors, HUD’s Inspector General, and we have all identified the source of a down payment as an important indicator of risk, and the use of down payment assistance in the FHA program has grown rapidly over the last 5 years. For example, as we reported in November 2005, FHA-insured loans with down payment assistance have higher delinquency and insurance claim rates than do similar loans without such assistance. FHA chose a logit rather than a hazard model as a basis for TOTAL and therefore potentially limited the variety of uses to which the scorecard can be put. While a logit model predicts the probability of default for a specific point in time, a hazard model, as previously noted, predicts the probability of default over multiple time periods. Because a hazard model captures the dynamic between time and loan performance, HUD could use it to project cash flows over time and estimate profitability. In addition, a hazard model more readily accepts and analyzes recent data, and FHA could update a scorecard developed from this model with recent origination data as often as it needs. Moreover, with a relatively current scorecard, FHA could monitor market changes and TOTAL’s effectiveness at predicting defaults in the current climate. Despite the added capabilities of a hazard model, FHA officials stated that the logit model was sufficient for TOTAL’s intended purpose, which was only to rank-order applications for FHA-insured loans based on the likelihood of default. FHA uses TOTAL Scorecard in much the same way as its two earlier scorecards—to inform underwriting standards and assess loan applications against those standards. TOTAL has produced more consistent underwriting results and, for some lenders, has streamlined the approval process and reduced paperwork. 
Private sector organizations use their scorecards more broadly, relying on them to assess risk, help launch new products, and broaden their customer base, and they update them regularly. FHA could realize similar types of benefits from TOTAL to help the agency serve low- and moderate-income borrowers while ensuring its financial soundness. In addition, the credit data used by TOTAL could help to improve the transparency of the secondary market for FHA-insured loans. FHA used TOTAL to test variables and identify the most predictive ones, which the agency then used to inform its underwriting standards. Therefore, TOTAL enables FHA to adjust its underwriting standards, if needed, based on analyses of current market conditions—something that Desktop Underwriter and Loan Prospector did not readily allow because FHA did not have direct access to them. In addition, FHA directs lenders to use TOTAL to assess loan applications by entering information that corresponds to certain variables. As with the previous scorecards, the only lenders that can directly interface with TOTAL and input loan application data into the scorecard via automated underwriting systems are direct endorsement lenders. Direct endorsement lenders can assess most FHA loan products with TOTAL (see app. II). As described in table 1, FHA’s current use of TOTAL has provided additional benefits over previous scorecards, such as less paperwork for lenders and more consistent underwriting decisions. Loan Prospector and Desktop Underwriter had, among other things, helped speed up the application process and provided an opportunity to base approvals on objectively determined variables. TOTAL continues these benefits and, in addition, has generated two others. 
First, as noted earlier, the previous scorecards did not always provide consistent underwriting decisions—that is, at times the results of their assessments differed, which resulted in the same loan being accepted by one scorecard and referred by the other. As a result, certain loans had to be approved manually, through potentially subjective decision making. TOTAL limits the number of loans that need to be approved manually because it provides consistent automatic underwriting decisions. Second, lenders that use TOTAL do not have to provide as much documentation for the accepted loans they underwrite as lenders that do not use TOTAL. For example, these lenders do not have to obtain or submit verification of rent, and the requirements for proof of income, employment, and assets are less stringent. As noted earlier, the key to successfully using a scorecard is ensuring that it is updated so that it can provide accurate and useful information. Updated scorecards can provide a number of benefits because of the variety of potential uses. Private sector organizations we spoke with said that their scorecards had produced the same benefits as TOTAL, including reducing loan origination times and enhancing consistency and objectivity in the underwriting process. In addition, private sector organizations use their scorecards to help inform general management decision making, set prices based on risk, and launch new products. To inform general management decision making, private sector organizations compare their scorecards’ actual results with their predictions to, for example, set cut points and redirect underwriting resources from relatively low-risk cases to more marginal borrowers. To set risk-based prices, private sector organizations use scorecards to rank the relative risk of borrowers and price products according to that ranking. For instance, mortgage insurers may use FICO scores as a basis for reducing insurance premiums for low-risk borrowers. 
Finally, to help launch new products, these lenders may use scorecards to balance risk and compensating factors. For example, a product with a more flexible LTV requirement could be offered to borrowers with compensating characteristics such as a strong credit history. As a result of these uses, private lenders have been able to broaden their customer base and improve their financial performance. Expanding their product offerings based on a greater understanding of risk allows lenders to reach borrowers they could not previously serve. Lenders told us that their scorecards had allowed them to underwrite some borrowers who would have been rejected using manual underwriting and to develop products to better serve borrowers who were at a greater risk of default. One official noted that the scorecard had provided a greater understanding of the individual borrower’s risk and that, as a result, borrowers who would previously have been considered for subprime loans were now rated at a higher level of eligibility. In addition, lenders reported being able to reduce personnel costs because the organizations were writing fewer loans manually. Ultimately, these lenders said that they were able to maximize their profits because of the streamlining and cost reductions the scorecards provided. FHA could see additional benefits from TOTAL if it implemented some private sector practices. By routinely monitoring and updating TOTAL, for instance, FHA could better anticipate, understand, and react to changes in the marketplace. FHA could also exercise more control over its financial condition by using the scorecard to help (1) project estimated insurance claims and adjust cut points and (2) institute its proposal for risk-based pricing of the agency’s mortgage insurance products. FHA could also use TOTAL to aid its efforts to develop new products for underserved borrowers. FHA could better anticipate, understand, and react to changes in the marketplace if, like the private sector, it routinely updated TOTAL. 
Updating the scorecard as new data become available could help ensure that the model reflects changes in consumer behavior, which can be affected by new products and other market trends. By routinely comparing the scorecard’s actual results to its predictions, FHA could ascertain whether TOTAL was effectively predicting default risk and make any necessary changes to the variables. In addition, FHA could use TOTAL to more accurately determine the performance of new loans, which HUD currently monitors on an ad hoc basis, to inform policy discussions on the creation and revision of FHA products. FHA could exercise more control over its financial condition, specifically its credit subsidy costs and financial soundness, by using the scorecard’s default predictions to project estimated claims and adjust cut points if necessary. To project estimated insurance claims, FHA would need to combine the variables’ weights estimated in the scorecard development process with projections of interest and house price appreciation rates, as is done in FHA’s actuarial studies. Based on its projections, FHA could then determine how much risk it could or should tolerate and make adjustments, if necessary, to the cut points and thus to the numbers and types of loans it automatically accepted and referred for manual underwriting. For example, if FHA raised the cut point, TOTAL would accept fewer high-risk loans (i.e., loans more likely to result in an insurance claim), thereby lowering FHA’s claim rate. Conversely, if FHA lowered the cut point, TOTAL would accept more high-risk loans, and the agency would experience a higher claim rate. TOTAL could also aid HUD’s efforts to implement risk-based pricing of its mortgage insurance products. 
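The claim-rate arithmetic behind cut point adjustments can be sketched directly from a scorecard's own predictions. In this illustration each application carries a score and a model-estimated claim probability, both hypothetical; raising the cut point shrinks the automatically accepted book and lowers its projected claim rate, which is the trade-off described above.

```python
def projected_book(loans, cut_point):
    """loans: list of (score, predicted_claim_probability) pairs.
    Auto-accept loans scoring at or above the cut point, then project
    the acceptance rate and the accepted book's expected claim rate
    from the model's own probabilities."""
    accepted = [p for score, p in loans if score >= cut_point]
    if not accepted:
        return 0.0, 0.0
    accept_rate = len(accepted) / len(loans)
    claim_rate = sum(accepted) / len(accepted)
    return accept_rate, claim_rate
```

Because raising the cut point excludes the highest-probability loans first, the projected claim rate of the accepted book falls as the acceptance rate falls, giving the agency a direct lever over the risk it accepts automatically.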
In its fiscal year 2007 budget submission, HUD proposed legislation that would allow the agency to replace its current insurance premium structure, under which most borrowers pay the same premium regardless of their default risk, with a risk-based structure under which borrowers would pay higher or lower premiums depending on their default risk. HUD believes that risk-based pricing would allow the agency to charge more competitive mortgage insurance premiums, attract and retain relatively low-risk borrowers, and exercise more control over its credit subsidy costs. HUD plans to set premiums based on an assessment of borrowers’ credit histories, LTVs, and debt-to-income ratios. However, it has not fully explored the potential of using TOTAL for this purpose—especially a version that includes additional variables, such as down payment assistance—even though the scorecard is capable of evaluating risk in a more comprehensive way. In its budget submissions for fiscal years 2006 and 2007, HUD also proposed legislative changes that would allow FHA to develop new mortgage insurance products for low- and moderate-income borrowers (loans with lower down payment requirements, for example). HUD believes that its traditional customers would be better served by these new products than by some of the high-cost, nonprime products offered in the conventional market. To the extent that FHA develops these products, it could use TOTAL to help identify alternatives that it previously may have believed posed too much risk, given the expected profit, when its lenders manually underwrote loans. HUD’s Ginnie Mae—which guarantees the timely payment of principal and interest on securities issued by private institutions and backed by pools of federally insured or guaranteed mortgage loans—could benefit from the credit data used by TOTAL. As we reported in October 2005, Ginnie Mae has taken steps to disclose more information to investors about the FHA-insured loans that back the securities it guarantees. 
However, unlike many conventional securitizers, Ginnie Mae does not disclose credit information—for example, summarized credit score data—for its loan pools. Disclosing such information is important because investors can use it to more accurately model prepayment rates. According to a Ginnie Mae official, prior to the implementation of TOTAL in 2004, the credit scores associated with FHA-insured loans were not available within HUD. Because borrowers’ credit scores are used by TOTAL, Ginnie Mae has expressed interest in obtaining this information and summarizing it for investors. Although FHA has helped to provide financing for nearly 33 million properties, its share of the single-family market has steadily decreased over time. Many of these potential borrowers—typically, first-time homebuyers with minimal cash for down payments and lower than average credit scores—may have been lost to conventional lenders. These lenders have been able to provide conventional mortgages to such borrowers in part through the increased use of scorecards—the evaluative component of automated underwriting systems—which have enabled them to target the traditional FHA borrowers who pose the least risk. If that is the case, the effect is that FHA has started to serve more high-risk borrowers. To enhance its understanding of the risk posed by its borrowers, FHA has adopted automated underwriting and developed its own scorecard. FHA followed an accepted process in developing TOTAL and has already seen significant benefits from the scorecard. Because TOTAL has the same types of capabilities as private sector scorecards, FHA has the option to use and benefit from TOTAL in many of the same ways as private sector organizations do. Specifically, FHA could use TOTAL to help the agency compete in the marketplace, manage risk, and fulfill its mission of serving borrowers. 
TOTAL’s capabilities are important to FHA, in part, because as it begins to insure more inherently risky loans, such as loans with down payment assistance, it needs to understand the risks they pose to the FHA insurance fund and manage those risks. However, the potential benefits of TOTAL cannot be realized unless FHA regularly updates the scorecard and explores additional uses for it. For example, if FHA does not develop and implement policies and procedures for routinely updating TOTAL, the scorecard may become less reliable and, therefore, less effective at predicting defaults. In addition, if FHA does not explore additional uses of TOTAL, it will not receive all of the types of benefits seen by private sector organizations. These additional uses include applying TOTAL to proposed initiatives—such as risk-based pricing and the development of new products—which may help strengthen the FHA insurance fund and reach additional borrowers. Finally, FHA has not taken steps to share the credit scores utilized by TOTAL with Ginnie Mae, which could use the information to help improve the transparency of the secondary mortgage market. To improve how HUD uses and benefits from TOTAL, we recommend that the Secretary of HUD take the following two actions: develop policies and procedures for updating TOTAL on a regular basis, including using updated data, testing additional variables, exploring hazard model benefits, and testing other cut points; and explore additional uses of TOTAL and the credit data it utilizes, including to help adjust cut points, implement risk-based pricing, develop new products, and enable Ginnie Mae to disclose more information about securities backed by FHA-insured loans. We provided HUD with a draft of this report for review and comment. HUD provided comments in a letter from the Assistant Secretary for Housing-Federal Housing Commissioner (see app. III). 
HUD made two general observations about the report and provided specific comments on our recommendations. First, HUD said the report did not convey the fact that developing TOTAL was a HUD initiative to modernize its processes and improve its delivery to business partners. Our draft report did discuss HUD’s rationale for implementing TOTAL and the scorecards that preceded it. It also discussed the benefits of these scorecards to FHA lenders, including less paperwork and quicker approval of mortgage insurance. However, in response to HUD’s comments, we added language to the report that further describes HUD’s motivation for developing TOTAL. Second, HUD said that TOTAL was working exactly as envisioned (i.e., segregating loans requiring limited underwriting and documentation from those requiring a full review by an individual underwriter) and that the draft report presented no evidence that the scorecard had failed to perform as expected. HUD also indicated that the agency had provided us with information and analysis based on FHA loan data from the 1990s, showing that TOTAL performed well in separating loans that resulted in insurance claims from those that did not. Our draft report did not state or intend to suggest that TOTAL was not fulfilling its intended function or was not working as well as expected. In fact, the report pointed out that TOTAL had continued the benefits of previous scorecards while generating others. At the same time, our draft report identified opportunities for HUD to improve TOTAL so that it could become a more effective tool for assessing and managing risk. For example, HUD could improve TOTAL by updating it to reflect recent changes in the mortgage market, such as the substantial growth in the percentage of FHA-insured loans with down payment assistance. 
HUD did not explicitly agree or disagree with our recommendation that it should develop policies and procedures for updating TOTAL, including using updated data, testing additional variables, exploring hazard model benefits, and testing other cut points. HUD indicated that it was taking steps to address some aspects of our recommendation but not others, as follows: HUD said that it had a formal plan for updating TOTAL, access to TOTAL’s development and implementation contractors to accommodate updates should the need arise, and contracts for acquiring credit data to support an update of the scorecard. As our draft report discussed, HUD had a contract to update TOTAL by 2007. However, best practices implemented by private entities and reflected in guidance from a bank regulator call for having formal policies to ensure that scorecards are routinely updated. HUD’s current plan calls for one update to be completed by 2007 (7 years after HUD finalized the scorecard model) and has no provision for subsequent updates. Accordingly, we continue to believe that HUD should develop policies and procedures for updating TOTAL on a regular basis. HUD acknowledged that it had used 1992 data to develop TOTAL but stated that the data spanned a wide range of credit scores and application factors represented in greater or lesser numbers in later cohorts of loans. We disagree that the 1992 loan data sufficiently represent later cohorts of loans and thus continue to believe that HUD should use more current loan data to update TOTAL. As our draft report stated, significant changes have occurred in the mortgage industry since 1992 that have affected the characteristics of those applying for FHA-insured loans. These changes include generally lower credit scores, increased use of down payment assistance, and new mortgage products that have allowed borrowers who would previously have needed an FHA-insured loan to seek conventional mortgages. 
HUD said that in developing TOTAL, the agency and Unicon tested all the available variables and included those that were empirically important, consistent with Equal Credit Opportunity Act (ECOA) regulations (which, among other things, set forth rules for evaluating credit applications). HUD also said that it intends to re-analyze all available variables, including, as our draft report suggested, the source and amount of down payment assistance. We agree that HUD should re-analyze all available variables and incorporate them into TOTAL, consistent with ECOA requirements. Our draft report stated that HUD’s analysis of certain variables, such as loan and property type, may not have found significant effects simply because of the small numbers of loans in HUD’s sample that were ARMs or were for condominiums or multiunit properties. HUD could conduct future analyses with greater statistical reliability if it were to use larger samples of loans, as major private lending organizations do. HUD stated that because TOTAL was designed to assess the creditworthiness of borrowers, the logit model was sufficient for that purpose. However, HUD also acknowledged that a hazard model could be used for the purposes enumerated in our draft report. Accordingly, we continue to believe that HUD should explore the benefits of a hazard model. HUD said that it did not rely solely on a 1992 sample of loans in setting a cut point for TOTAL and that it worked with Unicon, Fannie Mae, and Freddie Mac, using recent distributions of loans, to obtain a cut point that was consistent with the ones already in use for FHA lending. Our draft report did not state that HUD relied solely on a 1992 sample of loans. Rather, it indicated that the cut point was based partly on a 1996 analysis that Freddie Mac performed in consultation with FHA. However, in response to this comment, we added additional language to the report describing how HUD determined the cut point. 
HUD did not address the fundamental issue raised in our draft report—that the limitations of its original analysis suggest that the agency should test additional cut points. We continue to believe that HUD should test other cut points based on analysis of loans accepted by TOTAL. HUD did not explicitly agree with our recommendation that it should explore additional uses of TOTAL, such as using it to help adjust cut points, implement risk-based pricing, develop new products, and enable Ginnie Mae to disclose more information about securities backed by FHA-insured loans. However, the actions HUD said it plans to take are consistent with our recommendation. Specifically, HUD said that while TOTAL was not intended for risk-based pricing, the agency planned to explore how TOTAL might be used for that purpose. HUD stated that it planned to determine the benefits that TOTAL could present in developing new products, if given the authority from Congress. HUD said that it was exploring the legal ramifications of giving Ginnie Mae the credit scores obtained using TOTAL. HUD also provided a technical correction, which we addressed in our final report, concerning how it stores these credit scores. Finally, HUD stated that the draft report contained several errors and that these errors had been previously pointed out in meetings with us. Where appropriate, we made technical corrections and clarifications in response to HUD’s written comments and comments provided by a HUD official at a March 2006 meeting to discuss our findings. However, we found that many of these comments, rather than correcting any errors, merely provided additional levels of detail that were unnecessary for the purpose of this report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. 
At that time, we will send copies to the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member of the House Committee on Financial Services; and the Ranking Member of the Subcommittee on Housing and Community Opportunity. We also will send copies to the Secretary of Housing and Urban Development and other interested parties and make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To assess the reasonableness of the Federal Housing Administration’s (FHA) approach to developing Technology Open to Approved Lenders (TOTAL), we reviewed agency documents and interviewed the Department of Housing and Urban Development (HUD) and contractor officials to determine (1) the process and data used to develop TOTAL, including how FHA identified and evaluated scorecard variables; (2) the reliability of the analysis used to evaluate TOTAL’s effectiveness in predicting defaults; and (3) how FHA established policies on cut points and overrides. In addition, we reviewed industry literature and interviewed private sector officials from large (based on volume) lending and private mortgage insurance organizations to determine the extent to which FHA’s development of TOTAL is consistent with private sector practices. To assess the benefits to FHA of expanding its use of TOTAL, we reviewed existing research on the uses and benefits of scorecards and interviewed private sector companies, academics, and HUD officials about these issues. 
We also determined how FHA and lenders use TOTAL by reviewing relevant agency guidance and reports and interviewing FHA officials and private lenders. In doing this work, we looked for any ways that FHA and lenders are using TOTAL differently than the scorecards TOTAL replaced. We compared FHA’s use of TOTAL with the private sector’s use of scorecards and determined whether FHA could benefit from any private sector practices that it has not already adopted. We also identified any opportunities that may exist for FHA to share information with other HUD offices that could benefit from TOTAL. We conducted our work in Washington, D.C., between April 2005 and February 2006 in accordance with generally accepted government auditing standards. In addition to the individual named above, Steve Westley, Assistant Director; Triana Bash; Austin Kelly; Mamesho MacCaulay; John McGrail; Mitch Rachlis; Rachel Seid; and Grant Turner made key contributions to this report. | Along with private mortgage providers, the Department of Housing and Urban Development's (HUD) Federal Housing Administration (FHA) has been impacted by technological advances that began in the mid-1990s and that have significantly affected the way the mortgage industry works. As a result, in 2004, FHA implemented Technology Open to Approved Lenders (TOTAL) Scorecard--an automated tool that evaluates the majority of new loans insured by FHA. However, questions have emerged about the effectiveness of TOTAL. Given these concerns, you asked GAO to evaluate the way the agency developed and uses this new tool. This report looks at (1) the reasonableness of FHA's approach to developing TOTAL and (2) the potential benefits to HUD of expanding its use of TOTAL. Some of the choices that FHA made during the development process could limit TOTAL's effectiveness, although overall the process was reasonable. 
Like the private sector, FHA and its contractor used many of the same variables, as well as an accepted modeling process, to develop TOTAL. However, the data that FHA and its contractors used to develop TOTAL were 12 years old by the time FHA implemented the scorecard, and the market has changed significantly since then. Also, FHA, among other things, (1) did not develop a formal plan for updating TOTAL on a regular basis; (2) did not include all the important variables that could help explain expected loan performance; and (3) selected a type of model that limits how the scorecard can be used. Despite potential problems with TOTAL, HUD could still see added benefits from it. As a result of TOTAL, FHA lenders and borrowers have seen two new benefits--less paperwork and more consistent underwriting decisions. However, FHA could gain additional benefits if, like private lenders and mortgage insurers, it put TOTAL to other uses. These uses include relying on TOTAL to help inform general management decision making, price products based on risk, and launch new products. Adopting these scorecard uses from the private sector could potentially generate three other benefits for FHA, including the ability to react to changes in the market, more control over its financial condition, and a broader customer base. Additionally, HUD's Government National Mortgage Association, a government corporation that guarantees securities of federally insured or guaranteed mortgage loans, could use credit scores that are used by TOTAL to help improve the transparency of the secondary mortgage market. |
Air cargo ranges in size from 1 pound to several tons, and in type from perishables to machinery, and can include items such as electronic equipment, automobile parts, clothing, medical supplies, other dry goods, fresh cut flowers, fresh seafood, fresh produce, tropical fish, and human remains. Cargo can be shipped in various forms, including large containers known as unit loading devices that allow many packages to be consolidated into one container that can be loaded onto an aircraft, wooden crates, assembled pallets, or individually wrapped/boxed pieces, known as break bulk cargo. Participants in the air cargo shipping process include shippers, such as individuals and manufacturers; indirect air carriers, also referred to as freight forwarders; air cargo handling agents who process and load cargo onto aircraft on behalf of air carriers; and air carriers that store, load, and transport cargo. A shipper may also send freight by directly packaging and delivering it to an air carrier’s ticket counter or sorting center where either the air carrier or a cargo handling agent will sort and load cargo onto the aircraft. According to TSA’s Air Cargo Strategic Plan, issued in November 2003, the agency’s mission for the air cargo program is to secure the air cargo transportation system while not unduly impeding the flow of commerce. TSA’s responsibilities for securing air cargo include, among other things, establishing security requirements governing domestic and foreign passenger air carriers that transport cargo, and domestic freight forwarders. TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and freight forwarders through compliance inspections, and, in coordination with the Department of Homeland Security’s (DHS) Science and Technology (S&T) Directorate, for conducting research and development of air cargo security technologies. 
Air carriers are responsible for implementing TSA security requirements, predominantly through a TSA-approved security program that describes the security policies, procedures, and systems the air carrier will implement and maintain to comply with TSA security requirements. These requirements include measures related to the acceptance, handling, and screening of cargo; training of employees in security and cargo screening procedures; testing employee proficiency in cargo screening; and access to cargo areas and aircraft. If threat information or events indicate that additional security measures are needed to secure the aviation sector, TSA may issue revised or new security requirements in the form of security directives or emergency amendments applicable to domestic or foreign air carriers. Air carriers must implement the requirements set forth in the security directives or emergency amendments in addition to those requirements already imposed and enforced by TSA. DHS’s U.S. Customs and Border Protection (CBP) has primary responsibility for preventing terrorists and instruments of terrorism from entering the United States. Specifically, CBP screens inbound air cargo upon its arrival in the United States to ensure that cargo entering the country complies with applicable laws and does not pose a security risk. CBP’s efforts include analyzing information on cargo shipments to identify high-risk cargo arriving in the United States that may contain terrorists or weapons of mass destruction, commonly known as targeting, and physically screening this cargo upon its arrival. Air carriers use several methods and technologies to screen cargo. These currently include manual physical searches and the use of approved technology, such as X-ray systems; explosives trace detection systems; decompression chambers; explosive detection systems (EDS); and certified explosives detection canine teams. 
Under TSA’s security requirements for domestic and inbound cargo, passenger air carriers are currently required to randomly screen a specific percentage of nonexempt cargo pieces listed on each airway bill. As of October 2006, domestic freight forwarders are also required, under certain conditions, to screen a certain percentage of cargo prior to its consolidation. TSA does not regulate foreign freight forwarders, or individuals or businesses that have their cargo shipped by air to the United States. DHS has taken some steps to develop and test technologies for screening and securing air cargo, but has not yet completed assessments of the technologies TSA plans to approve for use as part of the CCSP. According to TSA officials, there is no single technology capable of efficiently and effectively screening all types of air cargo for the full range of potential terrorist threats, including explosives and weapons of mass destruction. We reported in October 2005, and again in April 2007, that TSA, working with DHS’s S&T Directorate, was developing and pilot testing a number of technologies to screen and secure air cargo with minimal impact on the flow of commerce. DHS officials stated that once the department determines which technologies it will approve for use with domestic air cargo, it will consider the use of these technologies for enhancing the security of inbound cargo shipments. These pilot programs seek to enhance the security of cargo by improving the effectiveness of air cargo screening through increased detection rates and reduced false alarm rates, while addressing the two primary threats to air cargo identified by TSA— hijackers on an all-cargo aircraft and explosives on passenger aircraft. A description of these pilot programs and their status is included in table 1. 
Although TSA is moving forward with its plans to implement a system to screen 100 percent of cargo transported on passenger aircraft, the agency has not completed all of its assessments of air cargo screening technologies. According to TSA officials, the results of its technology tests will need to be analyzed before the agency determines which technologies will be certified for screening cargo, and whether it will require air carriers and other CCSP participants to use such technology. Although TSA has not completed all of its pilot programs or set time frames for completing all of them, TSA is planning on allowing CCSFs to use explosives trace detection, explosive detection system (EDS), X-ray, and other technology under CCSP for screening cargo. Without all of the results of its pilot programs or a time frame for their completion, however, TSA cannot be assured that the technologies the agency plans to approve for screening cargo as part of the CCSP are effective. GAO will likely review this issue as part of our planned review of TSA’s efforts to meet the requirement to screen 100 percent of cargo transported on passenger aircraft. According to TSA officials, tamper-evident/resistant security seals will be essential for ensuring that cargo screened under the CCSP has not been tampered with during transport from the CCSF to the air carrier. Officials noted that the agency recognizes that the weakest link in the transportation of air cargo is the chain of custody to and from the various entities that handle and screen cargo shipments prior to its loading onto an aircraft. Officials stated that the agency has taken steps to analyze the chain of custody of cargo under the CCSP, and is drafting a security program that will address all entities involved in the transportation and screening of cargo under the CCSP to ensure that the chain of custody of the cargo is secure. 
However, as of July 2008, TSA officials stated that the agency is not conducting a pilot program to test tamper-evident/resistant security seals. The effectiveness of security seals in preventing tampering with cargo shipments is therefore unknown. GAO will likely review this issue as part of our planned review of TSA’s efforts to meet the requirement to screen 100 percent of cargo transported on passenger aircraft. In addition, we reported in April 2007 that several air carriers we met with were using large X-ray machines at facilities abroad to screen entire pallets of cargo transported on passenger aircraft. These machines allow cargo on pallets to undergo X-ray screening without requiring the pallet to be broken down. We also noted that CBP uses this technology to screen inbound air cargo once it enters the United States. TSA officials recently stated that the agency planned to pilot test large X-ray machines, noting that these machines could be used to screen certain types of cargo that are currently exempt from TSA’s screening requirements, as part of the agency’s efforts to screen 100 percent of cargo transported on passenger aircraft. TSA officials stated that the agency plans to evaluate this equipment beginning in late 2008 as part of its CCSP pilot program and to complete the evaluation at the conclusion of the CCSP pilot in August 2010. In addition, as part of the agency’s plans to screen 100 percent of cargo transported on passenger aircraft, TSA is taking steps to expand the use of TSA-certified explosives detection canine teams to screen cargo before it is placed onto passenger aircraft. In 2004, TSA conducted a pilot program that determined that canine teams had an acceptable rate of detecting explosives in an air cargo environment, even when the teams were not specifically trained in this area. 
TSA is in the process of adding 170 canine teams to support aviation security efforts, of which 85 will be primarily used to screen air cargo. These teams are to be primarily located at the 20 airports that receive approximately 65 percent of all air cargo transported within the United States. TSA officials, however, could not identify whether the additional 85 canine teams will meet the agency’s increasing screening needs as part of its efforts to screen 100 percent of such cargo, thus raising questions regarding the future success of the CCSP. According to TSA officials, the federal government and the air cargo industry face several challenges that must be overcome to effectively implement any of these technologies to screen or secure cargo. These challenges include factors such as the nature, type and size of cargo to be screened; environmental and climatic conditions that could impact the functionality of screening equipment; low screening throughput rates; staffing and training issues for individuals who screen cargo; the location of air cargo facilities; employee health and safety concerns, such as worker exposure to radiation; and the cost and availability of screening technologies. As TSA takes steps to implement the CCSP, it will be critical for the agency to address these challenges to ensure the effectiveness of the program. As TSA proceeds from piloting to implementing the CCSP, the issue of who purchases the technologies to support the program will have to be resolved. Specifically, TSA officials stated that under the CCSP, certified facilities and air carriers will be responsible for purchasing equipment to screen cargo. Officials noted that many air carriers already have screening equipment in place at their facilities to support this screening, and stated that TSA will reimburse CCSFs for the cost of the equipment, such as EDS, for up to $375,000 per facility as long as these entities continue to meet security requirements established by TSA. 
The CCSF, however, will be responsible for maintaining the screening equipment and purchasing new equipment in the future. In addition, CCSFs will be required to train their staff to operate the equipment using TSA’s training standards. Air cargo industry stakeholders have already raised concerns regarding the cost of purchasing and maintaining screening equipment to support the CCSP. According to some industry estimates, the cost of purchasing air cargo screening equipment will be much more than the $375,000 TSA plans to reimburse each CCSP participant. In addition, the air cargo industry has expressed concern regarding the costs associated with training those individuals who will be operating the air cargo screening equipment. TSA plans to revise and eliminate current exemptions for some categories of cargo, thereby reducing the percentage of cargo transported on passenger aircraft that is subject to alternative methods of screening. These changes will go into effect in early 2009. However, according to agency officials, TSA made these determinations based on a limited number of vulnerability assessments, as well as professional judgment. In February 2008, TSA issued a report assessing existing screening exemptions for certain kinds of cargo transported on passenger aircraft and evaluated the risk of maintaining those exemptions. As part of its assessment, TSA officials stated that they considered and determined the threat to and vulnerability of the exempted cargo types. TSA officials also stated they based their determinations on which screening exemptions to revise, maintain or eliminate in part on results from air cargo vulnerability assessments at Category X airports they completed in accordance with law. TSA has completed assessments at 6 of the 27 Category X airports. 
Absent the completed assessments, which could help to identify potential security vulnerabilities associated with the exemptions, TSA does not have complete information with which to make risk-based decisions regarding the security of air cargo. TSA officials have acknowledged the importance of completing air cargo vulnerability assessments and stated that they will complete them by the end of 2009. Officials further stated that as the agency conducts additional air cargo vulnerability assessments, they will assess the results to determine whether existing screening exemptions should be revised, maintained or eliminated. To ensure that existing air cargo security requirements are being implemented as required, TSA inspects air carriers and freight forwarders that transport cargo. Under the CCSP, TSA will also have to inspect other entities, such as shippers, who volunteer to participate in the program. These compliance inspections range from an annual comprehensive review of the implementation of all air cargo security requirements to a more frequent review of at least one security requirement by an air carrier or freight forwarder. In October 2005, we reported that TSA had conducted compliance inspections on less than half (49 percent) of the estimated 10,000 freight forwarder facilities nationwide, and of those freight forwarders they had inspected, the agency found violations in over 40 percent of them. We also reported that TSA had not determined what constitutes an acceptable level of performance related to compliance inspections, or compared air carriers’ and freight forwarders’ performance against this standard; analyzed the results of inspections to systematically target future inspections on those entities that pose a higher security risk to the domestic air cargo system; or assessed the effectiveness of its enforcement actions taken against air carriers and freight forwarders to ensure that they are complying with air cargo security requirements. 
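The scale of that inspection shortfall can be sketched with simple arithmetic. The snippet below treats the reported figures ("less than half (49 percent)" inspected, violations in "over 40 percent") as point estimates, which is an assumption, so the results are illustrative lower bounds rather than exact counts:

```python
# Rough arithmetic from TSA's October 2005 compliance-inspection figures.
# Percentages are treated as point estimates (an assumption), so these are
# illustrative lower bounds rather than exact counts. Integer math avoids
# floating-point rounding surprises.
total_facilities = 10_000                 # estimated freight forwarder facilities
inspected = total_facilities * 49 // 100  # "less than half (49 percent)" inspected
with_violations = inspected * 40 // 100   # violations found in "over 40 percent"
not_yet_inspected = total_facilities - inspected

print(inspected)          # 4900 facilities inspected
print(with_violations)    # at least 1960 facilities with violations
print(not_yet_inspected)  # 5100 facilities still awaiting a first inspection
```

On these assumptions, roughly 2,000 inspected facilities had at least one violation while more than 5,000 facilities had never been inspected at all, which underscores why targeting future inspections at higher-risk entities matters.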
We recommended that TSA develop a plan for systematically analyzing and using the results of air cargo compliance inspections to target future inspections and identify systemwide corrective actions. We also recommended that TSA assess the effectiveness of enforcement actions in ensuring air carrier and freight forwarder compliance with air cargo security requirements. TSA officials stated that, since our report was issued, the agency has increased the number of inspectors dedicated to conducting domestic air cargo compliance inspections. Officials also told us that TSA has begun analyzing compliance inspection results to prioritize their inspections on those entities that have the highest rates of noncompliance, as well as newly approved freight forwarders and air carriers that have yet to be inspected. However, in recent discussions with TSA officials regarding their plans to implement the CCSP, they stated that there may not be enough compliance inspectors to conduct compliance inspections of all the entities that could be a part of the CCSP, which TSA officials told us could number in the thousands, once the program is fully implemented by August 2010. As a result, TSA is anticipating requesting an additional 150 cargo Transportation Security Inspectors for fiscal year 2010 to supplement its existing allocation of 450 Transportation Security Inspectors. However, TSA officials stated that they have not formally assessed the number of Transportation Security Inspectors the agency will need. Without such an assessment, TSA may not be able to ensure that entities involved in the CCSP are meeting TSA requirements to screen and secure cargo. GAO will likely review this issue as part of our planned review of TSA’s efforts to meet the requirement to screen 100 percent of cargo transported on passenger aircraft. We reported in April 2007 that more work remains in order for TSA to strengthen the security of inbound cargo. 
As previously stated, TSA is currently taking steps to develop a system of screening 100 percent of domestic and outbound cargo transported on passenger aircraft. TSA does not, however, currently plan to include inbound cargo as part of this system. TSA officials acknowledge that vulnerabilities to inbound cargo exist, but stated that each foreign country has its own security procedures for flights coming into the United States, and further stated that TSA does not impose its security requirements on foreign countries. According to TSA, it will continue to work with other countries to encourage the adoption of uniform measures for screening cargo flights bound for the United States as it enhances its requirements for screening cargo originating in the United States. TSA has begun working with foreign governments to develop uniform air cargo security standards and to mutually recognize each other’s security standards, referred to as harmonization. We reported, however, that duplicative air cargo security standards exist, which can impede the flow of commerce, expose air cargo shipments to security risk, and damage high-value items. For example, to meet TSA requirements, passenger air carriers transporting cargo into the United States must screen a certain percentage of nonexempt cargo shipments, even though these shipments may have already been screened by a foreign government. Air carrier representatives stated that meeting TSA screening requirements is problematic in certain foreign countries because air carriers are not permitted to rescreen cargo shipments that have already been screened by foreign government employees and deemed secure. These conflicts and duplication of effort could potentially be avoided through harmonization. According to TSA officials, pursuing harmonization would improve the security of inbound cargo and assist TSA in performing its mission. 
For example, officials stated that the harmonization of air cargo security standards would provide a level of security to those entities not currently regulated by the agency, such as foreign freight forwarders and shippers. However, achieving harmonization with foreign governments may be challenging because these efforts are voluntary and some foreign countries do not share the United States’ view regarding air cargo security threats and risks. Additionally, foreign countries may lack the resources or infrastructure needed to develop an air cargo security program as comprehensive as that of the United States. In April 2007, we recommended that TSA, in collaboration with foreign governments and the United States air cargo industry, systematically compile and analyze information on air cargo security practices used abroad to identify those that may strengthen TSA’s overall air cargo security program. TSA agreed with this recommendation and, since the issuance of our report, has reviewed the air cargo screening models of two foreign countries. According to TSA officials, this review led to the design of their proposed CCSP. Opportunities exist for TSA to further strengthen its screening efforts for inbound cargo in the following three key areas: Conducting air cargo vulnerability assessments for inbound cargo. As noted earlier, TSA is currently conducting air cargo vulnerability assessments at Category X airports, but is not including inbound cargo in these assessments. While TSA has plans to conduct vulnerability assessments as part of its risk-based approach to securing inbound cargo, the agency has not established a time frame for doing so. Such assessments could provide information on the potential vulnerabilities posed by the transport of inbound cargo. We reported in April 2007 that TSA officials stated that they would conduct vulnerability assessments of inbound cargo after they had assessed the vulnerability of domestic cargo. 
Nevertheless, TSA officials acknowledged that vulnerabilities to inbound cargo exist and that these vulnerabilities are in some cases similar to those facing the domestic and outbound air cargo supply chain. Assessing the vulnerability posed by maintaining screening exemptions for inbound air cargo. TSA has not assessed the potential vulnerabilities posed by inbound air cargo screening exemptions. In April 2007, we reported on the potential vulnerabilities associated with inbound air cargo screening exemptions. Specifically, we reported that screening exemptions could pose a risk to the inbound air cargo supply chain because TSA has limited information on the background of and security risks posed by foreign freight forwarders and shippers whose cargo may fall into one of the exemption categories. We recommended that TSA assess whether existing inbound air cargo screening exemptions pose an unacceptable vulnerability to the air cargo supply chain and if necessary, address these vulnerabilities. TSA agreed with this recommendation and noted that the agency had recently revised and eliminated domestic and outbound air cargo screening exemptions. However, TSA has yet to address our recommendation for assessing inbound air cargo screening exemptions. Updating TSA’s Air Cargo Strategic Plan to address inbound cargo. As part of TSA’s risk-based approach, TSA issued an Air Cargo Strategic Plan in November 2003 that focused on securing the domestic air cargo supply chain. However, in April 2007, we reported that this plan did not include goals and objectives for securing inbound cargo, which presents different security challenges than cargo transported domestically. To ensure that a comprehensive strategy for securing inbound cargo exists, we recommended that DHS develop a risk-based strategy to address inbound cargo security that should define TSA’s and CBP’s responsibilities for ensuring the security of inbound cargo. 
In response to our recommendation, CBP issued its International Air Cargo Security Strategic Plan in June 2007. While this plan identifies how CBP will partner with TSA, it does not specifically address TSA’s responsibilities in securing inbound cargo. According to TSA officials, the agency plans to revise its Air Cargo Strategic Plan in the fall of 2008, and will address TSA’s strategy for securing cargo from international last points of departure, as well as its collaborative efforts with CBP to secure this cargo. Ms. Chairwoman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at this time. For further information on this testimony, please contact Cathleen Berrick at (202) 512- 3404 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Steve D. Morris, Assistant Director; Lara Kaskie; Tom Lombardi; Meg Ullengren; and Margaret Vo. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Implementing Recommendations of the 9/11 Commission Act of 2007 requires the Transportation Security Administration (TSA) to implement a system to physically screen 100 percent of cargo on passenger aircraft by August 2010. To fulfill these requirements, the Department of Homeland Security's (DHS) TSA is developing the Certified Cargo Screening Program (CCSP), which would allow the screening of cargo to occur prior to placement on an aircraft. 
This testimony addresses four challenges TSA may face in developing a system to screen 100 percent of cargo: (1) deploying effective technologies; (2) changing TSA air cargo screening exemptions; (3) allocating compliance inspection resources to oversee CCSP participants; and (4) securing cargo transported from a foreign nation to the United States. GAO's comments are based on GAO products issued from October 2005 through February 2008, including selected updates conducted in July 2008. DHS has taken steps to develop and test technologies for screening and securing air cargo; however, TSA has not completed assessments of the technologies it plans to use as part of the CCSP. TSA has reported that there are several challenges that must be overcome to effectively implement any of these technologies, including the nature, type, and size of cargo to be screened and the location of air cargo facilities. In addition, the air cargo industry voiced concern about the costs associated with purchasing the screening equipment. GAO will likely review this issue in future work. TSA plans to revise and eliminate screening exemptions for some categories of air cargo, thereby reducing the percentage of cargo transported on passenger aircraft that is subject to alternative methods of screening. However, TSA plans to continue to exempt some types of domestic and outbound cargo (cargo transported by air from the United States to a foreign location) after August 2010. TSA based its determination regarding the changing of exemptions on professional judgment and the results of air cargo vulnerability assessments. However, TSA has not completed all of its air cargo vulnerability assessments, which would further inform its efforts. TSA officials stated there may not be enough compliance inspectors to oversee implementation of the CCSP and is anticipating requesting an additional 150 inspectors for fiscal year 2010. 
They further stated that they have not formally assessed the number of inspectors the agency will need. Without such an assessment, TSA may not be able to ensure that CCSP entities are meeting TSA requirements to screen and secure cargo. To ensure that existing air cargo security requirements are being implemented as required, TSA conducts audits, referred to as compliance inspections, of air carriers that transport cargo. The compliance inspections range from a comprehensive review of the implementation of all security requirements to a review of at least one security requirement by an air carrier or freight forwarder (which consolidates cargo from many shippers and takes it to air carriers for transport). GAO reported in October 2005 that TSA had conducted compliance inspections on fewer than half of the estimated 10,000 freight forwarders nationwide and, of those, had found violations in over 40 percent of them. GAO also reported that TSA had not analyzed the results of compliance inspections to systematically target future inspections. GAO reported in April 2007 that more work remains for TSA to strengthen the security of cargo transported from a foreign nation to the United States, referred to as inbound air cargo. Although TSA is developing a system to screen 100 percent of domestic and outbound cargo, TSA officials stated that the agency does not plan to include inbound cargo because it does not impose its security requirements on foreign countries. TSA officials said that vulnerabilities to inbound air cargo exist and that these vulnerabilities are in some cases similar to those of domestic air cargo, but stated that each foreign country has its own security procedures for flights coming into the United States.
Since the 1960s, geostationary and polar-orbiting operational environmental satellites have been used by the United States to provide meteorological data for weather observation, research, and forecasting. NOAA's National Environmental Satellite Data and Information Service (NESDIS) is responsible for managing the existing civilian geostationary and polar-orbiting satellite systems as two separate programs, called the Geostationary Operational Environmental Satellites (GOES) and the Polar Operational Environmental Satellites (POES), respectively. The Air Force is responsible for operating a second polar-orbiting environmental satellite system—the Defense Meteorological Satellite Program (DMSP). Polar-orbiting environmental satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather 3 or more days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate their effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. Figure 1 illustrates the current operational polar satellite configuration consisting of two POES and two DMSP satellites. Unlike polar-orbiting satellites, which constantly circle the earth in a relatively low polar orbit, geostationary satellites can maintain a constant view of the earth from a high orbit of about 22,300 miles in space. NOAA operates GOES as a two-satellite system that is primarily focused on the United States (see fig. 2).
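The ~22,300-mile figure cited above is what makes a geostationary orbit possible: at that altitude, the orbital period matches the earth's rotation, so the satellite stays fixed over one longitude. The report does not include this calculation; the sketch below is an illustrative check using Kepler's third law, with standard published constants.

```python
import math

# Illustrative check (not from the report): at roughly 22,300 miles up,
# Kepler's third law gives an orbital period of about one day, which is
# why a geostationary satellite appears fixed over one spot on the earth.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6.371e6    # mean Earth radius, m
MILES_TO_M = 1609.344

def orbital_period_s(altitude_miles):
    """Circular-orbit period (seconds) for an altitude above Earth's surface."""
    a = EARTH_RADIUS_M + altitude_miles * MILES_TO_M  # semi-major axis, m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

period_hours = orbital_period_s(22_300) / 3600
# period_hours comes out close to 24 hours -- one rotation of the earth
```

By contrast, the "relatively low polar orbit" mentioned above (a few hundred miles) yields a period of under two hours, which is why polar satellites sweep the whole globe rather than hold station.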
These satellites are uniquely positioned to provide timely environmental data to meteorologists and their audiences on the earth’s atmosphere, its surface, cloud cover, and the space environment. They also observe the development of hazardous weather, such as hurricanes and severe thunderstorms, and track their movement and intensity to reduce or avoid major losses of property and life. Furthermore, the satellites’ ability to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA’s weather forecasting operations. Satellite acquisition programs are often technically complex and risky undertakings, and as a result, they often experience technical problems, cost overruns, and schedule delays. We and others have reported on a historical pattern of repeated missteps in the procurement of major satellite systems, including NPOESS, the GOES I-M series, the Air Force’s Space Based Infrared System High Program (SBIRS-High), and the Air Force’s Advanced Extremely High Frequency Satellite System (AEHF). Table 1 lists key problems experienced with these programs. While each of the programs faced multiple problems, all of them experienced insufficient maturity of technologies, overly aggressive schedules, insufficient subcontract management, and inadequate system engineering capabilities for overseeing contractors. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2026. 
To manage this program, DOD, NOAA, and NASA formed a tri-agency Integrated Program Office, located within NOAA. Within the program office, each agency has the lead on certain activities: NOAA has overall program management responsibility for the converged system and for satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. The NPOESS program office is overseen by an Executive Committee, which is made up of the Administrators of NOAA and NASA and the Under Secretary of the Air Force. NPOESS is a major system acquisition that was originally estimated to cost about $6.5 billion over the 24-year life of the program from its inception in 1995 through 2018. The program was to provide satellite development, satellite launch and operation, and ground-based satellite data processing. When the NPOESS engineering, manufacturing, and development contract was awarded in August 2002, the estimated cost was $7 billion. Acquisition plans called for the procurement and launch of six satellites over the life of the program, as well as the integration of 13 instruments—consisting of 10 environmental sensors and 3 subsystems (see table 2). In addition, a demonstration satellite (called the NPOESS Preparatory Project or NPP) was planned to be launched several years before the first NPOESS satellite in order to reduce the risk associated with launching new sensor technologies and to ensure continuity of climate data with NASA's Earth Observing System satellites.

NPOESS Experienced Cost Increases, Schedule Delays, and Technical Problems over Several Years

Over the last few years, NPOESS experienced continued cost increases and schedule delays, requiring difficult decisions to be made about the program's direction and capabilities.
In 2003, we reported that changes in the NPOESS funding stream led the program to develop a new program cost and schedule baseline. After this new baseline was completed in 2004, we reported that the program office increased the NPOESS cost estimate from about $7 billion to $8.1 billion, delaying key milestones, including the launch of the first satellite, and extending the life of the program until 2020. In mid-November 2005, we reported that NPOESS continued to experience problems in the development of a key sensor, resulting in schedule delays and anticipated cost increases. This was due, in part, to problems at multiple levels of management—including subcontractor, contractor, program office, and executive leadership. Recognizing that the budget for the program was no longer executable, the NPOESS Executive Committee planned to make a decision in December 2005 on the future direction of the program—what would be delivered, at what cost, and by when. This involved deciding among options involving increased costs, delayed schedules, and reduced functionality. We noted that continued oversight, strong leadership, and timely decision making were more critical than ever, and we urged the committee to make a decision quickly so that the program could proceed. However, we subsequently reported that, in late November 2005, NPOESS cost growth exceeded a legislatively mandated threshold that requires DOD to certify the program to Congress. This placed any decision about the future direction of the program on hold until the certification took place in June 2006. In the meantime, the program office implemented an interim program plan for fiscal year 2006 to continue work on key sensors and other program elements using fiscal year 2006 funding. The Nunn-McCurdy law requires DOD to take specific actions when a major defense acquisition program exceeds certain cost increase thresholds.
The law requires the Secretary of Defense to notify Congress when a major defense acquisition is expected to overrun its project baseline by 15 percent or more and to certify the program to Congress when it is expected to overrun its baseline by 25 percent or more. In late November 2005, NPOESS exceeded the 25 percent threshold, and DOD was required to certify the program. Certifying the program entailed providing a determination that (1) the program is essential to national security, (2) there are no alternatives to the program that will provide equal or greater military capability at less cost, (3) the new estimates of the program’s cost are reasonable, and (4) the management structure for the program is adequate to manage and control costs. DOD established tri-agency teams—made up of DOD, NOAA, and NASA experts—to work on each of the four elements of the certification process. In June 2006, DOD (with the agreement of both of its partner agencies) certified a restructured NPOESS program, estimated to cost $12.5 billion through 2026. This decision approved a cost increase of $4 billion over the prior approved baseline cost and delayed the launch of NPP and the first two satellites by roughly 3 to 5 years. The new program also entailed establishing a stronger program management structure, reducing the number of satellites to be produced and launched from 6 to 4, and reducing the number of instruments on the satellites from 13 to 9—consisting of 7 environmental sensors and 2 subsystems. It also entailed using NPOESS satellites in the early morning and afternoon orbits and relying on European satellites for midmorning orbit data. Table 3 summarizes the major program changes made under the Nunn- McCurdy certification decision. 
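The two Nunn-McCurdy thresholds described above reduce to a simple classification of cost growth against the approved baseline. The sketch below is a simplified illustration of that logic (the statute itself has additional nuances, such as distinct baseline definitions, that are omitted here); the dollar figures in the usage example are hypothetical, not the actual NPOESS numbers.

```python
def nunn_mccurdy_status(baseline_cost, current_estimate):
    """Classify cost growth against the two Nunn-McCurdy thresholds
    described above: notify Congress at a 15 percent overrun, certify
    the program at 25 percent. A simplified sketch of the statute."""
    growth = (current_estimate - baseline_cost) / baseline_cost
    if growth >= 0.25:
        return "certify"          # the situation NPOESS reached in late 2005
    if growth >= 0.15:
        return "notify"
    return "within baseline"

# Hypothetical figures for illustration: a $7.0B baseline growing to a
# $9.0B estimate is a ~29 percent overrun, crossing the certification line.
status = nunn_mccurdy_status(7.0, 9.0)
```

This is why the late-November 2005 breach froze the program's direction: once the 25 percent threshold is crossed, major decisions wait on the four-part certification described above.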
The Nunn-McCurdy certification decision established new milestones for the delivery of key program elements, including launching NPP by January 2010, launching the first NPOESS satellite (called C1) by January 2013, and launching the second NPOESS satellite (called C2) by January 2016. These revised milestones deviated from prior plans to have the first NPOESS satellite available to back up the final POES satellite should anything go wrong during that launch. Delaying the launch of the first NPOESS satellite means that if the final POES satellite fails on launch, satellite data users would need to rely on the existing constellation of environmental satellites until NPP data becomes available—almost 2 years later. Although NPP was not intended to be an operational asset, NASA agreed to move it to a different orbit so that its data would be available in the event of a premature failure of the final POES satellite. However, NPP will not provide all of the operational capability planned for the NPOESS spacecraft. If the health of the existing constellation of satellites diminishes—or if NPP data is not available, timely, and reliable— then there could be a gap in environmental satellite data. In order to reduce program complexity, the Nunn-McCurdy certification decision decreased the number of NPOESS sensors from 13 to 9 and reduced the functionality of 4 sensors. Specifically, of the 13 original sensors, 5 sensors remain unchanged, 3 were replaced with less capable sensors, 1 was modified to provide less functionality, and 4 were cancelled. Table 4 shows the changes to NPOESS sensors, including the 4 identified as critical sensors. The changes in NPOESS sensors affected the number and quality of the resulting weather and environmental products, called environmental data records or EDRs. 
In selecting sensors for the restructured program, the agencies placed the highest priority on continuing current operational weather capabilities and a lower priority on obtaining selected environmental and climate measuring capabilities. As a result, the revised NPOESS system has significantly less capability for providing global climate measures than was originally planned. Specifically, the number of EDRs was decreased from 55 to 39, of which 6 are of a reduced quality. The 39 EDRs that remain include cloud base height, land surface temperature, precipitation type and rate, and sea surface winds. The 16 EDRs that were removed include cloud particle size and distribution, sea surface height, net solar radiation at the top of the atmosphere, and products to depict the electric fields in the space environment. The 6 EDRs that are of a reduced quality include ozone profile, soil moisture, and multiple products depicting energy in the space environment. Since the June 2006 decision to revise the scope, cost, and schedule of the NPOESS program, the program office has made progress in restructuring the satellite acquisition; however, important tasks remain to be done. Restructuring a major acquisition program like NPOESS is a process that involves identifying time-critical and high-priority work and keeping this work moving forward, while reassessing development priorities, interdependencies, deliverables, risks, and costs. It also involves revising important acquisition documents including the memorandum of agreement on the roles and responsibilities of the three agencies, the acquisition strategy, the system engineering plan, the test and evaluation master plan, the integrated master schedule defining what needs to happen by when, and the acquisition program baseline. Specifically, the Nunn-McCurdy certification decision required the Secretaries of Defense and Commerce and the Administrator of NASA to sign a revised memorandum of agreement by August 6, 2006.
It also required that the program office, Program Executive Officer, and the Executive Committee revise and approve key acquisition documents including the acquisition strategy and system engineering plan by September 1, 2006, in order to proceed with the restructuring. Once these are completed, the program office can proceed to negotiate with its prime contractor on a new program baseline defining what will be delivered, by when, and at what cost. The NPOESS program office has made progress in restructuring the acquisition. Specifically, the program office has established interim program plans guiding the contractor’s work activities in 2006 and 2007 and has made progress in implementing these plans. The program office and contractor also developed an integrated master schedule for the remainder of the program—beyond fiscal year 2007. This integrated master schedule details the steps leading up to launching NPP by September 2009, launching the first NPOESS satellite in January 2013, and launching the second NPOESS satellite in January 2016. Near-term steps include completing and testing the VIIRS, CrIS, and OMPS sensors; integrating these sensors with the NPP spacecraft and completing integration testing; completing the data processing system and integrating it with the command, control, and communications segment; and performing advanced acceptance testing of the overall system of systems for NPP. However, key steps remain for the acquisition restructuring to be completed. Although the program office made progress in revising key acquisition documents, including the system engineering plan, the test and evaluation master plan, and the acquisition strategy plan, it has not yet obtained the approval of the Secretaries of Commerce and Defense and the Administrator of NASA on the memorandum of agreement among the three agencies, nor has it obtained the approval of the NPOESS Executive Committee on the other key acquisition documents. 
As of June 2007, these approvals are over 9 months past due. Agency officials noted that the September 1, 2006, due date for the key acquisition documents was not realistic given the complexity of coordinating documents among three different agencies. Finalizing these documents is critical to ensuring interagency agreement and will allow the program office to move forward in completing other activities related to restructuring the program. These other activities include completing an integrated baseline review with the contractor to reach agreement on the schedule and work activities, and finalizing changes to the NPOESS development and production contract. Program costs are also likely to be adjusted during upcoming negotiations on contract changes—an event that the Program Director expects to occur in July 2007. Completion of these activities will allow the program office to lock down a new acquisition baseline cost and schedule. Until key acquisition documents are finalized and approved, the program faces increased risk that it will not be able to complete important restructuring activities in time to move forward in fiscal year 2008 with a new program baseline in place. This places the NPOESS program at risk of continued delays and future cost increases. The NPOESS program has made progress in establishing an effective management structure, but—almost a year after this structure was endorsed during the Nunn-McCurdy certification process—the Integrated Program Office still faces staffing problems. Over the past few years, we and others have raised concerns about management problems at all levels of the NPOESS program, including subcontractor and contractor management, program office management, and executive-level management. Two independent review teams also noted a shortage of skilled program staff, including budget analysts and system engineers. 
Since that time, the NPOESS program has made progress in establishing an effective management structure—including establishing a new organizational framework with increased oversight by program executives, instituting more frequent subcontractor, contractor, and program reviews, and effectively managing risks and performance. However, DOD's plans for reassigning the Program Executive Officer in the summer of 2007 increase the program's risks. Additionally, the program lacks a staffing process that clearly identifies staffing needs, gaps, and plans for filling those gaps. As a result, the program office has experienced delays in getting core management activities under way and lacks the staff it needs to execute day-to-day management activities.

NPOESS Program Has Made Progress in Establishing an Effective Management Structure and Increasing Oversight Activities, but Executive Turnover Will Increase Program Risks

The NPOESS program has made progress in establishing an effective management structure and increasing the frequency and intensity of its oversight activities. Over the past few years, we and others have raised concerns about management problems at all levels of the NPOESS program, including subcontractor and contractor management, program office management, and executive-level management. In response to recommendations made by two different independent review teams, the program office began exploring options in late 2005 and early 2006 for revising its management structure. In November 2005, the Executive Committee established and filled a Program Executive Officer position, senior to the NPOESS Program Director, to streamline decision making and to provide oversight to the program. This Program Executive Officer reports directly to the Executive Committee.
Subsequently, the Program Executive Officer and the Program Director proposed a revised organizational framework that realigned division managers within the Integrated Program Office responsible for overseeing key elements of the acquisition and increased staffing in key areas. In June 2006, the Nunn-McCurdy certification decision approved this new management structure and the Integrated Program Office implemented it. Figure 3 provides an overview of the relationships among the Integrated Program Office, the Program Executive Office, and the Executive Committee, as well as key divisions within the program office. Operating under this new management structure, the program office implemented more rigorous and frequent subcontractor, contractor, and program reviews, improved visibility into risk management and mitigation activities, and institutionalized the use of earned value management techniques to monitor contractor performance. In addition to these program office activities, the Program Executive Officer implemented monthly program reviews and increased the frequency of contacts with the Executive Committee. The Program Executive Officer briefs the Executive Committee in monthly letters, apprising committee members of the program’s status, progress, risks, and earned value, and the Executive Committee now meets on a quarterly basis—whereas in the recent past, we reported that the Executive Committee had met only five times in 2 years. Although the NPOESS program has made progress in establishing an effective management structure, this progress is currently at risk. We recently reported that DOD space acquisitions are at increased risk due in part to frequent turnover in leadership positions, and we suggested that addressing this will require DOD to consider matching officials’ tenure with the development or delivery of a product. 
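The earned value management techniques mentioned above rest on a pair of standard indices that compare the value of work actually completed against the plan and against money spent. The definitions below are the conventional EVM formulas, not figures or methods taken from the report, and the dollar amounts in the example are hypothetical.

```python
def evm_indices(planned_value, earned_value, actual_cost):
    """Standard earned value management indices used to monitor contractor
    performance: a CPI below 1.0 signals a cost overrun, and an SPI below
    1.0 signals schedule slippage."""
    cpi = earned_value / actual_cost     # cost performance index
    spi = earned_value / planned_value   # schedule performance index
    return cpi, spi

# Hypothetical example: $80M of work completed against $100M of work
# planned to date, with $90M actually spent.
cpi, spi = evm_indices(100.0, 80.0, 90.0)
# cpi is below 1.0 (over cost) and spi is below 1.0 (behind schedule)
```

Tracking these indices month to month is what lets the Program Executive Officer's letters to the Executive Committee flag cost and schedule trouble early, rather than after a baseline breach.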
In March 2007, NPOESS program officials stated that DOD is planning to reassign the recently appointed Program Executive Officer in the summer of 2007 as part of this executive's natural career progression. As of June 2007, the Program Executive Officer has held this position for 19 months. Given that the program is currently still being restructured, and that there are significant challenges in being able to meet critical deadlines to ensure satellite data continuity, such a move adds unnecessary risk to an already risky program. The NPOESS program office has filled key vacancies but lacks a staffing process that identifies programwide staffing requirements and plans for filling those needed positions. Sound human capital management calls for establishing a process or plan for determining staffing requirements, identifying any gaps in staffing, and planning to fill critical staffing gaps. Program office staffing is especially important for NPOESS, given the acknowledgment by multiple independent review teams that staffing shortfalls contributed to past problems. Specifically, these review teams noted shortages in the number of system engineers needed to provide adequate oversight of subcontractor and contractor engineering activities and in the number of budget and cost analysts needed to assess contractor cost and earned value reports. To rectify this situation, the June 2006 certification decision directed the Program Director to take immediate actions to fill vacant positions at the program office with the approval of the Program Executive Officer. Since the June 2006 decision to revise the NPOESS management structure, the program office has filled multiple critical positions, including a budget officer, a chief system engineer, an algorithm division chief, and a contracts director. In addition, on an ad hoc basis, individual division managers have assessed their needs and initiated plans to hire staff for key positions.
However, the program office lacks a programwide process for identifying and filling all needed positions. As a result, division managers often wait months for critical positions to be filled. For example, in February 2006, the NPOESS program estimated that it needed to hire up to 10 new budget analysts. As of September 2006, none of these positions had been filled. As of April 2007, program officials estimated that they still needed to fill 5 budget analyst positions, 5 systems engineering positions, and 10 technical manager positions. The majority of the vacancies—4 of the 5 budget positions, 4 of the 5 systems engineering positions, and 8 of the 10 technical manager positions— are to be provided by NOAA. NOAA officials noted that each of these positions is in some stage of being filled—that is, recruitment packages are being developed or reviewed, vacancies are being advertised, or candidates are being interviewed, selected, and approved. The program office attributes its staffing delays to not having the right personnel in place to facilitate this process, and it did not even begin to develop a staffing process until November 2006. Program officials noted that the tri-agency nature of the program adds unusual layers of complexity to the hiring and administrative functions because each agency has its own hiring and performance management rules. In November 2006, the program office brought in an administrative officer who took the lead in pulling together the division managers’ individual assessments of needed staff and has been working with the division managers to refine this list. This new administrative officer plans to train division managers in how to assess their needs and to hire needed staff, and to develop a process by which evolving needs are identified and positions are filled. However, there is as yet no date set for establishing this basic programwide staffing process. 
As a result of the lack of a programwide staffing process, there has been an extended delay in determining what staff is needed and in bringing those staff on board; this has resulted in delays in performing core activities, such as establishing the program office’s cost estimate and bringing in needed contracting expertise. Additionally, until a programwide staffing process is in place, the program office risks not having the staff it needs to execute day-to-day management activities. In commenting on a draft of our report, Commerce stated that NOAA implemented an accelerated hiring model. More recently, the NPOESS program office reported that several critical positions were filled in April and May 2007. However, we have not yet evaluated NOAA’s accelerated hiring model and, as of June 2007, about 10 key positions remained to be filled. Major segments of the NPOESS program—the space segment and ground systems segment—are under development; however, significant problems have occurred and risks remain. The program office is aware of these risks and is working to mitigate them, but continued problems could affect the program’s overall cost and schedule. Given the tight time frames for completing key sensors, integrating them on the NPP spacecraft, and developing, testing, and deploying the ground-based data processing systems, it will be important for the NPOESS Integrated Program Office, the Program Executive Office, and the Executive Committee to continue to provide close oversight of milestones and risks. The space segment includes the sensors and the spacecraft. Four sensors are of critical importance—VIIRS, CrIS, OMPS, and ATMS— because they are to be launched on the NPP satellite in September 2009. Initiating work on another sensor, the Microwave imager/sounder, is also important because this new sensor— replacing the cancelled CMIS sensor—will need to be developed in time for the second NPOESS satellite launch. 
Over the past year, the program made progress on each of the sensors and the spacecraft. However, two sensors, VIIRS and CrIS, have experienced major problems. The status of each of the components of the space segment is described in table 5. Managing the risks associated with the development of VIIRS and CrIS is of particular importance because these components are to be demonstrated on the NPP satellite, currently scheduled for launch in September 2009. Any delay in the NPP launch date could affect the overall NPOESS program, because the success of the program depends on the lessons learned in data processing and system integration from the NPP satellite. Additionally, continued sensor problems could lead to higher final program costs. Development of the ground segment—which includes the interface data processing system, the ground stations that are to receive satellite data, and the ground-based command, control, and communications system—is under way and on track. However, important work pertaining to developing the algorithms that translate satellite data into weather products within the integrated data processing segment remains to be completed. Table 6 describes each of the components of the ground segment and identifies the status of each. Managing the risks associated with the development of the IDPS system is of particular importance because this system will be needed to process NPP data. 
Because of the importance of effectively managing the NPOESS program to ensure that there are no gaps in the continuity of critical weather and environmental observations, in our April 2007 report, we made recommendations to the Secretaries of Defense and Commerce and to the Administrator of NASA to ensure that the responsible executives within their respective organizations approve key acquisition documents, including the memorandum of agreement among the three agencies, the system engineering plan, the test and evaluation master plan, and the acquisition strategy, as quickly as possible but no later than April 30, 2007. We also recommended that the Secretary of Defense direct the Air Force to delay reassigning the recently appointed Program Executive Officer until all sensors have been delivered to the NPOESS Preparatory Project; these deliveries are currently scheduled to occur by July 2008. We also made two additional recommendations to the Secretary of Commerce to (1) develop and implement a written process for identifying and addressing human capital needs and for streamlining how the program handles the three different agencies' administrative procedures and (2) establish a plan for immediately filling needed positions. In written comments, all three agencies agreed that it was important to finalize key acquisition documents in a timely manner, and DOD proposed extending the due dates for the documents to July 2, 2007. DOD subsequently extended the due dates to September and October 2007 and, in the case of the test and evaluation master plan, March 2008. Because the NPOESS program office intends to complete contract negotiations in July 2007, we remain concerned that any further delays in approving the documents could delay contract negotiations and thus increase the risk to the program.
In addition, the Department of Commerce agreed with our recommendation to develop and implement a written process for identifying and addressing human capital needs and to streamline how the program handles the three different agencies’ administrative procedures. The department also agreed with our recommendation to plan to immediately fill open positions at the NPOESS program office. Commerce noted that NOAA identified the skill sets needed for the program and has implemented an accelerated hiring model and schedule to fill all NOAA positions in the NPOESS program. Commerce also noted that NOAA has made NPOESS hiring a high priority and has documented a strategy— including milestones—to ensure that all NOAA positions are filled by June 2007. DOD did not concur with our recommendation to delay reassigning the Program Executive Officer, noting that the NPOESS System Program Director responsible for executing the acquisition program would remain in place for 4 years. The Department of Commerce also noted that the Program Executive Officer position is planned to rotate between the Air Force and NOAA. Commerce also stated that a selection would be made before the departure of the current Program Executive Officer to provide an overlap period to allow for knowledge transfer and ensure continuity. However, over the last few years, we and others (including an independent review team and the Commerce Inspector General) have reported that ineffective executive-level oversight helped foster the NPOESS program’s cost and schedule overruns. We remain concerned that reassigning the Program Executive at a time when NPOESS is still facing critical cost, schedule, and technical challenges will place the program at further risk. In addition, while it is important that the System Program Director remain in place to ensure continuity in executing the acquisition, this position does not ensure continuity in the functions of the Program Executive Officer. 
The current Program Executive Officer is experienced in providing oversight of the progress, issues, and challenges facing NPOESS and coordinating with Executive Committee members as well as the Defense acquisition authorities. Additionally, while the Program Executive Officer position is planned to rotate between agencies, the memorandum of agreement documenting this arrangement is still in draft and should be flexible enough to allow the current Program Executive Officer to remain until critical risks have been addressed. Further, while Commerce plans to allow a period of overlap between the selection of a new Program Executive Officer and the departure of the current one, time is running out. The current Program Executive Officer is expected to depart in early July 2007, and as of that time, a successor had not yet been named. NPOESS is an extremely complex acquisition, involving three agencies, multiple contractors, and advanced technologies. There is not sufficient time to transfer knowledge and develop the sound professional working relationships that the new Program Executive Officer will need to succeed in that role. Thus, we remain convinced that, given the NPOESS program’s current challenges, reassigning the current Program Executive Officer at this time is not appropriate. To provide continuous satellite coverage, NOAA acquires several satellites at a time as part of a series and launches new satellites every few years (see table 7). To date, NOAA has procured three series of GOES satellites and is planning to acquire a fourth series, called GOES-R. In 1970, NOAA initiated its original GOES program based on experimental geostationary satellites developed by NASA. While these satellites operated effectively for many years, they had technical limitations. For example, this series of satellites was “spin-stabilized,” meaning that the satellites slowly spun while in orbit to maintain a stable position with respect to the earth. 
As a result, the satellite viewed the earth only about 5 percent of the time and had to collect data very slowly, capturing one narrow band of data each time its field-of-view swung past the earth. A complete set of sounding data took 2 to 3 hours to collect. In 1985, NOAA and NASA began to procure a new generation of GOES, called the GOES I-M series, based on a set of requirements developed by NOAA’s National Weather Service, NESDIS, and NASA, among others. GOES I-M consisted of five satellites, GOES-8 through GOES-12, and was a significant improvement in technology from the original GOES satellites. For example, GOES I-M was “body-stabilized,” meaning that the satellite held a fixed position in orbit relative to the earth, thereby allowing for continuous meteorological observations. Instead of maintaining stability by spinning, the satellite would preserve its fixed position by continuously making small adjustments in the rotation of internal momentum wheels or by firing small thrusters to compensate for drift. These and other enhancements meant that the GOES I-M satellites would be able to collect significantly better quality data more quickly than the older series of satellites. In 1998, NOAA began the procurement of satellites to follow GOES I-M, called the GOES-N series. This series used existing technologies for the instruments and added system upgrades, including an improved power subsystem and enhanced satellite pointing accuracy. Furthermore, the GOES-N satellites were designed to operate longer than their predecessors. This series originally consisted of four satellites, GOES-N through GOES-Q. However, the option for the GOES-Q satellite was cancelled based on NOAA’s assessment that it would not need the final satellite to continue weather coverage. 
In particular, the agency found that the GOES satellites already in operation were lasting longer than expected and that the first satellite in the next series could be available to back up the last of the GOES-N satellites. As noted earlier, the first GOES-N series satellite—GOES-13—was launched in May 2006. The GOES-O and GOES-P satellites are currently in production and are expected to be launched in July 2008 and July 2011, respectively. NOAA is currently planning to procure the next series of GOES satellites, called the GOES-R series. NOAA is planning for the GOES-R program to improve on the technology of prior GOES series, both in terms of system and instrument improvements. The system improvements are expected to fulfill more demanding user requirements and to provide more rapid information updates. Table 8 highlights key system-related improvements that GOES-R is expected to make to the geostationary satellite program. The instruments on the GOES-R series are expected to increase the clarity and precision of the observed environmental data. Originally, NOAA planned to acquire five different instruments. The program office considered two of the instruments—the Advanced Baseline Imager and the Hyperspectral Environmental Suite—to be the most critical because they would provide data for key weather products. Table 9 summarizes the originally planned instruments and their expected capabilities. After our report was issued, NOAA officials told us that the agency decided to cancel its plans for the development of the Hyperspectral Environmental Suite, but expected to explore options to ensure the continuity of data provided by the current GOES series. Additionally, NOAA reduced the number of satellites in the GOES-R series from four to two. NOAA is nearing the end of the preliminary design phase of its GOES-R system, which was initially estimated to cost $6.2 billion and scheduled to have the first satellite ready for launch in 2012. 
At the time of our most recent review in September 2006, NOAA had issued contracts for the preliminary design of the overall GOES-R system to three vendors and expected to award a contract to one of these vendors in August 2007 to develop the satellites. In addition, to reduce the risks associated with developing new instruments, NOAA issued contracts for the early development of two instruments and for the preliminary designs of three other instruments. However, analyses of the GOES-R program cost—which in May 2006 the program office estimated could reach $11.4 billion—led the agency to consider reducing the scope of requirements for the satellite series. In September 2006, NOAA officials reported that the agency had made a decision to reduce the scope and complexity of the GOES-R program by reducing the number of satellites from four to two and canceling a technically complex instrument—called the Hyperspectral Environmental Suite. As of July 2007, agency officials reported that they are considering further changes to the scope of the program, which are likely to affect the overall program cost. We have work under way to evaluate these changes. NOAA has taken steps to implement lessons learned from past satellite programs, but more remains to be done. As outlined previously, key lessons from these programs include the need to (1) establish realistic cost and schedule estimates, (2) ensure sufficient technical readiness of the system’s components prior to key decisions, (3) provide sufficient management at government and contractor levels, and (4) perform adequate senior executive oversight to ensure mission success. NOAA established plans to address these lessons by conducting independent cost estimates, performing preliminary studies of key technologies, placing resident government offices at key contractor locations, and establishing a senior executive oversight committee. However, many steps remain to fully address these lessons. 
Specifically, at the time of our review, NOAA had not yet developed a process to evaluate and reconcile the independent and government cost estimates. In addition, NOAA had not yet determined how it will ensure that a sufficient level of technical maturity will be achieved in time for an upcoming decision milestone, nor had it determined the appropriate level of resources it needs to adequately track and oversee the program using earned value management. Until it completes these activities, NOAA faces an increased risk that the GOES-R program will repeat the increased cost, schedule delays, and performance shortfalls that have plagued past procurements. To improve NOAA’s ability to effectively manage the GOES-R procurement, in our September 2006 report, we made recommendations to the Secretary of Commerce to direct its NOAA Program Management Council to establish a process for objectively evaluating and reconciling the government and independent life cycle cost estimates once the program requirements are finalized; to establish a team of system engineering experts to perform a comprehensive review of the Advanced Baseline Imager instrument to determine the level of technical maturity achieved on the instrument before moving the instrument into production; and to seek assistance in determining the appropriate levels of resources needed at the program office to adequately track and oversee the contractor’s earned value management data. In written comments at that time, the Department of Commerce agreed with our recommendations and provided information on its plans to implement our recommendations. In summary, both the NPOESS and GOES-R programs are critical to developing weather forecasts, issuing severe weather warnings for events such as hurricanes, and maintaining continuity in environmental and climate monitoring. 
Over the last several years, the NPOESS program experienced cost, schedule, and technical problems, but has now been restructured and is making progress. Still, technical and programmatic risks remain. The GOES-R program has incorporated lessons from other satellite acquisitions, but still faces challenges in establishing the management capabilities it needs and in determining the scope of the program. We have work under way to evaluate the progress and risks of both NPOESS and GOES-R in order to assist with congressional oversight of these critical programs. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the committee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286 or by e-mail at [email protected]. Other key contributors to this testimony include Carol Cha, Kathleen S. Lovett, and Colleen Phillips (Assistant Director).

Environmental satellites provide data and imagery that are used by weather forecasters, climatologists, and the military to map and monitor changes in weather (including severe weather such as hurricanes), climate, the oceans, and the environment. 
Two current acquisitions are the $12.5 billion National Polar-orbiting Operational Environmental Satellite System (NPOESS) program--which is to replace two existing polar-orbiting environmental satellite systems--and the planned $7 billion Geostationary Operational Environmental Satellites-R (GOES-R) program, which is to replace the current series of satellites due to reach the end of their useful lives in approximately 2012. GAO was asked to summarize its past work on the progress and challenges facing these key environmental satellite acquisitions. Both the NPOESS and GOES-R satellite acquisitions are costly, technically complex, and critically important to weather forecasting and climate monitoring. NPOESS was originally estimated to cost about $6.5 billion over the 24-year life of the program, with its first satellite launch planned for April 2009. Over the last few years, NPOESS experienced escalating costs, schedule delays, and technical difficulties. These factors led to a June 2006 decision to restructure the program, thereby decreasing the program's complexity by reducing the number of sensors and satellites, increasing its estimated cost to $12.5 billion, and delaying the launches of the first two satellites to 2013 and 2016. Since that time, the program office has made progress in restructuring the satellite acquisition and establishing an effective management structure; however, important tasks remain to be done and significant risks persist. The GOES-R acquisition, originally estimated to cost $6.2 billion and scheduled to have the first satellite ready for launch in 2012, is at a much earlier stage in its life cycle than NPOESS. In September 2006, GAO reported that the National Oceanic and Atmospheric Administration (NOAA) had issued contracts for the preliminary design of the overall GOES-R system to three vendors and expected to award a contract to one of these vendors in August 2007 to develop the satellites. 
However, analyses of GOES-R cost--which in May 2006 was estimated to reach $11.4 billion--led the agency, in September 2006, to reduce the program's scope from four to two satellites and to discontinue one of the critical sensors. Program officials now report that they are reevaluating that decision and may further revise the scope and requirements of the program in coming months. GAO also reported that NOAA had taken steps to implement lessons learned from past satellite programs, but more remained to be done to ensure sound cost estimates and adequate system engineering capabilities. GAO currently has work under way to evaluate GOES-R risks and challenges.
Military ranges and training areas are used primarily to test weapons systems and train military forces. Required facilities include air ranges for air-to-air, air-to-ground, drop zone, and electronic combat training; live-fire ranges for artillery, armor (e.g., tanks), small arms, and munitions training; ground maneuver ranges to conduct realistic force-on-force and live-fire training at various unit levels; and sea ranges to conduct ship maneuvers for training. According to DOD officials, a slow but steady increase in encroachment problems has limited the use of training facilities, and the gradual accumulation of these problems increasingly threatens training readiness. DOD has identified eight encroachment issues: Designation of critical habitat under the Endangered Species Act of 1973. Under the act, agencies are required to ensure that their actions do not destroy or adversely modify habitat that has been designated for endangered or threatened species. Currently, over 300 such species are found on military installations. Application of environmental statutes to military munitions. DOD believes that the Environmental Protection Agency could apply environmental statutes to the use of military munitions, shutting down or disrupting military training. According to DOD officials, uncertainties about the future application and enforcement of these statutes limit the officials’ ability to plan, program, and budget for compliance requirements. Competition for frequency spectrum. The telecommunications industry is pressing for the reallocation of some of the radio frequency spectrum from federal to commercial control. DOD claims that over the past decade, it has lost about 27 percent of the frequency spectrum allocated for aircraft telemetry. We previously reported that additional reallocation of spectrum could affect space systems, tactical communications, and combat training. 
Marine regulatory laws that require consultation with regulators when a proposed action may affect a protected resource. Defense officials say that the process empowers regulators to impose potentially stringent measures to protect the marine environment from the effects of proposed training. Competition for airspace. Increased airspace congestion limits pilots’ ability to train to fly as they would in combat. Clean Air Act requirements for air quality. DOD officials believe that the act requires controls over emissions generated on DOD installations. New or significant changes in range operations also require emissions analyses, and if emissions exceed specified thresholds, they must be offset with reductions elsewhere. Laws and regulations mandating noise abatement. DOD officials state that weapons systems are exempt from the Noise Control Act of 1972, but DOD must still assess the impact of noise under the National Environmental Policy Act. As community developments have expanded closer to military installations, concerns over noise from military operations have increased. DOD officials report that pressure from groups at the local, regional, and state levels can serve to restrict or reduce military training. Urban growth. DOD says that unplanned or “incompatible” commercial or residential development near training ranges compromises the effectiveness of training activities. Local residents have filed lawsuits charging that military operations lowered the value or limited the use of their property. To the extent that encroachment adversely affects training readiness, opportunities exist for the problems to be reported in departmental and military service readiness reports. The Global Status of Resources and Training System is the primary means that units use to report readiness against designed operational goals. 
The system’s database indicates, at selected points in time, the extent to which units possess the required resources and training to undertake their wartime missions. In addition, DOD is required under 10 U.S.C. 117 to prepare a quarterly readiness report to Congress. The report is based on briefings to the Senior Readiness Oversight Council, a forum assisted by the Defense Test and Training Steering Group. In June 2000, the council directed the steering group to investigate encroachment and develop and recommend a comprehensive plan of action. The secretaries of the military services are responsible for training personnel and for maintaining their respective training ranges and facilities. Within the Office of the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness develops policies, plans, and programs to ensure the readiness of the force and provides oversight on training; the Deputy Under Secretary of Defense for Installations and Environment develops policies, plans, and programs for DOD’s environmental, safety, and occupational health programs, including compliance with environmental laws, conservation of natural and cultural resources, pollution prevention, and explosive safety; and the Director, Operational Test and Evaluation, provides advice on tests and evaluations. Over time, the impact of encroachment on training ranges has gradually increased. Because most encroachment problems are caused by population growth and urban development, these problems are expected to increase in the future. Although the effects vary by service and by individual installation, encroachment has generally limited the extent to which training ranges are available or the types of training that can be conducted. This limits units’ ability to train as they would expect to fight and causes work-arounds that may limit the amount or quality of training. Installations overseas reported facing similar training constraints. 
Below are brief descriptions of some of the problems as reported by the installations and organizations we visited in the continental United States. Marine Corps Base Camp Pendleton, California. Camp Pendleton officials report encroachment problems related to endangered species and their habitat, urbanization, air space, and noise. Recently, about 10 percent of the installation has been designated as critical habitat for endangered species. Airspace restrictions limit the number of days that weapons systems can be employed, and noise restrictions limit night helicopter operations. Fort Lewis and the Yakima Training Center, Washington. Fort Lewis officials report encroachment problems related to noise, air quality, endangered species and their habitat, urbanization, frequency spectrum, and munitions and munitions components. In response to local complaints, Fort Lewis voluntarily ceased some demolitions training. Air quality regulations restrict the operation of smoke generators at Fort Lewis. Habitat considerations restrict maneuvers and off-road vehicle training in parts of both installations. There is periodic communications interference. Nellis Air Force Base and Nevada Test and Training Range, Nevada. Nellis Air Force Base has encroachment problems stemming from urbanization and noise. Nellis officials said that urban growth near the base and safety concerns have restricted the flight patterns of armed aircraft, causing mission delays and cancellations. They also report that the two installations receive a total of some 250 complaints about noise each year. Eglin Air Force Base, Florida. Eglin Air Force Base officials report encroachment problems involving endangered species habitat, noise, urban growth, and radio frequency spectrum. Eglin contains habitat for two endangered species. Aircraft must alter flight paths to avoid commercial towers and noise-sensitive areas. 
The base’s major target control system receives frequency interference from nearby commercial operators. U.S. Atlantic Fleet. Atlantic Fleet officials report encroachment problems stemming from endangered marine mammals and noise. Live- fire exercises at sea are restricted, and night live-fire training is not allowed. Naval Air Station Oceana, Virginia, is the target of frequent noise complaints. Special Operations Command. This command owns no training ranges of its own and largely depends on others for the use of their training ranges. The Navy component of the Special Operations Command reports being most directly affected by encroachment from endangered species and urban development. A variety of endangered species live on the training areas used by the Navy Special Warfare Command in California, particularly on Coronado and San Clemente islands. Because of environmental restrictions, Navy Special Warfare units can no longer practice immediate action drills on Coronado beaches; they cannot use training areas in Coronado for combat swimmer training; and they cannot conduct live-fire and maneuver exercises on much of San Clemente Island during some seasons. The Special Operations Command has previously been able to mitigate deficiencies in local training areas by traveling to alternate training sites. However, recent limitations on the amount of time that units can spend away from their home station have required new solutions. The command is requesting funding for new environmental documentation in its budget to protect assets in California and is integrating its encroachment mitigation efforts with DOD and the services. DOD and military service officials said that many encroachment issues are related to urbanization around military installations. They noted that most, if not all, encroachment issues result from population growth and urbanization and that growth around DOD installations is increasing at a rate higher than the national average. 
According to DOD officials, new residents near installations often view military activities as an infringement on their rights, and some groups have organized in efforts to reduce operations such as aircraft and munitions training. At the same time, according to one Defense Department official, the increased speed and range of weapons systems are expected to increase training range requirements. Our recent report on training limitations overseas noted that, while some restrictions are longstanding, the increase in restrictions facing U.S. forces in many cases is the result of growing commercial and residential development affecting established training areas and ranges. Despite the loss of some training range capabilities, service readiness data do not indicate that encroachment has significantly affected training readiness. Even though DOD officials have cited encroachment, in testimony and on many other occasions, as preventing the services from training as they would like, DOD’s primary readiness reporting system does not reflect the extent to which encroachment is a problem. In fact, it rarely cites training range limitations at all. Similarly, DOD’s quarterly reports to Congress, which should identify specific readiness problems, hardly ever mention encroachment as a problem. I should also note that our recent assessment of training limitations overseas (which are often greater than those found stateside) found that units abroad rarely report lower training readiness in spite of concerns cited by DOD officials that training constraints overseas can require work-arounds or in some instances prevent training from being accomplished. 
Although readiness reporting can and should be improved to address training degradation due to encroachment and other factors, it will be difficult for DOD to fully assess the impact of encroachment on its training capabilities and readiness without (1) obtaining more complete information on both training range requirements and the assets available to support those requirements and (2) considering to what extent other complementary forms of training may help mitigate some of the adverse impacts of encroachment. The information is needed to establish a baseline for measuring losses or shortfalls. A full assessment of the effects of encroachment on training capabilities and readiness will be limited without better information on the services’ training range requirements and limitations and on the range resources available to support those requirements. Each service has, to varying degrees, assessed its training range requirements. For example, the Marine Corps has completed one of the more detailed assessments among the services concerning the degree to which encroachment has affected the training capability of Camp Pendleton. The assessment determined to what extent Camp Pendleton could support the training requirements of two unit types (a light armored reconnaissance platoon and an artillery battery) and two specialties (a mortar man and a combat engineer) by identifying the tasks that could be conducted according to standards in a “continuous” operating scenario (e.g., an amphibious assault and movement to an objective) or in a fragmented manner (tasks completed anywhere on the camp). The analysis found that from 60 to 69 percent of the training tasks in the continuous scenario and from 75 to 92 percent of the tasks in the fragmented scenario could be conducted according to standards. 
Some of the tasks that could not be conducted according to standards were the construction of mortar- and artillery-firing positions outside of designated areas, cutting of foliage to camouflage positions, and terrain marches. Marine Corps officials are completing a further analysis of four other types of units or specialties at Camp Pendleton and said they might expand the effort to other installations. However, none of the services’ studies have comprehensively reviewed available range resources to determine whether assets are adequate to meet needs, and they have not incorporated an assessment of the extent to which other types of complementary training could help offset shortfalls. We believe that, by relying solely on live training, these assessments may overstate an installation’s problems and do not provide a complete basis for assessing training range needs. A more complete assessment of training resources should include assessing the potential for using virtual or constructive simulation technology to augment live training. While these types of complementary training cannot replace live training and cannot eliminate encroachment, they may help mitigate some training range limitations. Stated another way, these actions are not meant to take the place of other steps to deal with encroachment, but they are key to more fully depicting the net effects of encroachment on training capabilities now and in the future. 
All this makes it extremely difficult for the services to leverage adequate assets that may be available in nearby locations, increasing the risk of inefficiencies, lost time and opportunities, delays, added costs, and reduced training opportunities. Although the services have been known to share training ranges, these arrangements are generally made through individual initiatives, not through a formal or organized process that easily and quickly identifies all available infrastructure. Navy Special Operations forces only recently learned, for example, that some ranges at the Army’s Aberdeen Proving Grounds, Maryland, are accessible from the water—a capability that is a key requirement for Navy team training. Given DOD’s increasing emphasis on joint capabilities and operations, having an inventory of DOD-wide training assets and capabilities would seem to be a logical step toward a more complete assessment of training range capabilities and shortfalls that may need to be addressed. While some service officials have cited increasing costs because of work- arounds related to encroachment, the services’ data systems do not capture these costs in any comprehensive manner. At the same time, DOD’s overall environmental conservation program funding, which also covers endangered species management, has fluctuated with only a modest gain over the past 6 years, increasing in fiscal years 1996-98, but then dropping among all components, except for the Army. Total DOD conservation program obligations fluctuated, increasing from $105 million in fiscal year 1996 to $136 million in fiscal years 1998-99, and then decreasing to $124 million in fiscal year 2001. DOD documents attribute the fluctuation in conservation program obligations to increased costs from preparing Integrated Natural Resource Management Plans. Senior DOD officials recognized the need for a comprehensive plan to address encroachment back in November 2000, but they have not yet finalized such a plan. 
The task was first given to a working group of subject matter experts, who drafted plans of action for addressing the eight encroachment issues. The draft plans include an overview and analysis of each issue and the actions currently being taken, as well as recommended short-, mid-, and long-term strategies and actions to address the issue. Examples of the types of future strategies and actions identified in the draft plans include the following: Enhancing outreach efforts to build and maintain effective working relationships with key stakeholders by making them aware of DOD’s need for ranges and airspace, its need to maintain readiness, and its need to build public support for sustaining training ranges. Developing assessment criteria to determine the cumulative effect of all encroachment restrictions on training capabilities and readiness. The draft plan noted that while many examples of endangered species/critical habitat and land use restrictions are known, a programmatic assessment of the effect that these restrictions have on training readiness has never been done. Ensuring that any future base realignment and closure decisions thoroughly scrutinize and consider the potential encroachment impact and restrictions on the operations of and training for recommended base realignment actions. Improving coordinated and collaborative efforts between base officials and city planners and other local officials in managing urban growth. At the time we completed our review, the draft action plans had not been finalized. DOD officials told us that they consider the plans to be working documents and stressed that many concepts remain under review and may be dropped, altered, or deferred, while other proposals may be added. No details were available on the overall actions planned, clear assignments of responsibilities, measurable goals and time frames for accomplishing planned actions, or funding requirements—information that would be needed in a comprehensive plan. 
Although DOD has not yet finalized a comprehensive plan of actions for addressing encroachment issues, it has made progress in several areas. It has taken or is in the process of taking a number of administrative actions, including the following:

- DOD has finalized, and the services are tasked with implementing, a Munitions Action Plan—an overall strategy for the life-cycle management of munitions that provides a road map to help DOD meet the challenges of sustaining its ranges.

- DOD formed a Policy Board on Federal Aviation Principles to review the scope and progress of DOD activities and to develop the guidance and process for managing special use airspace.

- DOD formed a Clean Air Act Services’ Steering Committee to review emerging regulations and to work with the Environmental Protection Agency and the Office of Management and Budget to protect DOD’s ability to operate.

- DOD implemented an Air Installation Compatible Use Zone Program to assist communities in considering aircraft noise and safety issues in their land-use planning.

- DOD is drafting a directive establishing the department’s policy on the Sustainment of Ranges and Operating Areas to serve as the foundation for addressing range sustainability issues. The directive, currently in coordination within DOD, would outline a policy framework for the services to address encroachment on their ranges and direct increased emphasis on outreach and coordination with local communities and stakeholders.

In addition, the department is preparing separate policy directives to establish a unified noise abatement program and to specify the outreach and coordination requirements highlighted in the sustainable ranges directive. DOD is also seeking legislative action to help deal with encroachment issues.
In December 2001, the Deputy Secretary of Defense established a senior-level Integrated Product Team to act as the coordinating body for encroachment efforts and to develop a comprehensive set of legislative and regulatory proposals by January 2002. The team agreed on a set of possible legislative proposals for some encroachment issues. After internal coordination, the proposals were submitted to Congress for consideration in late April 2002. According to DOD, the legislative proposals seek to “clarify” the relationship between military training and a number of provisions in various conservation statutes, including the Endangered Species Act, the Migratory Bird Treaty Act, the Marine Mammal Protection Act, and the Clean Air Act. DOD’s proposals would, among other things, do the following:

- Preclude designation under the Endangered Species Act of critical habitat on military lands for which Integrated Natural Resources Management Plans have been completed pursuant to the Sikes Act. At the same time, the Endangered Species Act requirement for consultation between DOD and other agencies on natural resource management issues would remain.

- Permit DOD to “take” migratory birds under the Migratory Bird Treaty Act without action by the Secretary of the Interior, where the taking occurs in connection with readiness activities, and require DOD to minimize the taking of migratory birds to the extent practicable without diminishing military training or other capabilities, as determined by DOD.

- Modify the definition of “harassment” under the Marine Mammal Protection Act as it applies to military readiness activities.
The following eight “encroachment” issues are hampering the military’s ability to carry out realistic training: endangered species’ critical habitat, unexploded ordnance and munitions, competition for radio frequency spectrum, protected marine resources, competition for airspace, air pollution, noise pollution, and urban growth around military installations. Officials at all the installations and major commands GAO visited in the continental United States reported that encroachment had affected some of their training range capabilities, requiring work-arounds that make training less realistic. Service officials believe that population growth is responsible for current encroachment problems in the United States and is likely to cause more training range losses in the future. Despite concerns about encroachment, military readiness reports do not indicate the extent to which encroachment is harming training. Improvements in readiness reporting can better reveal shortfalls in training, but the ability to fully assess training limitations and their impact on capabilities and readiness will be limited without (1) more complete baseline data on training range capabilities, limitations, and requirements and (2) consideration of how live training capabilities may be complemented by training devices and simulations. Progress has been made in addressing individual encroachment issues, but more will be required to plan comprehensively for encroachment. Legislation proposed by the Department of Defense to “clarify” the relationship between military training and various environmental statutes may require trade-offs between environmental policy and military training objectives.
Effective oversight is a key management tool. The United Nations has both internal and external accountability and oversight mechanisms. Internal oversight units usually report directly to the executive heads of organizations, while external oversight mechanisms generally report to the governing bodies of organizations. Appendix I provides an overview of the external oversight mechanisms in the U.N. system. Until 1993, the major internal oversight functions of the Secretariat were carried out by units within the Department of Administration and Management, but they were not considered very effective because they lacked independence and were often disregarded by managers. In August 1993, the Secretary General formed the Office for Inspections and Investigations under an Assistant Secretary General. This office was not part of the Department of Administration and Management and carried more authority than the individual units because it reported to the Secretary General. In July 1994, the General Assembly created an oversight body with even more independence and authority—OIOS—to supersede the Office for Inspections and Investigations. OIOS is considered by the United Nations and its member states to be an internal oversight office and, therefore, part of the executive function of the United Nations. OIOS is mandated to exercise its oversight functions throughout the U.N. Secretariat, which covers the staff and resources of the Secretariat, including peacekeeping missions. For 1996-97, OIOS had oversight authority over $7 billion. In addition, OIOS has been asked to provide audit coverage for some independent entities, such as the U.N. Joint Staff Pension Fund. For the current biennium (1996-97), OIOS has 123 authorized positions and a $21.6 million budget, of which $15 million comes from the regular U.N. budget and $6.6 million comes from extrabudgetary resources. 
Of the 123 positions, 10 are vacant, 36 are funded by extrabudgetary contributions, and 6 are resident auditors with U.N. peacekeeping missions. OIOS also has one staff member on nonreimbursable loan from a member state. It has 88 staff in New York, 21 in Geneva, and 8 in Nairobi. OIOS has four operational units—the Audit and Management Consulting Division, the Investigations Section, the Central Monitoring and Inspection Unit, and the Central Evaluation Unit. Investigation was a new oversight activity introduced in February 1994, and inspections were also a new function assigned to the Office for Inspections and Investigations. The functions of the other units had been performed within the Department of Administration and Management for years. The Under Secretary General (USG) for Internal Oversight Services provides overall management of OIOS’ activities and monitors the status of implementation of OIOS recommendations. Appendix II provides an organization chart of OIOS and a description of each unit’s functions, budget, and staffing. OIOS’ oversight does not extend to U.N. specialized agencies, such as the World Health Organization, the International Labor Organization, and the Food and Agriculture Organization, or the International Atomic Energy Agency. Specialized agencies are not under the authority of the Secretary General or funded through the regular U.N. budget; rather, they are autonomous bodies with their own governing boards, resources, and oversight mechanisms. According to OIOS officials, they regularly exchange views with the agencies’ oversight offices in order to coordinate and strengthen oversight on a U.N.-wide basis. To judge an organization’s operational independence, one must determine whether (1) the organization’s mandate and procedures establish conditions under which it can be operationally independent and (2) the organization exercises its authority and prerogatives in an independent manner. While our examination of the U.N. 
resolution creating OIOS, the Secretary General’s Bulletin establishing OIOS, and OIOS’ operating procedures showed that OIOS is in a position to be operationally independent, we could not test whether OIOS exercised its authority and implemented its procedures in an independent manner because OIOS would not provide us access to certain audit and investigation reports and its working papers. Unrestricted access to OIOS’ records and files, and such reviews and tests as we could have conducted, would have served as indicators of how OIOS was exercising its independence. A primary characteristic of an effective internal oversight office is its operational independence; however, the term is not easily defined and even harder to measure in practice. Operational independence is a concept rather than a discrete set of factors that can be tracked over time. Among other things, operational independence includes insulating the unit head from arbitrary removal from office; organizationally separating the unit from the programs it examines; and ensuring the unit has full and free access to relevant records, the authority to carry out whatever work it sees fit, and the ability to report its findings without interference from the executive or the legislature. It is also important that the oversight unit’s mandate and independent status be well understood among the community it oversees. OIOS has many of these characteristics, as noted below. The USG for Internal Oversight Services is appointed by the Secretary General, following consultations with member states, for a fixed 5-year term; the appointment must be approved by the General Assembly, and the USG may be removed by the Secretary General only for cause and with the General Assembly’s approval. OIOS recently established an administrative unit within the USG’s office. As a result, OIOS no longer has to rely on the Department of Administration and Management for basic administrative services.
OIOS’ staffing administration is handled by the Office of Human Resources Management. OIOS is subject to U.N. geographical and gender diversity requirements and, in some cases, special language requirements. However, the USG for Internal Oversight Services is authorized to appoint, promote, and terminate staff—powers similar to those delegated by the Secretary General to the heads of separately administered U.N. funds and programs but not to other USGs. Like those of other departments, OIOS’ budget is a separate line item in the overall U.N. Secretariat budget. Unlike the heads of other departments within the Secretariat, the USG for Internal Oversight Services may directly inform the General Assembly about the adequacy of OIOS’ budget and staffing levels. OIOS’ mandate provides the authority to initiate, carry out, and report on any action that it considers necessary to fulfill its mandate and responsibilities. OIOS may not be prohibited or hindered from carrying out any action within the purview of its mandate by the Secretary General or any other party. Although the Secretary General’s Bulletin states that the responsibilities of OIOS extend to the “separately administered organs,” OIOS’ role in providing internal oversight services for separately administered U.N. funds and programs has been questioned by a few member states. Funds and programs are funded either completely or in part by voluntary contributions and have their own executive boards or governing bodies. According to OIOS officials, OIOS has provided oversight services primarily in areas for which the funds and programs do not provide their own oversight coverage. For example, the U.N. Development Program has its own internal audit unit, but OIOS provides investigative services. For the U.N. High Commissioner for Refugees, which does not have an internal audit unit, OIOS provides both audit and some investigative services. However, in recent sessions of the U.N.
Committee on Administration and Budget—commonly known as the Fifth Committee—certain member states have made it clear that they do not accept the authority of the Secretary General to implement OIOS recommendations in the funds and programs without explicit direction from the executive boards of these entities. If OIOS were required to seek the concurrence of the various funds’ and programs’ executive boards before cognizant program officials could implement its recommendations, OIOS’ operational independence would be compromised. According to State officials, the United States and several other delegations have made this point clear. They have emphasized that OIOS is an internal oversight mechanism and part of the U.N. Secretariat and that, therefore, its recommendations are not subject to the review or approval of the General Assembly or the respective executive boards of the separately administered funds and programs. As noted in its comments on this report, the State Department has consistently taken the position that OIOS’ jurisdiction extends to the separately administered funds and programs. State emphasized that the U.N. Legal Advisor confirmed this understanding of the relationship in July 1994. Additionally, at the request of the USG for Internal Oversight Services, the U.N. Legal Counsel specifically ruled in October 1997 that OIOS can make recommendations to program managers within the U.N. Secretariat, including the funds and programs, without the endorsement or approval of the General Assembly. While this decision did not explicitly refer to the funds’ and programs’ executive boards, the situation is analogous. At the time of this report, OIOS was continuing to provide internal oversight services to the funds and programs and reporting to the cognizant officials without seeking the approval of the respective executive boards. However, although U.S.
State Department officials and the USG for Internal Oversight Services are confident that OIOS’ position will prevail, the issue of OIOS’ relationship to funds and programs will likely come before the Fifth Committee again. An area of concern is how OIOS has implemented its reporting mechanism. OIOS can provide its reports without interference to the Secretary General and General Assembly. Its mandate states that OIOS “shall submit to the Secretary General reports that provide insight into the effective utilization and management of resources and the protection of assets” and that “all such reports are made available to the General Assembly as submitted by the Office.” The USG for Internal Oversight Services determines which reports are provided to the Secretary General, and we noted that OIOS has provided seven of eight inspection reports to the Secretary General and the General Assembly and that all six in-depth evaluation reports it has done were provided to the Committee for Programme and Coordination—a committee of the General Assembly. However, we also noted that, as of September 1997, only 13 of 107 audit reports and 5 of 33 investigation reports were provided to the Secretary General and, subsequently, the General Assembly. This raises two questions: If only 18 of 140 audit and investigation reports met the mandate’s criteria for being provided to the Secretary General, are OIOS’ resources directed at those areas that would provide insight into the operations of the United Nations? On the other hand, if more OIOS reports actually provide the insight into U.N. operations intended in the mandate, why were they not provided to the Secretary General and the General Assembly? We were not permitted to review the reports that had not been provided to the Secretary General. Consequently, we could not address these questions. In contrast, we note that all U.S. 
inspector general reports are provided to the head of the respective inspector general’s department or agency, and many are also provided to the U.S. Congress. The USG for Internal Oversight Services acknowledged that few audit and investigation reports had been forwarded to the Secretary General and the General Assembly. He told us that if he is satisfied with the program officials’ response to a report and is confident that appropriate actions are being taken, he does not send the report to the Secretary General. He also noted that sending all OIOS reports would place an additional paperwork burden on the General Assembly and that producing enough reports for each member state would be expensive. The USG for Internal Oversight Services said that, beginning with the OIOS annual report due to be published in October 1997, he will list all OIOS reports, and if a member state is interested in a particular one, OIOS will brief its representatives. He also said that previous annual reports have referred to the conclusions and findings in many of OIOS’ reports and that he has been willing to brief member states on the topics addressed. OIOS and the Department of Administration and Management have made efforts to communicate the scope of OIOS’ operational independence to U.N. staff. In January 1995, the USG for Administration and Management issued to all U.N. staff an administrative instruction providing guidance on the personnel arrangements for OIOS. The instruction outlined the administrative arrangements and the authority of the USG for Internal Oversight Services in personnel matters. In February 1996, the U.N. Department of Public Information distributed a U.N. publication describing OIOS’ role and purpose. In April 1996, the USG for Administration and Management issued a note reminding Secretariat department heads that the General Assembly’s resolution establishing OIOS makes it clear that OIOS shall have the authority necessary to carry out its functions.
The note provided detailed information regarding the procedures to be followed to ensure that OIOS is given immediate access to all files and records required to perform its functions. In early 1997, the Investigations Section published an investigations manual that is available to all U.N. staff. This manual contains information on the jurisdiction of the Investigations Section; hot-line procedures; investigative access to staff, records, sites, and materials of the United Nations; the rights of persons subject to investigation; and protection for whistleblowers. The manual is also on the U.N. Secretariat’s intranet. Although OIOS is similar to U.S. inspector general offices, including in the way it emphasizes operational independence and access to relevant records and cognizant officials, it differs in many respects. For example, most U.S. inspectors general are not appointed for a fixed term and can be removed without the approval of the Congress. Also, while U.S. inspector general offices, like OIOS, have specifically mandated audit and investigation functions, their inspections, monitoring, and evaluation roles are not specifically mandated. See appendix III for a more complete comparison of OIOS with U.S. inspector general offices. In February 1997, the USG for Internal Oversight Services stated in a memorandum to the USG for Administration and Management that OIOS will have sufficient resources to carry out its functions if the proposed 1998-99 budget is approved. The head of each OIOS unit, including the Investigations Section, which had been singled out by OIOS officials as particularly hampered by the lack of trained staff, concurred with this assessment. Since the establishment of OIOS, its regular U.N. budget has increased by 55 percent and its authorized positions have increased by 18. However, this did not happen without some difficulty. When OIOS was established in September 1994, it inherited the resources budgeted for the units whose functions it absorbed.
In early December 1994, the USG for Internal Oversight Services described his vision for OIOS and the resource requirements necessary before the U.N. Fifth Committee and the Advisory Committee on Administrative and Budgetary Questions. He stated that he could not measurably enhance the internal control mechanisms in the United Nations without more resources. In particular, he pointed to the need to intensify the audit coverage and strengthen the new investigation function. The General Assembly reacted favorably, and OIOS, which had a total of 102 positions, was authorized 5 additional professional and 3 more general service positions against the revised budget estimates for 1995, bringing OIOS’ total number of positions to 110, including extrabudgetary positions. Nevertheless, in early 1996, only 42 of 53 professional positions within OIOS’ audit division were filled—a vacancy rate of almost 21 percent. OIOS also noted that the accumulated and new cases in the Investigations Section constituted a workload that was too large for the unit to clear. Moreover, with the establishment of OIOS, the Central Monitoring and Inspection Unit’s responsibilities expanded to address concerns of member states regarding the qualitative nature of program performance reporting and the need to enhance program management capability. At the beginning of the 1996-97 biennium and at the urging of some member states, particularly the United States, cuts in the overall U.N. budget were mandated by the General Assembly. While the OIOS 1996-97 budget included partial funding for an additional 12 positions, including 5 investigators, the USG for Administration and Management instructed OIOS to stay within its regular U.N. budget of $15 million, although OIOS had estimated it would need $15.725 million to fully fund its approved positions. (This did not directly affect extrabudgetary resources.) 
We were told by OIOS officials, and the USG for Internal Oversight Services noted in OIOS’ 1996 annual report, that OIOS should not be totally exempt from overall U.N. reductions. The USG for Internal Oversight Services said he took this position because it would be “politically unwise” to suggest that OIOS be treated differently than the rest of the organization. According to OIOS officials, because OIOS had to stay within its $15 million budget, it had to cut $725,000. To do this, OIOS did not fill its vacant positions for the first year of the biennium. This action saved about $603,000. The office also cut back expenditures for nonstaffing items for an additional savings of about $122,000. In January 1996, the General Assembly instituted a hiring freeze for the entire U.N. Secretariat. In March 1996, OIOS was granted a waiver as long as it followed through with its budget reductions. In February 1997, after the budgetary reductions had been achieved, OIOS began announcing its vacancies, and the recruitment and hiring process began. As part of U.N. efforts to maintain a flat, no-growth budget, the 1998-99 budget outline set OIOS’ regular budget at $15.1 million—about the same as the previous biennium. With that budget, OIOS could not have filled its vacancies. But after negotiations with the USG for Administration and Management, OIOS was granted an additional $3.5 million, including $1 million for exchange rate fluctuations and inflation, increasing OIOS’ regular budget to $18.6 million. According to OIOS, this increase will fully fund the 12 new positions approved by the General Assembly in 1995, a $127,300 (or 83 percent) increase in general operating expenses, and a $236,500 (or 43 percent) increase for travel expenses. In addition, the Department for Administration and Management “redeployed” one of its positions to OIOS to perform administrative support functions. The General Assembly still must approve the 1998-99 budget before it becomes final. 
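The budget figures quoted in the preceding paragraphs can be reconciled with simple arithmetic. The sketch below only restates dollar amounts that appear in this section (the 1996-97 cut and the negotiated 1998-99 increase); it introduces no new data, and the variable names are illustrative:

```python
# Reconciliation of the OIOS budget figures cited in this section.
# All dollar amounts come from the report text; none are new data.

# 1996-97 biennium: OIOS was held to its $15 million regular budget,
# although it estimated needing $15.725 million to fund its positions.
mandated_budget = 15_000_000
estimated_need = 15_725_000
shortfall = estimated_need - mandated_budget
assert shortfall == 725_000  # the cut OIOS had to absorb

# The cut was covered by holding positions vacant for a year and by
# trimming nonstaffing expenditures.
vacancy_savings = 603_000
nonstaffing_savings = 122_000
assert vacancy_savings + nonstaffing_savings == shortfall

# 1998-99 outline: $15.1 million, plus a negotiated $3.5 million
# increase, yields the revised regular budget of $18.6 million.
outline_1998_99 = 15_100_000
negotiated_increase = 3_500_000
assert outline_1998_99 + negotiated_increase == 18_600_000
```

All three checks pass, confirming that the savings reported for 1996-97 exactly offset the shortfall and that the 1998-99 figures are internally consistent.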
For a comparison of OIOS’ budget and staffing with other U.N. Secretariat functions, see appendix IV. Internal oversight offices must ensure that the information developed in their audits, investigations, inspections, and evaluations is complete, relevant, and accurate. Clear guidelines or procedures can help ensure that the information presented, the conclusions reached, and the recommendations made can be relied upon as accurate, fair, and balanced. International auditing standards acknowledge that audit organizations often carry out activities that, by strict definition, do not qualify as audits. According to these standards, such organizations should establish a policy on which specific standards should be followed in carrying out nonaudit work. Although OIOS’ mandate does not require its reporting units to adhere to any particular quality assurance standards or, for that matter, to develop procedural manuals, OIOS’ audit division and the Investigations Section have developed them. Before OIOS’ audit division became part of OIOS, it developed “Standards for the Professional Practice of Internal Auditing” based on the auditing standards established by the Institute of Internal Auditors. These standards provide guidelines for maintaining independence, planning and conducting audits, and reporting audit findings and are incorporated into the audit division’s internal audit manual. This manual is being updated to reflect certain changes made since OIOS’ establishment. In addition, most of the audit division’s staff have advanced degrees, and many are accountants. The staff have been trained in auditing techniques and take continuing education courses, including those sponsored by the Institute of Internal Auditors and U.S. government agencies, including GAO. After OIOS was established and the Investigations Section head had been appointed, the section began developing an investigations manual, which includes standards for conducting its work.
This manual is similar in many respects to those used by U.S. law enforcement agencies. According to a former Department of State employee who helped develop OIOS’ investigations manual, about a dozen manuals from other organizations, including some U.S. law enforcement agencies, were used in its development. In addition, most of the investigation staff are trained in investigative techniques. The head of the unit and the senior investigator also have law degrees. The Central Monitoring and Inspection Unit and the Central Evaluation Unit do not have comparable manuals. The head of the Central Monitoring and Inspection Unit told us that the unit does not have written standards for conducting its work but employs a variety of quality control processes. These include requiring documentary evidence for factual information in inspection reports and using a newly implemented review process in which OIOS unit heads evaluate all draft inspection reports before they are finalized. The USG for Internal Oversight Services told us, however, that he recently directed that an inspections manual be developed. As currently planned, it would be completed in the spring of 1998. According to the head of the Central Evaluation Unit, the unit (1) conducts in-depth evaluations of U.N. programs as directed by a U.N. intergovernmental committee—the Committee for Programme and Coordination—and (2) provides methodological guidance for other departments to conduct self-evaluations. For the in-depth evaluations, he said, the methodology is well known, and each evaluation is conducted according to generally accepted evaluation methods and social science research techniques understood by the intergovernmental committee. In addition, he said, staff working in the unit are trained in evaluation methods.
Regarding department self-assessments, he said his unit is in the process of updating a manual to help guide these evaluations, but he did not provide an estimate of when this effort would be completed. The lack of written guidance for conducting inspections may have led several U.N. officials to raise questions about two recent inspection reports. Specifically, they questioned certain facts and were concerned that comments prepared by the inspected organizations were not considered in preparing the final products. These officials alleged that both reports contained factual errors that OIOS neither acknowledged nor corrected. We could not validate these assertions because we did not have access to the necessary working papers and related documents and files; however, OIOS officials told us the comments provided did not address the facts and were just a different point of view. OIOS has no requirement to acknowledge that comments were received or considered in finalizing its inspection reports. OIOS officials also noted that, because of U.N. page limitations on published materials, it would have been difficult to respond in detail to all the comments, much less reproduce them in the report. Nevertheless, in an attempt to address these concerns, the USG for Internal Oversight Services told us that, for reports he sends to the Secretary General, he now requires program officials’ comments and OIOS’ response to be sent to the Secretary General for his consideration. While this may help, systematically acknowledging in OIOS’ reports that comments were received, summarizing or reproducing them as part of the report, and describing how OIOS addressed them would, in our opinion, provide a further basis for the reader to judge the relevance of the issues addressed and the recommendations made. In June 1997, OIOS compiled a document summarizing the quality assurance process used by each of its reporting units.
While the sections for audit and investigations largely draw on their respective manuals, quality assurance procedures for the other two units were, for the first time, delineated. However, we were not permitted access to OIOS’ documents to determine whether the procedures in the audit and investigations manuals or the June 1997 document were being followed. Since 1994, OIOS has made more than 3,000 recommendations. The Office of the USG for Internal Oversight Services maintains a central, automated database that includes a summary of each recommendation, the department responsible for implementing the recommendation, the OIOS staff member assigned to follow up on recommendations, and the status of implementation. In addition to the centralized database, each unit maintains a separate database to monitor compliance with all of its recommendations. In February 1995, OIOS issued guidance that outlines steps to be taken by OIOS staff and program managers from the completion of fieldwork to implementation of recommendations. According to OIOS officials, the units’ staff are responsible for tracking corrective actions managers take in response to recommendations and for determining when they have fully implemented recommendations. Program managers are responsible for implementing recommendations and reporting to OIOS on a regular basis on the status of implementation. OIOS’ mandate requires it to report semiannually to the Secretary General on the status of its recommendations in audit, investigation, and inspection reports. OIOS also reports annually to the Secretary General and the General Assembly on its significant audit, investigation, inspection, and evaluation recommendations for corrective action and on instances where program managers have failed to implement such recommendations. In December 1996, OIOS reported that managers had fully implemented about 68 percent of the audit recommendations the office has issued since October 1994. 
According to OIOS officials, in some cases, the office’s ability to monitor implementation of recommendations has been limited because some recommendations did not clearly state the cause of the problem or the action required. Our study of the few reports available to us bears this out. For example, a recommendation in one OIOS inspection report stated that “compliance with audit recommendations should be given the priority they deserve.” While OIOS officials said that the office does not want to be so prescriptive that program officials do not have flexibility in implementing the recommendations, more guidance is often needed. OIOS officials said the office has begun to focus on developing recommendations that are more specific to facilitate the monitoring of their implementation. In some cases, when recommendations were unclear, OIOS has established benchmarks to help program managers and OIOS staff assess progress toward achieving implementation. OIOS has established systems and special controls for providing confidentiality to whistleblowers and other informants who make reports in good faith to the Investigations Section. Such individuals may provide valuable information about potential areas of wrongdoing, but if they feel threatened by reprisals, they may not come forward. In OIOS, they are protected whether or not an investigation subsequently substantiates the report. The Investigations Section’s manual, which is available to all U.N. staff, provides information and guidance on specific requirements and procedures for protecting the identity of staff members and others making reports or suggestions and for safeguarding reports from accidental, negligent, or willful disclosure. 
For example, the manual states that the investigator assigned to the case is responsible and accountable for taking all appropriate measures for the protection of the identity of the complainant, and the section has established strict internal office procedures to avoid the disclosure of the complainant’s identity. In September 1994, the section began operating a “hotline” reporting facility, which provides direct, confidential access for those making complaints or suggestions by telephone, facsimile, or mail. The telephone hotline operates on a 24-hour, confidential basis. Through April 1997, 85 complaints and suggestions had been received through the hotline reporting facility. This amounted to about 15 percent of the reports to the Investigations Section. The hotline reporting facility accepts anonymous reports. However, according to the section chief, the majority of those making complaints or suggestions identify themselves. If the information received through the hotline proves to be accurate, the section uses it in such a way that the source cannot be identified, except with permission, according to the section’s manual. OIOS has also established mechanisms to protect individuals against possible reprisal for making reports, providing information, or otherwise cooperating with the office. The investigations manual states the section will pay prompt and careful attention to cases involving potential reprisals and will take interim steps, if necessary, to protect the whistleblower. The manual also notes that disciplinary proceedings will be initiated and disciplinary action taken against a staff member who is proven to have retaliated against an individual providing information to the section. Since 1994, U.N. staff members and others have provided more than 500 tips or leads to the Investigations Section. 
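As a rough arithmetic cross-check (our illustration, not a figure from OIOS), the 85 hotline reports said to represent about 15 percent of all reports to the Investigations Section imply a total volume consistent with the more than 500 tips and leads cited above:

```python
# Figures reported above: 85 hotline reports, said to be about 15 percent
# of all reports to the Investigations Section.
hotline_reports = 85
hotline_share = 0.15

# Implied total volume of reports (rounded), consistent with the
# "more than 500 tips or leads" figure.
implied_total = round(hotline_reports / hotline_share)  # about 567
```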
These reports included allegations of serious crimes and noncriminal violations, suggestions for improvements, cases involving personnel matters or other grievances, and requests for investigations by officials. Only a few leads specifically categorized by OIOS as coming from a whistleblower have resulted in investigative reports. (OIOS would not permit us to divulge how many.) OIOS officials told us it has followed up in a number of instances where reprisal was suspected but has taken disciplinary action in only one instance to protect whistleblowers from reprisal. According to OIOS' second annual report, the section had initiated an investigation based on allegations by staff, who publicly identified themselves, that two senior staff were interfering with the decision-making process of the local committee on contracts. A senior staff member retaliated against the staff by accusing them of falsifying bid documents and recommending that charges be brought against them. The Investigations Section investigated the senior staff member's allegations and found them to be false. OIOS then instituted charges against the senior staff member under the provision in the office's mandate to provide for protection of those supplying information. OIOS has established itself as the internal oversight mechanism for the U.N. Secretary General. It is in a position to be operationally independent, has overcome certain start-up problems, and has developed policies and procedures for much of its work. However, it can do more to help ensure that the information it presents, the conclusions it reaches, and the recommendations it makes can be relied upon as accurate, fair, and balanced. To this end, we discussed with the USG for Internal Oversight Services several ways to enhance OIOS' future operations. First, we suggested that the USG for Internal Oversight Services clarify the criteria for providing OIOS reports to the Secretary General and the General Assembly.
In response, he said he will begin publishing a listing of OIOS reports in his annual report and provide briefings to member states on request. While this will help publicize OIOS reports, it may not satisfy OIOS’ mandate to make reports available to the Secretary General and the General Assembly that “provide insight into the effective utilization and management of resources and the protection of assets.” Second, we suggested that the USG for Internal Oversight Services develop more formal written procedural guidance for the Central Monitoring and Inspection Unit and the Central Evaluation Unit. He agreed that an inspections manual and evaluation manual for U.N. department self-evaluations would be helpful and has taken initial steps to begin developing them. However, he disagreed that additional guidance is needed for the in-depth evaluations OIOS conducts at the direction of the Committee for Programme and Coordination. Third, we suggested that the USG for Internal Oversight Services develop formal procedures for addressing program officials’ comments in each OIOS report. The USG for Internal Oversight Services said he is beginning to send program officials’ comments and OIOS’ analysis of them to the Secretary General for reports forwarded to the Secretary General and the General Assembly. While this is a step in the right direction, systematically addressing program officials’ comments in all OIOS reports would help the reader judge the relevancy of the issues discussed and the recommendations made and lend credibility to the reports. Although OIOS has made considerable progress in resolving some initial operational problems, the USG for Internal Oversight Services can do more to help maintain OIOS’ independence and establish the office as the authoritative internal oversight mechanism the General Assembly intended OIOS to be. As previously noted, we suggested some actions that could be taken by the USG for Internal Oversight Services in this regard. 
To help focus attention on these matters, we recommend that the Secretary of State encourage the USG for Internal Oversight Services to address the noted suggestions. The Department of State and OIOS commented on a draft of this report. Their comments are reproduced in their entirety in appendixes V and VI, respectively. Both generally agreed with our overall conclusions and observations about OIOS' first 3 years of operations. State also said it generally concurred with our suggestions to the USG for Internal Oversight Services. It noted that it has a vested interest in implementing steps to ensure that OIOS functions effectively through the provision of adequate resources and the maintenance of a highly skilled and competent professional staff. State reiterated its position that effective oversight of U.N. programs is of primary importance to the United States and looks forward to building on the significant progress that OIOS has made in this area. The USG for Internal Oversight Services said that, while OIOS has become an important and effective component of the U.N. management culture, its operations can be fine-tuned. However, he disagreed with our suggestions that OIOS revisit its criteria for sending reports to the Secretary General and the General Assembly and that OIOS should treat program officials' comments more formally in its reports. With respect to OIOS report distribution, the USG for Internal Oversight Services reiterated that OIOS only provides its reports to the Secretary General and the General Assembly when the program officials disagree with the recommendations. As already stated, this criterion does not seem to satisfy a strict reading of the mandate. We also believe such limited report distribution is counter to one of OIOS' intended purposes, which is to provide more visibility over management's use of U.N. resources. In its publication on OIOS, the U.N.
Department of Public Information states that OIOS supports the need “for a more transparent assignment of responsibility and accountability.” It goes on to say that “OIOS puts great emphasis on transparency of procedures and full consultation with management.” Making more reports available to the member states would help enhance the desired transparency by publicizing reported problem areas, the steps taken to resolve them, and who is accountable. Such publicity, in turn, may also help prevent similar problems from recurring. The USG for Internal Oversight Services also said that providing more reports would only overload the General Assembly’s agenda and be an expensive burden. We believe the General Assembly should be allowed to judge for itself whether receiving a larger number of audit and investigative reports would be too burdensome. We also believe that the costs of reproducing copies of reports can be kept to a minimum by publicly announcing their availability, but only providing copies to those member states that request them or by making reports obtainable through the U.N. intranet. With respect to the treatment of program officials’ comments on OIOS reports, the USG for Internal Oversight Services said that these comments are transmitted to the Secretary General, but to reproduce dissenting views that the Secretary General does not endorse would be inappropriate. We disagree. We believe that treating program officials’ comments more openly and formally provides OIOS the opportunity to demonstrate that it is fair and evenhanded and that its conclusions and recommendations are appropriate. To do less leads to speculation and, perhaps, unwarranted criticism that program officials’ comments were not adequately considered and addressed in the final report. Both State and OIOS provided technical comments that have been incorporated in the report as appropriate.
To determine whether OIOS is in a position to be operationally independent, we reviewed its mandate and procedures and related U.N. documents and met with the USG for Internal Oversight Services and other OIOS officials to determine how OIOS has implemented these provisions. We also interviewed several U.S. officials who helped draft the resolution creating OIOS and the USG for Administration and Management, who was involved in helping establish OIOS and providing it with administrative services. To compare OIOS operations with those of U.S. inspectors general, we reviewed the Inspector General Act of 1978, as amended. To help provide a frame of reference for judging operational independence, we also reviewed the International Organization of Supreme Audit Institutions’ Auditing Standards. However, without access to OIOS’ reports that had not been provided to the Secretary General and the General Assembly and its working papers, we could not take the additional step of testing whether OIOS had exercised its authority and implemented its procedures in an independent manner. To determine whether OIOS has the necessary resources to carry out its mission, we reviewed the overall U.N. budget, budget and staffing documents for the Office for Inspections and Investigations and OIOS, and OIOS annual reports. We interviewed the USG for Internal Oversight Services, his special assistant, and OIOS unit chiefs. We also met with the USG for Administration and Management and officials in the U.N. Program, Planning, and Budgeting Division and the U.N. Office of Human Resources Management to discuss their roles in providing resources to OIOS. To determine whether OIOS had written policies and procedures in place for conducting its work, following up on its recommendations, and providing confidentiality to informants and protecting whistleblowers from possible reprisal, we reviewed the audit and investigations manuals and other written guidance made available to us.
We discussed OIOS procedures and policies with the USG for Internal Oversight Services and the OIOS unit chiefs. However, as previously noted, we were not provided access to OIOS working papers or other records and files related to specific audits, investigations, or inspections. This restriction prevented us from determining whether (1) OIOS was adhering to its stated policies and procedures and (2) its analyses were adequate to support its reported findings and recommendations. We also met with representatives of the U.N. Board of Auditors, the U.S. Mission to the United Nations, and the U.S. State Department Bureau of International Organization Affairs. We discussed with them the origins of OIOS and their perceptions of its operations. We conducted our study from March to October 1997 in accordance with generally accepted government auditing standards. As you requested, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to other appropriate congressional committees, the U.N. USGs for Internal Oversight Services and Administration and Management, and the Secretary of State. Copies will also be made available to other interested parties upon request. This report was prepared under the direction of Benjamin F. Nelson, Director, International Relations and Trade Issues, who may be reached on (202) 512-4128 if you or your staff have any questions. Major contributors to this report are listed in appendix VII. This appendix provides a brief description of external oversight bodies for the U.N. system. These bodies are considered external oversight mechanisms because they generally report to the governing bodies of organizations. Unlike the Office of Internal Oversight Services (OIOS), which is an internal oversight body and has authority over the operations of the U.N. Secretariat and separately administered funds and programs, U.N. systemwide external bodies have oversight extending to all U.N. 
operations and the specialized agencies. The U.N. external oversight bodies have varying mandates, extremely broad areas to cover, and modest resources. These bodies are the General Assembly’s Advisory Committee on Administrative and Budgetary Questions, the Board of Auditors, the Panel of External Auditors, the General Assembly and the Economic and Social Council’s Committee for Programme and Coordination, the International Civil Service Commission, and the Joint Inspection Unit. (See table I.1 for an overview of the U.N. external oversight mechanisms.) Each specialized agency, as well as the International Atomic Energy Agency, has its own external auditor that is responsible for auditing the finances of the organization and reporting to the governing bodies. The external auditors are selected from among member states’ supreme audit institutions and are members of the U.N. Panel of External Auditors. For example, the external auditor for the International Labor Organization is the Auditor General of the United Kingdom. The Advisory Committee on Administrative and Budgetary Questions, with 16 members chosen by the General Assembly on the basis of broad geographical representation, personal qualifications, and expertise, advises and reports to the General Assembly. The Chairman serves full time. The Committee examines the proposed U.N. program budget; administrative and budgetary matters referred to it, including the financing of peacekeeping operations and extrabudgetary activities; and the auditors’ reports on the United Nations. The Committee meets extensively throughout the year and is assisted by a small secretariat in New York. The Board of Auditors, consisting of three auditors general of member states, provides external audit oversight functions for the United Nations and separately administered funds and programs. Other auditors general serve as individual external auditors for each of the specialized agencies and the International Atomic Energy Agency.
The Panel of External Auditors comprises these appointed auditors, currently from the United Kingdom, Ghana, India, Germany, France, Switzerland, South Africa, and Canada. It meets at least annually to promote best accounting and auditing practices in the U.N. system and undertakes certain related initiatives that are communicated to governing bodies through the Advisory Committee on Administrative and Budgetary Questions and to administrations through the Administrative Committee on Coordination. The Committee for Programme and Coordination is the main subsidiary body of the Economic and Social Council and the General Assembly for planning, programming, and coordination. It reviews U.N. programs and assists the Economic and Social Council in its coordination functions, including considering the activities and programs of agencies of the United Nations and the U.N. system, systemwide coherence and coordination, and the implementation of important legislative decisions. Its conclusions and recommendations play a role in the adoption of the U.N. program budget by the General Assembly. The Committee has 34 elected members, is based in New York, and meets for 4 to 6 weeks per year. The International Civil Service Commission comprises 15 independent experts appointed in their personal capacities by the General Assembly. The Chairman and Vice Chairman serve full time. The Commission makes recommendations to the General Assembly for the regulation and coordination of conditions of service within the U.N. common system and has certain decision-making functions regarding salaries, allowances, and job classification standards. It meets twice yearly for about 3 weeks each time and is serviced by a secretariat in New York. The Joint Inspection Unit comprises 11 inspectors from different member states who serve in their personal capacities. They are chosen by the General Assembly on the basis of membership in national supervision or inspection bodies or similar competence.
They review matters bearing on the efficiency of the services and proper use of funds and seek to improve management, methods, and coordination through inspection and evaluation. The Unit provides reports with recommendations to the United Nations and its funds and programs and specialized agencies. OIOS consists of four operational units—the Audit and Management Consulting Division, the Investigations Section, the Central Monitoring and Inspection Unit, and the Central Evaluation Unit. It also has an office of the Under Secretary General (USG) for Internal Oversight Services. The Audit and Management Consulting Division is headed by a Director and a Deputy Director. The other three OIOS units are headed by section or unit chiefs who, like the Director of the audit division, report directly to the USG for Internal Oversight Services. Figure II.1 shows the organizational structure for OIOS, including components located in Geneva and Nairobi. Table II.1 provides an overview of each unit’s function, its budget, and staffing. The Office of the USG for Internal Oversight Services provides overall direction, supervision, and management of the activities of OIOS. It is also responsible for the planning and monitoring of the work program of OIOS as well as for providing administrative support. The Audit and Management Consulting Division provides comprehensive audit services for all U.N. activities for which the Secretary General has administrative responsibility. These audits should promote reliability of information; compliance with policies, regulations, rules, and procedures; the safeguarding of assets; the economical, efficient, and effective use of resources (value for money); and the accomplishment of established objectives and goals for operations and programs. The Investigations Section investigates reports of violations of U.N.
regulations, rules, and pertinent administrative issuances and transmits to the Secretary General the results of such investigations, together with appropriate recommendations, to guide the Secretary General in deciding on jurisdictional or disciplinary action to be taken. The Central Monitoring and Inspection Unit’s role is to (1) enhance and strengthen the management of programs and ensure that monitoring and self-evaluation functions in each organizational unit of the Secretariat are viewed as an integral part of management oversight responsibility for the efficiency and effectiveness of program performance; (2) provide support to managers in establishing a proper system of program monitoring, including the development of performance indicators and the analytical assessment of performance; (3) provide necessary analytical and transparent information on actual program performance to intergovernmental bodies; and (4) undertake quick analyses for the identification of problems affecting the efficient implementation of programmed activities and recommend corrective measures as appropriate. The Central Evaluation Unit determines, as systematically and objectively as possible, the relevance, efficiency, effectiveness, and impact of U.N. activities in relation to their objectives, to enable the Secretariat and member states to make informed decisions about the continuation of these activities. Prior to the creation of OIOS in July 1994, the United States and other member states, as well as the U.S. Congress and GAO, had expressed concern about the way the United Nations managed its resources and criticized the inadequacies of preexisting internal oversight mechanisms. In response to these concerns, the Secretary General established the Office for Inspections and Investigations in August 1993 under the leadership of an Assistant Secretary General. However, member states—primarily the United States—wanted a more autonomous oversight body with more authority. In November 1993, the U.S.
Permanent Representative to the United Nations proposed the establishment of an “Office of the Inspector General” to the General Assembly. According to the proposal, the office would support member states and the Secretary General by providing independent advice based on an examination of all activities carried out at all U.N. headquarters and field locations financed from the regular budget, peacekeeping budgets, and voluntary contributions. At the same time, the new office would have external reporting responsibilities. The office would be headed by an “Inspector General” (IG) who, although an integral part of the Secretariat, would carry out his/her responsibilities entirely independent of the Secretariat and all U.N. governing bodies. In April 1994, Congress enacted Public Law 103-236 (sec. 401(b)) which, among other things, emphasized the importance of establishing such an office. The legislation required certain funds to be withheld from the United Nations until the President certified that it had established an independent office to conduct and supervise objective audits, investigations, and inspections relating to the programs and operations of the United Nations. The legislation stated that the office should have (1) access to all records and documents; (2) procedures to ensure compliance with recommendations of the office; and (3) procedures to protect the identity of, and to prevent reprisals against, any staff members making a complaint or disclosing information, or cooperating in any investigation or inspection by the office. After a series of negotiations among member states, including the United States, a compromise was reached, and the General Assembly, in July 1994, approved a resolution creating OIOS within the U.N. Secretariat. OIOS’ mandate reflects many of the characteristics of U.S. inspector general offices. Table III.1 provides a comparison of U.S. offices of inspectors general and OIOS. 
Provide a means for keeping the head of the agency and the Congress fully and currently informed about problems and deficiencies in programs and operations. Assist the Secretary General in fulfilling his internal oversight responsibilities relating to resources and staff of the organization, including separately administered organizations of the United Nations. Neither the head of the agency nor the officer next in rank below the head shall prevent or prohibit the IG from initiating, carrying out, or completing any audit or investigation. Exercise operational independence to initiate, carry out, and report on any action that OIOS considers necessary to fulfill its responsibilities. OIOS may not be prohibited or hindered from carrying out any action within the purview of its mandate. Each IG is authorized to have access to all records, reports, audits, reviews, documents, papers, recommendations, or other material that relate to programs and operations. OIOS staff have the right of access to all persons, records, documents, or other material assets and premises and to obtain such information and explanations they consider necessary to fulfill their responsibilities. IGs, who report to and are under the general supervision of the agency head. USG for Internal Oversight Services, who is under the authority of the Secretary General. IGs are appointed solely on the basis of integrity and demonstrated ability in accounting, auditing, financial analysis, law, management analysis, public administration, or investigations. The USG for Internal Oversight Services shall be an expert in the fields of accounting, auditing, financial analysis and investigations, management, law, or public administration. The President appoints 27 IGs with the advice and consent of the Senate. Agency heads appoint 30 IGs. The USG for Internal Oversight Services shall be appointed by the Secretary General, following consultations with member states, and approved by the General Assembly. 
IGs are appointed without regard to political affiliation. The Secretary General shall appoint the USG for Internal Oversight Services with due regard for geographic rotation. Most IGs have no fixed term of service. The USG for Internal Oversight Services shall serve for one fixed term of 5 years without possibility of renewal. An IG appointed by the President and confirmed by the Senate may be removed from office by the President. Likewise, the agency heads who appoint IGs may also remove them. However, for all IGs, the reasons for such removal shall be communicated to the Congress. The USG for Internal Oversight Services may be removed by the Secretary General only for cause and with the approval of the General Assembly. IGs appointed by the President have separate line item accounts in their agencies’ budgets. Agency-appointed IGs’ offices are financed with funds that are available for other agency activities. OIOS budget proposals are submitted to the Secretary General, who submits proposals to the General Assembly for its consideration and approval, taking into account the office’s independence in the exercise of its functions. OIOS is a separate line item in the U.N. Secretariat’s budget. (continued) Each IG gives particular regard to the activities of the Comptroller General, with a view toward avoiding duplication and ensuring effective coordination and cooperation. OIOS shall coordinate its activities and provide the Board of Auditors and the Joint Inspection Unit with OIOS reports that have been submitted to the Secretary General and the comments of the Secretary General on them. Each IG shall comply with audit standards established by the Comptroller General. Also, the IGs have established quality standards for investigations and inspections. There is no requirement that OIOS establish guidelines and standards appropriate to the United Nations for any of its functions. 
Create independent and objective units to conduct and supervise audits and promote economy, efficiency, and effectiveness of programs and operations. Examine, review, and appraise the use of financial resources. Ascertain compliance of program managers with financial and administrative regulations and rules. Undertake management audits, reviews, and surveys. Monitor the effectiveness of internal control systems. Create independent and objective units to conduct and supervise investigations of programs and operations. Prevent and detect fraud and abuse in programs and operations. Investigate reports of violations of U.N. regulations, rules, and administrative documents. Assess the potential within program areas for fraud and other violations through the analysis of systems of control. While inspection is not specifically mandated by law, many IGs perform a similar function. Conduct inspections of organizational units whenever there are indications that programs are not adequately managed or executed and that resources are not being efficiently used. While monitoring is not specifically mandated by law, many IGs perform a similar function. Monitor program implementation and ensure that monitoring is viewed as managerial responsibility. While evaluation is not specifically mandated by law, many IGs perform a similar function. Conduct evaluations of U.N. programs to assess the efficiency and effectiveness of the implementation of programs and legislative mandates. Encourage self-evaluation by program managers and provide them with methodological support. Each IG is required to keep the head of the agency and the Congress fully and currently informed by means of reports and otherwise. Written reports communicate the results of audits to officials at all levels of government and, unless restricted by law or regulation, copies should be made available for public inspection. 
OIOS shall submit reports that provide insight into the effective use and management of resources and the protection of assets to the Secretary General, who shall ensure that all such reports are made available to the General Assembly as presented, together with any separate comments the Secretary General may deem appropriate. Each IG shall prepare semiannual reports summarizing the activities of the IG’s office during the preceding 6-month period. These reports are furnished to the agency head for transmittal to the Congress, together with a report by the agency head. OIOS shall submit an annual analytical and summary report on OIOS activities to the Secretary General, who shall ensure that such reports are made available to the General Assembly as presented, together with any separate comments the Secretary General may deem appropriate. (continued) IGs prepare semiannual reports that include information on the status of audit recommendations. These reports are submitted to agency heads for transmittal to the Congress. OIOS shall report to the Secretary General as necessary, but at least twice yearly, on the implementation of recommendations. Recommendation follow-up Agency heads are responsible for designating a top management official to oversee audit follow-up, including resolution and corrective action. The Secretary General shall facilitate the prompt and effective implementation of OIOS recommendations and inform the General Assembly of actions taken in response to recommendations. IGs are responsible for reviewing responses to audit reports and reporting significant disagreements to the audit follow-up official. The USG for Internal Oversight Services shall report to the Secretary General for a final decision on recommendations with which the program managers concerned do not agree. Each IG shall report expeditiously to the Attorney General whenever the IG has reasonable grounds to believe a federal criminal law has been violated. 
U.N. OIOS: Disciplinary and/or jurisdictional proceedings are initiated without undue delay in cases where the Secretary General considers it justified.

Protection for complainants
U.S. IGs: Unless the IG determines disclosure is unavoidable, the IG shall not disclose the identity of employees who report possible violations of law, gross waste of funds, and abuse of authority without consent. Also, employees are to be protected from reprisal for making a complaint to the IG.
U.N. OIOS: The Secretary General is to ensure that procedures are in place to provide for direct confidential access of staff members to OIOS, provide protection against repercussions for staff members who provide information, and protect the anonymity of staff members.

Relationship with management
U.S. IGs: The "Inspectors General Vision Statement" states that IGs will work with agency heads and the Congress to improve program management and to build relationships with program managers based on a shared commitment to improving program operations and effectiveness.
U.N. OIOS: OIOS may advise program managers on the effective discharge of their responsibilities, provide assistance to program managers in implementing recommendations, ascertain that program managers are given methodological support, and encourage self-evaluation. The USG for Internal Oversight Services shall exercise the degree of latitude and control over OIOS personnel and resources that is necessary to achieve the objectives of the office.

Sources of information on U.S. offices of inspectors general: Inspector General Act of 1978, as amended; Quality Standards for Investigations, President's Council on Integrity and Efficiency (Washington, D.C.: 1985); Office of Management and Budget Circular A-50, "Audit Follow-up," revised (Washington, D.C.: Sept. 29, 1992); Quality Standards for Inspections, President's Council on Integrity and Efficiency (Washington, D.C.: Mar. 1993); Action Needed to Strengthen OIGs at Designated Federal Entities (GAO/AIMD-94-39, Nov.
30, 1993); Inspectors General Vision Statement (Washington, D.C.: Jan. 1994); Government Auditing Standards: 1994 Revision (GAO/OCG-94-4, June 1994). Sources of information on the U.N. Office of Internal Oversight Services: U.N. General Assembly Resolution 48/218B, July 29, 1994; U.N. Secretary General's Bulletin ST/SGB/273, September 7, 1994.

While the overall regular budget for the U.N. Secretariat has decreased since the 1994-95 biennium, OIOS' budget increased by 55 percent. Five of the 12 other functions had budget decreases over the period. Concerning staffing, while several U.N. Secretariat functions had staffing increases from the 1994-95 to the 1996-97 biennium, all except OIOS experienced staff decreases in the proposed 1998-99 biennium budget. Tables IV.1 and IV.2 compare OIOS' budget and staffing levels, respectively, with those of other U.N. Secretariat functions.

Mark C. Speight

Pursuant to a congressional request, GAO reviewed the operations of the United Nations (U.N.)
Office of Internal Oversight Services (OIOS), focusing on whether OIOS: (1) is operationally independent; (2) has the necessary resources to carry out its mission; and (3) has written policies and procedures in place for conducting its work, following up on its recommendations, and providing confidentiality to informants and protecting whistleblowers from possible reprisal. GAO noted that its lack of direct audit authority resulted in certain limitations and restricted its ability to fully address the review objectives. GAO noted that: (1) OIOS is the internal oversight mechanism for the U.N. Secretary General; (2) although OIOS had some start-up and early operational problems, many of these seem to have been resolved; (3) this was difficult to do in an organizational environment that operated without effective internal oversight mechanisms for almost half a century; (4) in less than 3 years, OIOS has assimilated four preexisting, internal oversight units from the Office for Inspections and Investigations and, for the first time, hired professional investigators and provided other resources for an investigations unit in the United Nations; (5) OIOS' mandate, the Secretary General's Bulletin establishing OIOS, and OIOS' implementing procedures provide the framework for an operationally independent, internal oversight mechanism for the U.N. 
Secretariat; (6) however, without access to all its audit, inspection, and investigation reports, working papers, and other records and files related to OIOS work, GAO could not test whether OIOS exercised its authority and implemented its procedures in an independent manner; (7) one issue that may affect the appearance of OIOS' independence involves how it has implemented its reporting mechanism; (8) OIOS has provided only 39 of its 162 various reports to the Secretary General and the General Assembly or its committees; (9) initial concerns about inadequate budget and staff levels have been addressed; (10) since its establishment, OIOS' regular U.N. budget has increased from $12 million to $18.6 million (proposed for 1998-99), and its authorized positions have increased by 18, to a total of 123; (11) OIOS' audit division and the Investigations Section have developed written auditing and investigative policies and procedures; (12) however, the Central Monitoring and Inspection Unit and the Central Evaluation Unit do not have comparable manuals; (13) each OIOS unit tracks its recommendations and is responsible for determining when they should be closed out; (14) in its 1995 and 1996 annual reports, the Under Secretary General (USG) for Internal Oversight Services estimated OIOS had identified $35.5 million in potential recoveries and realized $19.8 million in savings and recoveries; (15) OIOS' Investigations Section has established procedures and developed guidance, which it has publicized throughout the United Nations, for ensuring informants' confidentiality and protecting whistleblowers from reprisal; and (16) in discussions with the USG for Internal Oversight Services, GAO suggested several ways to enhance OIOS' future operations.
Since September 11, 2001, there has been broad acknowledgment by the federal government, state and local governments, and a range of independent research organizations of the need for a coordinated intergovernmental approach to allocating the nation’s resources to address the threat of terrorism and improve our security. This coordinated approach includes developing national guidelines and standards and monitoring and assessing preparedness against those standards to effectively manage risk. The National Strategy for Homeland Security (National Strategy), released in 2002 following the proposal for DHS, emphasized a shared national responsibility for security involving close cooperation among all levels of government and acknowledged the complexity of developing a coordinated approach within our federal system of government and among a broad range of organizations and institutions involved in homeland security. The national strategy highlighted the challenge of developing complementary systems that avoid unintended duplication and increase collaboration and coordination so that public and private resources are better aligned for homeland security. The national strategy established a framework for this approach by identifying critical mission areas with intergovernmental initiatives in each area. For example, the strategy identified such initiatives as modifying federal grant requirements and consolidating funding sources to state and local governments. The strategy further recognized the importance of assessing the capability of state and local governments, developing plans, and establishing standards and performance measures to achieve national preparedness goals. Recent reports by independent research organizations have highlighted the same issues of the need for intergovernmental coordination, planning, and assessment. 
For example, the fifth annual report of the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction (the Gilmore Commission) also emphasizes the importance of a comprehensive, collaborative approach to improve the nation's preparedness. The report states that there is a need for a coordinated system for the development, delivery, and administration of programs that engage a broad range of stakeholders. The Gilmore Commission notes that preparedness for combating terrorism requires measurable demonstrated capacity by communities, states, and the private sector to respond to threats with well-planned, well-coordinated, and effective efforts by all participants. The Gilmore Commission recommends a comprehensive process for establishing training and exercise standards for responders that includes state and local response organizations on an ongoing basis. The National Academy of Public Administration's recent panel report also notes the importance of coordinated and integrated efforts at all levels of government and in the private sector to develop a national approach to homeland security. Regarding assessment, the report recommends establishing national standards in selected areas and developing impact and outcome measures for those standards. The creation of DHS was an initial step toward reorganizing the federal government to respond to some of the intergovernmental challenges identified in the national strategy. The reorganization consolidated 22 agencies with responsibility for domestic preparedness functions to, among other things, enhance, through grants, the ability of the nation's police, fire, and other first responders to respond to terrorism and other emergencies. Many aspects of DHS's success depend on its maintaining and enhancing working relationships within the intergovernmental system, as the department relies on state and local governments to accomplish its mission.
The Homeland Security Act contains provisions intended to foster coordination among levels of government, such as the creation of the Office of State and Local Government Coordination and ONCRC. The Homeland Security Act established ONCRC within DHS to oversee and coordinate federal programs for, and relationships with, state, local, and regional authorities in the National Capital Region. Pursuant to the act, ONCRC's responsibilities include

coordinating the activities of DHS relating to NCR, including cooperating with the Office for State and Local Government Coordination;

assessing and advocating for resources needed by state, local, and regional authorities in NCR to implement efforts to secure the homeland;

providing state, local, and regional authorities in NCR with regular information, research, and technical support to assist their efforts in securing the homeland;

developing a process for receiving meaningful input from state, local, and regional authorities and the private sector in NCR to assist in the development of the federal government's homeland security plans and activities;

coordinating with federal agencies in NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of the federal role in domestic preparedness activities;

coordinating with federal, state, and regional agencies and the private sector in NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities; and

serving as a liaison between the federal government and state, local, and regional authorities, and private sector entities in NCR to facilitate access to federal grants and other programs.
The act also requires ONCRC to submit an annual report to Congress that includes the identification of resources required to fully implement homeland security efforts in NCR, an assessment of the progress made by NCR in implementing homeland security efforts, and recommendations to Congress regarding the additional resources needed to fully implement those efforts. The first ONCRC Director served from March to November 2003, and the Secretary of DHS appointed a new Director on April 30, 2004. ONCRC has a small staff, including full-time and contract employees and staff on detail to the office. NCR is a complex multijurisdictional area comprising the District of Columbia and surrounding counties and cities in the states of Maryland and Virginia and is home to the federal government, many national landmarks, and military installations. Coordination within this region presents the challenge of working with eight NCR jurisdictions that vary in size, political organization, and experience with managing emergencies. The largest municipality in the region is the District of Columbia, with a population of about 572,000. However, the region also includes large counties, such as Montgomery County, Maryland, with a total population of about 873,000, incorporating 19 municipalities, and Fairfax County, Virginia, the most populous jurisdiction (about 984,000), which is composed of nine districts. NCR also includes smaller jurisdictions, such as Loudoun County and the City of Alexandria, each with a population below 200,000. The region has significant experience with emergencies, including natural disasters such as hurricanes, tornadoes, and blizzards, and terrorist incidents such as the attacks of September 11, 2001, and subsequent events, and the sniper incidents of the fall of 2002. For more details on the characteristics of the individual jurisdictions, see table 1.
In fiscal years 2002 and 2003, Congress provided billions of dollars in grants to state and local governments to enhance the ability of the nation's first responders to prevent and respond to terrorism events. We reviewed 16 of the funding sources available for use by first responders and emergency managers that were targeted for improving preparedness for terrorism and other emergencies. In fiscal years 2002 and 2003, these grant programs, administered by DHS, the Department of Health and Human Services (HHS), and the Department of Justice, awarded about $340 million to the District of Columbia, Maryland, Virginia, and state and local emergency management, law enforcement, fire departments, and other emergency response agencies in NCR. Table 2 shows the individual grant awards to the jurisdictions. The funding sources we reviewed include a range of grants that can be used for broad purposes, such as ODP's State Homeland Security Grant Program and the Federal Emergency Management Agency (FEMA) Emergency Management Performance Grant, as well as more targeted grants for specific disciplines, such as FEMA's Assistance to Firefighters Grant and HHS's Bioterrorism Preparedness Grants. While some of these grants are targeted to different recipients, many of them can be used to fund similar projects and purposes. For example, there are multiple grants that can be used to fund equipment, training, and exercises. We have previously reported that the fragmented delivery of federal assistance can complicate coordination and integration of services and planning at state and local levels. Multiple fragmented grant programs can create a confusing and administratively burdensome process for state and local officials seeking to use federal resources for homeland security needs. In addition, many of these grant programs have separate administrative requirements, such as applications, and different funding and reporting requirements.
In fiscal year 2004, in an effort to reduce the multiplicity of separate funding sources and to allow greater flexibility in the use of grants, several ODP State and Local Domestic Preparedness grants, which were targeted for separate purposes such as equipment, training, and exercises, were consolidated into a single funding source and renamed the State Homeland Security Grant Program. In addition, four FEMA grants (Citizen Corps, Community Emergency Response Teams, Emergency Operations Centers, and State and Local All-Hazards Emergency Operations Planning) now have a joint application process; the same program office at FEMA administers these grants. Overall, NCR jurisdictions used the 16 funding sources we reviewed to address a wide variety of emergency preparedness activities such as (1) purchasing equipment and supplies; (2) training first responders; (3) planning, conducting, and evaluating exercises; (4) planning and administration; and (5) providing technical assistance. Table 3 shows the eligible uses for each of the 16 grants. Of the $340 million awarded for the 16 funding sources, the two largest funding sources—which collectively provided about $290.5 million (85 percent) in federal funding to NCR—were the Fiscal Year 2002 Department of Defense (DOD) Emergency Supplemental Appropriation and the Fiscal Year 2003 Urban Area Security Initiative. Both of these sources fund a range of purposes and activities such as equipment purchases, including communications systems; training and exercises; technical assistance; and planning. The Fiscal Year 2002 DOD Emergency Supplemental Appropriation, which was provided in response to the attacks of September 11, 2001, provided approximately $230 million to enhance emergency preparedness. Individual NCR jurisdictions independently decided how to use these dollars and used them to fund a wide array of purchases to support first responders and emergency management agencies. 
Our review of the budgets for this appropriation submitted by NCR jurisdictions showed that many of these grant funds were budgeted for communications equipment and other equipment and supplies. Table 4 provides examples of major projects funded by each jurisdiction with these funds. In 2003, DHS announced a new source of funding targeted to large urban areas under UASI to enhance the ability of metropolitan areas to prepare for and respond to threats or incidents of terrorism. This initiative included a total of $60.5 million to NCR, which was one of seven metropolitan areas included in the initial round of funding. The cities were chosen by applying a formula based on a combination of factors, including population density, critical infrastructure, and threat/vulnerability assessment. UASI’s strategy for NCR includes plans to fund 21 individual lines of effort for the region in the areas of planning, training, exercises, and equipment. In addition, funds are provided for administration and planning and to reimburse localities for changing levels of homeland security threat alerts. Table 5 summarizes the planned use of the UASI funds. Effectively managing first responder federal grant funds requires the ability to measure progress and provide accountability for the use of public funds. As with other major policy areas, demonstrating the results of homeland security efforts includes developing and implementing strategies, establishing baselines, developing and implementing performance goals and data quality standards, collecting reliable data, analyzing the data, assessing the results, and taking action based on the results. This strategic approach to homeland security includes identifying threats and managing risks, aligning resources to address them, and assessing progress in preparing for those threats and risks. 
Without an NCR baseline on emergency preparedness, a plan for prioritizing expenditures and assessing their benefits, and reliable information on funds available and spent on first responder needs in NCR, it is difficult for ONCRC to fulfill its statutory responsibility to oversee and coordinate federal programs and domestic preparedness initiatives for state, local, and regional authorities in NCR. Regarding first responders, the purpose of these efforts is to be able to address three basic, but difficult, questions: "For what types of threats and emergencies should first responders be prepared?" "What is required—coordination, equipment, training, etc.—to be prepared for these threats and emergencies?" "How do first responders know that they have met their preparedness goals?" NCR is an example of the difficulties of answering the second and third questions in particular. ONCRC and its jurisdictions face three interrelated challenges that limit their ability to jointly manage federal funds in a way that demonstrates increased first responder capacities and preparedness while minimizing inefficiency and unnecessary duplication of expenditures. First and most fundamental is the lack of preparedness standards and of a baseline assessment of existing NCR-wide first responder capacities linked to those standards. As in other areas of the nation generally, NCR does not have a set of accepted benchmarks (best practices) and performance goals that could be used to identify desired goals and determine whether first responders have the ability to respond to threats and emergencies with well-planned, well-coordinated, and effective efforts that involve police, fire, emergency medical, public health, and other personnel from multiple jurisdictions.
The Gilmore Commission's most recent report noted a continuing lack of clear guidance from the federal level about the definition and objectives of preparedness, a process to implement those objectives, and how states and localities will be evaluated in meeting those objectives. The report states the need for a coordinated system for the development, delivery, and administration of programs that engages a broad range of stakeholders. Over the past few years, some state and local officials and independent research organizations have expressed an interest in some type of performance standards or goals that could be used as guidelines for measuring the quality and level of first responder preparedness, including key gaps. However, in discussing "standards" for first responders, it is useful to distinguish among three different types of measures that are often lumped together. Functional standards are generally set up to measure such things as functionality, quantity, weight, and extent and, in the context of first responders, generally apply to equipment. Examples include the number of gallons of water per minute that a fire truck can deliver or the ability of a biohazard suit to filter out specific pathogens, such as anthrax. Benchmarks are products, services, or work processes that are generally recognized as representing best practices for the purposes of organizational improvement. An example might be joint training of fire and police for biohazard response—a means of achieving a specific performance goal for responding to biohazard threats and incidents. Performance goals are measurable objectives against which actual achievement may be compared. An example might be the number of persons per hour who could be decontaminated after a chemical attack. Realistic training exercises could then be used to test the ability to meet that objective.
Homeland security standards should include both functional standards and performance goals. In February 2004, DHS adopted its first set of functional standards for protective equipment. The eight standards, previously developed by the National Institute for Occupational Safety and Health (NIOSH) and the National Fire Protection Association (NFPA), are intended to provide minimum requirements for equipment. These standards include NIOSH standards for three main categories of chemical, biological, radiological, and nuclear (CBRN) respiratory protection equipment and five NFPA standards for protective suits and clothing to be used in responding to chemical, biological, and radiological attacks. Performance and readiness standards are more complicated and difficult to develop than functional standards. In a large, diverse nation, not all regions of the nation require exactly the same level of preparedness because, for example, not all areas of the nation face the same types and levels of risks and, thus, first responder challenges. For example, first responder performance goals and needs are likely to be different in New York City and Hudson, New York. Thus, different levels of performance goals may be needed for different types and levels of risk. Recently, the administration has focused more attention on the development of homeland security standards, including the more difficult performance goals or standards. For example, DHS’s recently issued strategic plan makes reference to establishing, implementing, and evaluating capabilities through a system of national standards. Homeland Security Presidential Directive 8 (December 2003) requires the development of a national preparedness goal to include readiness metrics and a system for assessing the nation’s overall preparedness by the fiscal year 2006 budget submission. 
The lack of benchmarks and performance goals may contribute to difficulties in meeting the second challenge in NCR—developing a coordinated regionwide plan for determining how to spend federal funds received and assessing the benefit of that spending. A strategic plan for the use of homeland security funds—whether in NCR or elsewhere—should be based on established priorities, goals, and measures and align spending plans with those priorities and goals. At the time of our review, such a strategic plan had yet to be developed. Although ONCRC had developed a regional spending plan for the UASI grants, this plan was not part of a broader coordinated plan for spending federal grant funds and developing first responder capacity and preparedness in NCR. The former ONCRC Director said that ONCRC and the Senior Policy Group could have a greater role in overseeing the use of other homeland security funds in the future. The third challenge is that there is no established process or means for regularly and reliably collecting and reporting data on the amount of federal funds available to first responders in each of NCR's eight jurisdictions, the planned and actual use of those funds, and the criteria used to determine how the funds would be spent. Reliable data are needed to establish accountability, analyze gaps, and assess progress toward meeting established performance goals. Credible data should also be used to develop and revise plans and to set goals during the planning process. Even if these data were available, the lack of standards against which to evaluate them would make it difficult to assess gaps. It should be noted that the fragmented nature of the multiple federal grants available to first responders—some awarded to states, some to localities, some directly to first responder agencies—may make it more difficult to collect and maintain regionwide data on the grant funds received and the use of those funds in NCR.
Our previous work suggests that this fragmentation in federal grants may reinforce state and local fragmentation and can also make it more difficult to coordinate and use those multiple sources of funds to achieve specific objectives. NCR jurisdictions completed the Office for Domestic Preparedness State Homeland Security Assessment (ODP assessment) in the summer of 2003. At the time of our review, NCR jurisdictions said that they had not received any feedback from ODP or ONCRC on the review of those assessments. Preparedness expectations should be established based on likely threat and risk scenarios and an analysis of the gap between current and needed capabilities based on national guidelines. In keeping with the requirement of the Homeland Security Act that DHS conduct an assessment of threats and state and local response capabilities, risks, and needs with regard to terrorist incidents, DHS developed the ODP State Homeland Security Assessment and Strategy Program. The ODP assessment was aligned with the six critical mission areas in the National Strategy for Homeland Security, and generally followed the structure of a risk management approach. The assessment used the same scenarios for all jurisdictions nationwide, allowing ODP to compare different jurisdictions using the same set of facts and assumptions. Of course, the scenarios used may not be equally applicable to all jurisdictions nationwide. The assessment collected data in three major areas: risk, capability, and needs related to terrorism prevention. The risk assessment portion includes threat and vulnerability assessments. The capability assessment includes discipline-specific tasks for weapons of mass destruction (WMD) events. The needs assessment portion covers five functional areas of planning, organization, equipment, training, and exercises. 
Supporting materials and worksheets on a threat profile, capability to respond to specific WMD, an equipment inventory, and training needs are provided to assist local jurisdictions in completing the assessment. A feedback loop is a key part of a risk management process. It involves evaluating the assessment results to inform decision making and establish priorities; it is not clear how the results of the assessment were used to complete this process for NCR. ONCRC did not present any formal analysis of the gap in capabilities identified by the assessment, and several NCR jurisdictions said they did not receive any feedback on the results of the assessment for their individual jurisdictions. The former ONCRC Director said that the results of the assessment for each of the NCR jurisdictions were combined to establish priorities and develop the strategy for the use of the UASI funds, but he did not provide any information on how the individual assessments were combined or the methodology used to analyze the assessment results. While the former Director said the results of the assessment were used to develop the plan for the use of the UASI funds within NCR, he said that they were not applied beyond that one funding source to establish priorities for the use of other federal grants. While the NCR jurisdictions had emergency coordination practices and procedures, such as mutual aid agreements, in place long before September 11, 2001, the terrorist attacks and subsequent anthrax events in NCR highlighted the need for better coordination and communication within the region. As a result, WashCOG developed a regional emergency coordination plan (RECP) to facilitate coordination and communication for regional incidents or emergencies. While this new plan and the related procedures represent efforts to improve coordination, more comprehensive planning would include a coordinated regional approach for the use of federal homeland security funds.
NCR is one of the first regions in the country to prepare a regional emergency coordination plan. The plan is intended to provide structure through which the NCR jurisdictions can collaborate on planning, communication, information sharing, and coordination activities before, during, and after a regional emergency. RECP, which is based on FEMA’s Federal Response Plan, identifies 15 specific regional emergency support functions, including transportation, hazardous materials, and law enforcement. The Regional Incident Communication and Coordination System (RICCS), which is included in the WashCOG plan, provides a system for WashCOG members, the state of Maryland, the Commonwealth of Virginia, the federal government, public agencies, and others to collaborate in planning, communicating, sharing information, and coordinating activities before, during, and after a regional incident or emergency. RICCS relies on multiple means of communication, including conference calling, secure Web sites, and wireless communications. The system has been used on several occasions to notify local officials of such events as a demonstration in downtown Washington, D.C., and the October 2002 sniper incidents. For example, RICCS allowed regional school systems to coordinate with one another regarding closure policies during the sniper events. Our work in NCR found that no regional coordination methods have been developed for planning for the use of 15 of the 16 funding sources we reviewed. While the region has experience with working together for regional emergency preparedness and response, NCR officials told us that they have not worked together to develop plans and coordinate expenditures for the use of federal funds. Most NCR jurisdictions did not have a formal overall plan for the use of these funds within their individual jurisdictions. 
In addition, while the grant recipients are required to report to the administering federal agencies on each individual grant, DHS and ONCRC have not implemented a process to collect and analyze the information reported for NCR as a whole. The one exception to this lack of coordination is UASI, for which ONCRC developed a regional plan for the use of the funds. Internal control standards support developing documentation, such as plans, to assist in controlling management operations and making decisions. Without this type of documentation, it is difficult for ONCRC to monitor the overall use of funds within NCR and to evaluate their effectiveness and plan for future use of grant funds. While some NCR and ONCRC officials said that there was a need for DHS and the NCR jurisdictions to establish controls over how emergency preparedness grant funds are used in the region, they did not indicate any plans to do so. Within NCR, planning for the use of federal emergency and homeland security grant funds is generally informal and is done separately by each of the NCR jurisdictions. Most of the jurisdictions told us that they have undocumented or informal plans for the uses of the federal grant monies for emergency preparedness activities. Only two jurisdictions have formal written plans that indicate how the jurisdiction would use its federal homeland security grants. NCR states and local jurisdictions had various budgets for uses of emergency preparedness grant funds they received from fiscal year 2002 through fiscal year 2003. However, they did not coordinate with one another in defining their emergency preparedness needs, in developing their budgets, or in using the federal grant funds to avoid unnecessary duplication of equipment and other resources within the region. 
In general, budgeting for the use of federal emergency preparedness grants was done on a grant-by-grant basis within each jurisdiction and was largely based on requests from first responder and emergency management officials. Budgets indicate how the individual jurisdictions intend to spend funds from a specific grant but do not show whether that planned spending is based on any strategic plan or set of priorities. One Maryland county developed an overall plan for the use of federal homeland security and emergency preparedness grants. The July 1, 2003, homeland security strategy outlined the priorities for the county in using federal emergency preparedness grant funds. However, it did not specify grants or amounts for each of the initiatives. The priorities for such funding were focused on equipping and training its first responders; conducting exercises and drills for its government employees; training other essential and critical government workers, as well as the citizens and residents of the county; working vigorously to implement recommendations from its Homeland Security Task Force; and solidifying the county’s relationships with other federal, state, and regional homeland security entities. While officials from other NCR jurisdictions do not have a formal plan, some have established a process for reviewing proposals for the use of the homeland security grants. For example, one Northern Virginia jurisdiction recently adopted a planning process in which its Emergency Management Coordination Committee, composed of the county’s senior management team, solicits budget proposals from first responder and emergency management agencies for potential grant funds. This committee then makes funding recommendations based upon a review of these proposals and its funding priorities for the county. Officials from other jurisdictions described similar processes for developing budget proposals, but they have not developed longer-term or comprehensive strategic plans. 
To determine how the NCR jurisdictions used the funds, we reviewed the use of funds from the Fiscal Year 2002 Department of Defense Supplemental Appropriation, which was the largest source of funding for the period of our review. Each NCR jurisdiction used those funds to buy emergency equipment for first responders. However, officials said they did not coordinate on planning for these expenditures with the other NCR jurisdictions. For example, five of the eight NCR jurisdictions planned to either purchase or upgrade their command vehicles. One of the jurisdictions allocated $310,000 for a police command bus and $350,000 for a fire and rescue command bus; a neighboring jurisdiction allocated $350,000 for a mobile command unit for its fire department; another jurisdiction allocated $500,000 for a police command vehicle replacement; a nearby jurisdiction allocated $149,000 to upgrade its incident command vehicle; and its neighboring jurisdiction allocated $200,000 to modify and upgrade its mobile command van. In another example, four nearby jurisdictions allocated grant funds for hazardous materials response vehicles or hazardous materials supplies: $155,289 for one jurisdiction’s rapid hazmat unit, $355,000 for a neighboring jurisdiction’s hazardous materials response vehicle, $550,000 for a jurisdiction’s fire and rescue hazmat unit vehicle, and $115,246 for a jurisdiction’s hazardous materials supplies. While such purchases might not be duplicative, discussions among neighboring jurisdictions could have facilitated planning and helped determine whether these purchases were necessary or whether the equipment purchased could be shared among the jurisdictions, thereby freeing up grant dollars for other needed equipment to create greater combined capacity within the region. Maximizing the use of resources entails avoiding unnecessary duplication wherever possible. 
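The scale of these potentially overlapping allocations can be tallied from the figures cited above. The following is a simple illustrative sketch, not part of our audit methodology; it assumes only the dollar amounts reported to us, grouped into the two categories used in the examples (command vehicles and hazardous materials equipment):

```python
# Illustrative tally of the allocations cited in the report's examples.
# Amounts are the reported figures; jurisdiction names are omitted.

command_vehicle_allocations = [310_000, 350_000, 350_000, 500_000, 149_000, 200_000]
hazmat_allocations = [155_289, 355_000, 550_000, 115_246]

command_total = sum(command_vehicle_allocations)  # 1,859,000
hazmat_total = sum(hazmat_allocations)            # 1,175,535

print(f"Command vehicles: ${command_total:,}")
print(f"Hazardous materials: ${hazmat_total:,}")
print(f"Combined: ${command_total + hazmat_total:,}")  # $3,034,535
```

Roughly $3 million in similar equipment purchases across neighboring jurisdictions, even if not strictly duplicative, illustrates why coordinated planning and resource sharing could free up grant dollars within the region.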
This requires some discussion and general agreement on priorities, roles, and responsibilities among the jurisdictions. Some NCR and ONCRC officials said they believed the NCR jurisdictions could plan better to share resources and work to prevent redundancy while avoiding gaps in inventory. During our review, NCR jurisdictions and federal grantor agencies could not consistently provide data on the 16 grants and funding sources within the scope of our study, such as award amounts, budgets, and financial records. The individual jurisdictions and ONCRC did not have systems in place to identify and account for all federal grants that can be used to enhance domestic preparedness in NCR and elsewhere. The lack of consistently available budget data for all emergency preparedness and homeland security grants limits the ability to analyze and assess the impact of federal funding and to make management decisions to ensure the effective use of federal grant dollars. There is no central source within each jurisdiction or at the federal level to identify all of the emergency preparedness grants that have been allocated to NCR. At the local level, such information is needed to meet legislative and regulatory reporting requirements for federal grant expenditures of $300,000 or more. In addition, each grant has specific reporting requirements, such as quarterly financial status reports, semiannual program progress reports, and related performance information to comply with the Government Performance and Results Act (P.L. 103-62). Moreover, federal grant financial system guidelines require that federal agencies implement systems that include complete, accurate, and prompt generation and maintenance of financial records and transactions. Those federal system requirements also require timely and efficient access to complete and accurate information, without extraneous material, to internal and external parties that require that information. 
We asked ONCRC, the Virginia and Maryland emergency management agencies, and the eight NCR jurisdictions for data on the emergency preparedness grants allocated in fiscal years 2002 and 2003. ONCRC could not provide a complete list of grants allocated to the NCR as a whole, and the state emergency management agencies did not provide complete lists of grants for NCR jurisdictions within their respective states. For example, the Maryland Emergency Management Agency (MEMA) provided data on the federal grants for Montgomery and Prince George’s counties that were allocated through the state. MEMA is not required to oversee grants not allocated through the state and, therefore, it did not provide grant data on all of the federal grants provided to the two counties. Similarly, the Virginia Department of Emergency Management (VDEM) did not provide data on all of the grants to the jurisdictions in Virginia. We compiled grant data for the NCR jurisdictions by combining information received from the NCR jurisdictions and the state emergency management agencies. This involved contacting several different budget officials at the NCR jurisdictions and at the state level. The availability of emergency preparedness grant data at the local level also varied by NCR jurisdiction, and complete data were not readily available. After repeated requests for the grant awards, budgets, and plans over a period of 7 months, NCR jurisdictions or the state emergency management agencies provided us with the grant amounts awarded to them during fiscal years 2002 and 2003. Some jurisdictions provided documentation on amounts awarded but did not provide supporting budget detail for individual grants to substantiate the amounts awarded. Regarding budgets, we obtained a range of information from the NCR jurisdictions. 
Some jurisdictions provided budget documentation on all the federal grants that were allocated to them; others provided budget documentation on some of their grants; and two did not provide any grant budget documentation. This lack of supporting documentation indicates a lack of the financial controls that should be in place to provide accurate and timely data on federal grants. Guidance on financial management practices notes that to effectively evaluate government programs and spending, Congress and other decision makers must have timely, accurate, and reliable financial information on program cost and performance. Moreover, the Comptroller General’s standards for internal control state that “program managers need both operational and financial data to determine whether they are meeting their agencies’ strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources.” These standards stress the importance of this information for making operating decisions, monitoring performance, and allocating resources, and state that “pertinent information is identified, captured, and distributed to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities efficiently and effectively.” Having this information could help NCR officials make informed decisions about the use of grant funds in a timely manner. Without national standards, guidance on likely scenarios for which to be prepared, plans, and reliable data, NCR officials assess their gaps in preparedness based on their own judgment. The lack of standards and consistently available data makes it difficult for NCR officials to use the results of DHS’s ODP assessment to identify the most critical gaps in capacities, to verify the results of the assessment, and to establish a baseline that could then be used to develop plans to address outstanding needs. 
Consequently, it is difficult for us or ONCRC to determine what gaps, if any, remain in the emergency response capacities and preparedness within the NCR. Each jurisdiction provided us with information on its perceived gaps and specific needs for improving emergency preparedness. However, there is no consistent method for identifying these gaps among jurisdictions within NCR. Some officials from NCR jurisdictions said that in the absence of a set of national standards, they use the standards and accreditation guidelines for disciplines such as police, fire, hazardous materials, and emergency management in assessing their individual needs. While these standards may provide some general guidance, some NCR officials said that they need more specific guidance from DHS, including information about threats, guidance on how to set priorities, and standards. Some of the jurisdictions reported that they have conducted their own assessments of need based on their knowledge of threat and risk. Officials from other jurisdictions said they have used FEMA’s Local Capability Assessment for Readiness or the hazardous materials assessment to identify areas for improvement. Several jurisdictions told us that they identify remaining gaps based on requests from emergency responder agencies. Other jurisdictions said that they have established emergency management councils or task forces to review their preparedness needs and begin to develop a more strategic plan for funding those needs. Officials of most NCR jurisdictions commonly identified the need for more comprehensive and redundant communications systems and upgraded emergency operations centers. Some officials of NCR jurisdictions also expressed an interest in training exercises for the region as a whole to practice joint response among the Maryland and Virginia jurisdictions and the District of Columbia. 
DHS and ONCRC appear to have played a limited role in fostering a coordinated approach to the use of federal domestic preparedness funds in NCR. According to the former ONCRC Director, ONCRC has focused its initial coordination efforts on the development of a strategy for the use of the UASI funds of $60.5 million in NCR. However, ONCRC efforts to date have not addressed about $279.5 million in other federal domestic preparedness funding that we reviewed. According to officials from one NCR jurisdiction, they would like additional support and guidance from DHS on setting priorities for the use of federal funds. One of ONCRC’s primary responsibilities is to oversee and coordinate federal programs and domestic preparedness initiatives for state, local, and regional authorities in NCR and to cooperate with and integrate the efforts of elected officials of NCR. ONCRC established a governance structure to receive input from state and local authorities through a Senior Policy Group composed of representatives designated by the Governors of Maryland and Virginia and the Mayor of Washington, D.C. The Senior Policy Group developed the UASI strategy to fund a range of projects that would enhance regional capabilities to improve preparedness and reduce the vulnerability of NCR to terrorist attacks. (See table 5.) According to ONCRC’s former Director, the strategy for UASI was an attempt to force a new paradigm, by developing a regional plan for the use of the funds, with input from outside organizations in addition to representatives from the local jurisdictions. The plan for the $60.5 million allocated funds for projects, including planning, training, equipment, and exercises to benefit the region as a whole, as opposed to allocating funds to meet the individual needs of each NCR jurisdiction separately. 
The former Director said that funding allocations to these regional projects were based on a summary of the results of the assessment that was completed by each NCR jurisdiction. Officials from NCR state and local jurisdictions expressed mixed opinions on the effectiveness of ONCRC. Officials from a Virginia jurisdiction expressed a need for more guidance on how to set priorities and allocate federal domestic preparedness funding. District of Columbia officials said ONCRC has done a good job of coordination and has been very supportive, given its small staff and the newness of the office. Some noted that ONCRC’s role is still evolving. For example, some officials in one jurisdiction said that ONCRC’s long-term mission has not yet been finalized and ONCRC is still in the process of establishing its role within NCR. The officials believe that ONCRC has significant potential for leading and coordinating homeland security efforts in the region. They recommended that ONCRC become a routine part of regional governance and provide guidance to local governments, focus resources, and enhance the ability of localities to work together to implement homeland security strategies. The officials noted that ONCRC’s efforts were motivated primarily by the leadership of the Director and had not become routine. We discussed NCR officials’ views with the former ONCRC Director. He acknowledged that ONCRC’s initial efforts to coordinate the use of federal grant funds in NCR concentrated on implementing UASI. He said that UASI presented an improvement over previous funding allocations in NCR because funds were allocated on a regional basis, based on the results of an assessment of NCR preparedness levels and requirements rather than on individual jurisdictions’ perceptions of their needs. The Director said that ONCRC could consider coordinating other federal programs in addition to UASI, but he did not indicate any concrete plans to do so. 
The nation’s ongoing vulnerability to terrorist attacks after September 11, 2001, is magnified in NCR because it is the location of critical government infrastructure, national and international institutions, and significant landmarks. In addition to NCR, there are several other high-threat urban areas that share similar vulnerabilities, and improving homeland security is a concern for the entire nation. The challenges faced in NCR, namely a lack of performance standards, baseline information on preparedness, threat and risk scenarios, plans based on those tools, and reliable data to report on the status of initiatives, are fundamental obstacles to achieving desired levels of preparedness. Furthermore, NCR’s complex structure requires working with individual political jurisdictions with varying experience in managing homeland security funds and responding to emergencies. This adds to the challenge of developing and implementing a coordinated plan for enhancing first responder capacity. Effective regional and local management of the large amounts of available homeland security funding is an important element in improving our national preparedness. However, it is difficult for regional coordinators and local jurisdictions to avoid duplication and inefficiency in the procurement of goods and services without knowledge of all the grants that can be leveraged to fight the terror threat; without centralized, standard records to account for the use of those grants; and without a coordinated regional plan for using those funds. It is also difficult to target funding in a way that ensures it is used for goods and services that enhance preparedness and response without current threat information or scenarios and standards that reflect performance goals for preparedness and response. 
The approach taken in planning for the use of the UASI funds, with its emphasis on regional allocations, is a step toward improved coordination that could provide a more rational and effective method for enhancing emergency preparedness within NCR. In addition, DHS’s recently released strategic plan and the endorsement of standards for equipment represent steps toward addressing some of the challenges noted in this report. However, more needs to be done to develop plans, monitor the use of funds, and assess performance against goals and standards to evaluate progress toward improved homeland security. To help ensure that emergency preparedness grants and associated funds are managed in a way that maximizes their effectiveness, we recommend that the Secretary of the Department of Homeland Security take the following three actions in order to fulfill the department’s statutory responsibilities in the NCR: work with the NCR jurisdictions to develop a coordinated strategic plan to establish goals and priorities for enhancing first responder capacities that can be used to guide the use of federal emergency preparedness funds; monitor the plan’s implementation to ensure that funds are used in a way that promotes effective expenditures that are not unnecessarily duplicative; and identify and address gaps in emergency preparedness and evaluate the effectiveness of expenditures in meeting those needs by adapting standards and preparedness guidelines based on likely scenarios for NCR and conducting assessments based on them. On April 29, 2004, we provided a draft of this report to the Secretary of DHS and to ONCRC’s Senior Policy Group for comment. On May 19, 2004, we received comments from DHS’s GAO/OIG Liaison and the Senior Policy Group that are reprinted in appendixes III and IV, respectively. 
DHS and the Senior Policy Group generally agreed with our recommendations but also stated that NCR jurisdictions had worked cooperatively to identify opportunities for synergies and lay a foundation for meeting the challenges noted in the report. DHS and the Senior Policy Group also agreed that there is a need to continue to improve preparedness by developing more specific and improved preparedness standards, clearer performance goals, and an improved method for tracking regional initiatives. In addition, DHS identified the following concerns: DHS stated that the report demonstrated a fundamental misunderstanding regarding homeland security grant programs in NCR and the oversight role and responsibilities of ONCRC. DHS stated that GAO fails to distinguish between funds provided to specific jurisdictions for local priorities and enhancements and funds intended to address regional needs. We disagree. The responsibilities of ONCRC are outlined in the Homeland Security Act and on page 8 of this report. These include activities such as coordinating with federal, state, and regional agencies and the private sector to ensure adequate planning and execution of domestic preparedness activities among these agencies and entities, and assessing and advocating for resources that state, local, and regional authorities in the NCR need to implement efforts to secure the homeland. The responsibilities further require an annual report to Congress that identifies resources required to implement homeland security efforts in NCR, assesses progress made in implementing these efforts, and makes recommendations regarding additional resources needed. 
In order to fulfill this mandate, ONCRC needs information on how all grant monies have been used, not just those designated specifically for regional purposes, information on how those expenditures have enhanced first responder capacity in the region, and an ability to coordinate all federal domestic preparedness funding sources to NCR. DHS noted that our report recognizes the importance of a coordinated regionwide plan for establishing first responder goals, needs, and priorities and assessing the benefits of all expenditures to enhance first responder capabilities, and our review found that no such coordination methods have been developed. DHS stated that this task is accomplished by the formal NCR Review and Recommendation Process, adopted on February 4, 2004, which ensures coordination of resources among all jurisdictions within NCR. DHS provided us information on this process at our exit conference on April 15, 2004. DHS explained that the Review and Recommendation Process was developed for the UASI program, and ONCRC and NCR officials are in the process of extending it to additional federal programs. While this process could be used to facilitate the development of a regional plan in the future, the process has not included a review of how federal grants have already been used or the development of a coordinated regional plan for establishing needs and priorities and assessing benefits of all federal domestic preparedness programs. Finally, the comments noted a correction to our draft regarding the establishment of the Senior Policy Group, and we have revised the report accordingly. As agreed with your office, unless you release this report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies to relevant congressional committees and subcommittees, to the Secretary of Homeland Security, to members of the NCR Senior Policy Group, and to other interested parties. 
We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or wish to discuss it further, please contact me at (202) 512-8777 or Patricia A. Dalton, Director, at (202) 512-6737. Key contributors to this report are listed in appendix V. We met with and obtained documentation from officials of the Department of Homeland Security (DHS), the Federal Emergency Management Agency (FEMA), and the Office for Domestic Preparedness; the Metropolitan Washington Council of Governments (WashCOG); the homeland security advisers and officials from the emergency management agencies for the District of Columbia, Maryland, and Virginia; and first responder officials from the National Capital Region (NCR) jurisdictions, including the District of Columbia; the city of Alexandria; the counties of Arlington, Fairfax, Loudoun, and Prince William in Virginia; and Montgomery and Prince George’s counties in Maryland. To determine what federal funds have been provided to local jurisdictions for emergency preparedness, for what specific purposes, and from what sources, we met with officials from DHS’s Office for National Capital Region Coordination (ONCRC), ONCRC’s Senior Policy Group, FEMA, homeland security advisers for the District of Columbia, Maryland, and Virginia, and first responders from eight jurisdictions within NCR—the District of Columbia; the city of Alexandria; and Arlington, Fairfax, Loudoun, Prince William, Montgomery, and Prince George’s counties. We identified 25 emergency preparedness programs that provided funding to NCR, and we selected 16 for our detailed review. 
These 16 programs were selected to cover a range of programs, including the largest funding sources; grants provided for general purposes, such as equipment and training; and grants provided for specific purposes, such as fire prevention and bioterrorism. We obtained and reviewed the emergency preparedness grant data for the period of October 1, 2001, through September 30, 2003, including grant awards, budgets, and detailed plans for purchases, such as equipment and supplies, communications, and training and exercises. To the extent possible, we independently verified the data we received on funds available and the planned and actual use of those funds by comparing federal, state, and local data sources. Our review revealed a lack of consistent data reported by the jurisdictions in the region and the lack of a central source for such data. For example, NCR state and local jurisdictions varied in their ability to provide budget information on the emergency preparedness and homeland security grants they received. Also, DHS and ONCRC do not have systems in place to account for all federal homeland security and emergency preparedness grants covering their respective jurisdictions. To determine the regional coordination practices and remaining challenges to implementing regional preparedness programs in NCR, we met with officials from WashCOG, DHS, Virginia, Maryland, and local NCR jurisdictions. Oral and documentary evidence obtained from these officials provided us with an overall perspective on the status of coordination for homeland security within the region and remaining challenges to implementing effective homeland security measures in NCR. We also talked with officials about regional programs that have been successfully implemented in NCR. 
To determine the gaps that exist in emergency preparedness in NCR, we obtained oral and documentary information from officials of the Metropolitan Washington Council of Governments; DHS; the District of Columbia, Maryland, and Virginia emergency management agencies; homeland security advisers; and local first responders. Our discussions with these officials provided us with their views of the state of preparedness in NCR. We also obtained information from these officials regarding their plans to address those emergency preparedness gaps. In addition, we reviewed relevant reports, studies, and guidelines to provide context for assessing preparedness. However, there are no uniform standards or criteria by which to measure gaps, and self-reported information from local jurisdictions may not be objective. To determine DHS’s role in enhancing the preparedness of NCR through coordinating the use of federal emergency preparedness grants, assessing preparedness, providing guidance, targeting funds to enhance preparedness, and monitoring the use of those funds, we met with DHS officials, as well as with state homeland security advisers, state emergency management officials, and local first responders. We obtained and analyzed oral and documentary evidence on the ODP assessment completed by the NCR jurisdictions and how that assessment was used, as well as other actions DHS had taken to facilitate homeland security coordination within NCR. Finally, we contacted the District of Columbia Auditor, the Maryland Office of Legislative Audits, and the Virginia Joint Legislative Audit and Review Commission to inform them of our review and determine if the agencies had related past or ongoing work. None of the agencies had conducted or planned to conduct reviews of emergency preparedness or homeland security in the NCR. We conducted our review from June 2003 to February 2004 in accordance with generally accepted government auditing standards. 
NCR jurisdictions over the years have implemented various mechanisms to ensure planned and coordinated interjurisdictional approaches to the activities of first responders and other public safety professionals. These efforts involve the activities of regional planning and coordinating bodies, such as the Metropolitan Washington Council of Governments (WashCOG), the regional metropolitan planning organization, and mutual assistance agreements between the first responders of neighboring NCR jurisdictions. Planning and coordinating bodies have existed in NCR for many years. WashCOG is a regional entity that includes all the jurisdictions within the region. Other planning and coordinating organizations exist in both Maryland and Virginia. WashCOG is a nonprofit association representing local governments in the District of Columbia, suburban Maryland, and Northern Virginia. Founded in 1957, WashCOG is supported by financial contributions from its 19 participating local governments, federal and state grants and contracts, and donations from foundations and the private sector. WashCOG’s members are the governing officials from local NCR governments, plus area delegation members from the Maryland and Virginia legislatures, the U.S. Senate, and the House of Representatives. According to WashCOG, the council provides a focus for action and develops regional responses to such issues as the environment, affordable housing, economic development, health and family concerns, human services, population growth, public safety, and transportation. The full membership, acting through its board of directors, sets WashCOG policies. The National Capital Region Preparedness Council is an advisory body that makes policy recommendations to the board of directors and makes procedural and other recommendations to various regional agencies with emergency preparedness responsibilities or operational response authority. The council also oversees the regional emergency coordination plan. 
Other regional coordinating bodies exist in the National Capital Region, including the Northern Virginia Regional Commission (NVRC), the Maryland Terrorism Forum, and the Maryland Emergency Management Assistance Compact. NVRC is one of the 21 planning district commissions in Virginia. A 42-member board of commissioners, composed of elected officials and citizen representatives all appointed by 14 member localities, establishes NVRC’s programs and policies. The commission is supported by annual contributions from its member local governments, by appropriations of the Virginia General Assembly, and by grants from federal and state governments and private foundations. According to an NVRC official, the commission established an emergency management council to coordinate programs, funding issues, and equipment needs. The emergency management council is composed of local chief administrative officers, fire chiefs, police chiefs, and public works managers. In 1998, the Governor of Maryland established the Maryland Terrorism Forum to prepare the state to respond to acts of terrorism, especially those involving weapons of mass destruction. The forum also serves as the key means of integrating all services within federal, state, and local entities as well as key private organizations. The forum’s executive committee, composed of agency directors and cabinet members, provides policy guidance and recommendations to the steering committee, which addresses policy concerns. According to Maryland Emergency Management Agency (MEMA) officials, the forum’s first focus was on planning in terms of equipment interoperability; evacuation planning; and commonality of standards, procedures, and vocabulary. The forum is in the process of hiring a full-time planner for preparedness assessment and strategic planning for the region. 
The terrorist attacks in New York City and on the Pentagon on September 11, 2001, security preparations during the World Bank demonstrations, and the sniper incidents in the summer and fall of 2002 highlighted the need for enhanced mutual cooperation and aid in responding to emergencies. Several NCR jurisdiction public safety officials told us that mutual aid agreements have worked well and are examples of regional programs that have been successfully implemented in NCR. Mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in preparing for and responding to emergencies and disasters. Because individual jurisdictions may not have all the resources they need to acquire equipment and respond to all types of emergencies and disasters, these agreements allow for resources to be regionally distributed and quickly deployed. These agreements provide opportunities for state and local governments to share services, personnel, supplies, and equipment. Mutual aid agreements can be both formal and informal and provide cooperative planning, training, and exercises in preparation for emergencies and disasters. For over 40 years, jurisdictions in the National Capital Region have been supporting one another through mutual aid agreements. According to a WashCOG official, the agency has brokered and facilitated most of these agreements and acts as an informal secretariat for mutual aid issues. According to WashCOG, there are currently 21 mutual aid agreements in force among one or more of the 18 member jurisdictions, covering one or more issues. These can be as broad as a police services support agreement among 12 jurisdictions and as restricted as a two-party agreement relating to control over the Woodrow Wilson Bridge. In September 2001, for example, WashCOG member jurisdictions developed planning guidance for health system response to a bioterrorism event in NCR. 
The purpose of this guidance is to strengthen the health care response systems, allowing them to, among other things, improve early recognition and provide mass care. According to WashCOG, the planning guidance was developed through the cooperative effort of more than 225 individuals representing key government and private elements within NCR that would likely be involved should such an event occur. The Maryland Emergency Management Assistance Compact is a mutual aid compact established to help Maryland’s local jurisdictions support one another with their resources during emergencies and disasters and to facilitate efficient operational procedures. The compact establishes partnerships among local jurisdictions so that resources can be requested and provided in response to emergencies and disasters. In addition to helping local governments and their emergency response agencies make risk management decisions, the compact provides a framework that increases jurisdictions’ access to maximum compensation in federally declared disasters. The compact, established by legislation in June 2002, is modeled after the Emergency Management Assistance Compact, in which 48 states and two U.S. territories participate in interstate mutual aid. In addition to those mentioned above, Ernie Hazera and Amelia Shachoy (Strategic Issues) and Wendy Johnson, Jack Bagnulo, David Brown, and R. Rochelle Burns (Homeland Security and Justice) made key contributions to this report.

Since the tragic events of September 11, 2001, the National Capital Region (NCR), comprising the District of Columbia and surrounding jurisdictions in Maryland and Virginia, has been recognized as a significant potential target for terrorism.
GAO was asked to report on (1) what federal funds have been allocated to NCR jurisdictions for emergency preparedness; (2) what challenges exist within NCR to organizing and implementing efficient and effective regional preparedness programs; (3) what gaps, if any, remain in the emergency preparedness of NCR; and (4) what has been the role of the Department of Homeland Security (DHS) in NCR to date. In fiscal years 2002 and 2003, grant programs administered by the Departments of Homeland Security, Health and Human Services, and Justice awarded about $340 million to eight NCR jurisdictions to enhance emergency preparedness. Of this total, the Office for National Capital Region Coordination (ONCRC) targeted all of the $60.5 million in Urban Area Security Initiative funds for projects designed to benefit NCR as a whole. However, there was no coordinated regionwide plan for spending the remaining funds (about $279.5 million). Local jurisdictions determined the spending priorities for these funds and reported using them for emergency communications equipment, personal protective equipment, and other purchases. NCR faces several challenges in organizing and implementing efficient and effective regional preparedness programs, including the lack of a coordinated strategic plan for enhancing NCR preparedness, performance standards, and a reliable, central source of data on funds available and the purposes for which they were spent. Without these basic elements, it is difficult to assess first responder capacities, identify first responder funding priorities for NCR, and evaluate whether federal funds are being used in a way that maximizes their effectiveness in enhancing first responder capacities, preparedness, and homeland security.
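As a quick arithmetic check on the grant figures above (a sketch in Python; the dollar amounts are taken directly from the report, and the variable names are illustrative), the ONCRC-targeted UASI share plus the locally directed remainder should account for the roughly $340 million awarded:

```python
# Consistency check on the fiscal year 2002-2003 grant figures cited above.
# All amounts in millions of dollars, as stated in the report.
total_awarded = 340.0      # awarded to eight NCR jurisdictions
uasi_targeted = 60.5       # Urban Area Security Initiative funds targeted by ONCRC

# Remainder spent under local, not regionwide, priorities
locally_directed = total_awarded - uasi_targeted
print(f"Funds without a coordinated regionwide spending plan: ${locally_directed:.1f} million")
# -> $279.5 million, matching the "about $279.5 million" figure in the text
```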
There are five major federally authorized projects, comprising more than 350 miles of levees, floodwalls, and other flood control structures across six parishes, that provide hurricane protection in southeastern Louisiana. Although construction of hurricane protection projects in southeastern Louisiana began almost 60 years ago, three major projects were begun about 40 years ago, in the 1960s. Segments of those projects were still incomplete when Hurricane Katrina struck the area in late August 2005. The projects were designed to provide protection from hurricanes with maximum wind speeds of 87 to 115 miles per hour (115 miles per hour being roughly equivalent to a Category 3 hurricane). Hurricane Katrina made landfall with wind speeds equivalent to a Category 3 hurricane, or winds up to 127 miles per hour, and record high storm surge. To determine the extent of the damage to levees and floodwalls caused by Hurricane Katrina, the Corps contracted for an initial assessment in September 2005 and a second assessment in April 2006. Both assessments were based on visual inspections of the levees and floodwalls. For the first assessment, engineers walked the levees and floodwalls in Orleans, Plaquemines, and St. Bernard parishes and looked for damage. The second assessment reexamined only those sections that were initially reported to be undamaged. The first assessment found 169 miles of damaged levees and floodwalls, of which 128 miles were moderately damaged and 41 miles were severely damaged or destroyed. Most of the damage was found in Plaquemines Parish, where 150 miles of levees and floodwalls were damaged. The second assessment of those sections initially found to be undamaged found additional cracks in the levees, soil erosion near floodwalls, and levee heights that had settled below their design elevation.
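The first assessment's mileage figures can be tallied in a short sketch (the figures come from the assessment as described above; the split of damage outside Plaquemines Parish is an inference from those figures, not a number stated in the report):

```python
# Tally of the first damage assessment's findings, in miles of levees and floodwalls.
moderately_damaged = 128
severely_damaged_or_destroyed = 41
total_damaged = moderately_damaged + severely_damaged_or_destroyed
assert total_damaged == 169  # matches the reported 169-mile total

plaquemines = 150  # miles of damage found in Plaquemines Parish
# Inferred remainder, spread across the other inspected parishes
# (Orleans and St. Bernard)
other_parishes = total_damaged - plaquemines
print(f"Damage outside Plaquemines Parish: {other_parishes} miles")
```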
Subsequently, the Corps and the contractors conducted sampling and other tests to determine the extent of the damage, but this was only done where exterior damage—such as cracks, depressions, or seepage—was observed. Both assessments documented obvious external damage but did not indicate whether other structures without visible damage—but similar in design and composition to damaged levees and floodwalls—were, in fact, damaged or weakened. In its May 2006 draft final report, an independent team sponsored by the National Science Foundation reviewed the failures of the hurricane protection projects and concluded that the pervasiveness of problems and failures calls into question the integrity and reliability of other sections of flood protection projects that did not fail during Hurricane Katrina. In its June 2006 draft final report, the Interagency Performance Evaluation Task Force—a team of 150 experts from the Corps and about 50 federal, state, international, academic, and industrial organizations—found that repaired sections of levees and floodwalls were likely the strongest parts of the system until remaining sections could be similarly upgraded and completed. The task force report concluded that since there are many areas where protection levels are only the same as before Hurricane Katrina, the New Orleans metropolitan area remained vulnerable to storm surge and wave conditions equivalent to or greater than Hurricane Katrina. The most severely damaged portions of the hurricane protection projects in the area were found in the three parishes of Orleans, Plaquemines, and St. Bernard. Within these three parishes, there are approximately 243 miles of earthen levees and 26 miles of floodwalls. The 26 miles of floodwalls comprised 19 miles of I-walls and 7 miles of T-walls. I-walls are vertical concrete barriers anchored to levees by steel sheet pile driven vertically into the levees. 
T-walls are vertical concrete barriers with a horizontal concrete base anchored by multiple steel beams driven diagonally into the levees and are stronger than I-walls (see fig. 1). Corps officials told us that T- or L-walls will be constructed to replace floodwalls that were destroyed and need to be replaced. Section 5 of the Flood Control Act of 1941, as amended, commonly referred to as Public Law 84-99, authorizes the Corps to conduct emergency operations and rehabilitation activities when levees fail or are damaged during storms. Under the implementing regulations for Public Law 84-99, after a storm, the Corps may repair and restore federally authorized flood control projects and hurricane protection structures, or nonfederal flood control projects that were inspected and found to have met federal standards for construction and maintenance prior to the flood event. Assistance for the rehabilitation of hurricane protection structures is limited to repair or restoration to the prestorm condition and level of protection (e.g., the prestorm elevation/height of levees, allowing for normal settlement). Under Corps policy, damage to federally constructed levees that have been completed and officially turned over to a nonfederal sponsor is to be repaired with 100 percent of the cost borne by the federal government, and damage to nonfederally constructed levees is to be repaired with 80 percent of the cost borne by the federal government and 20 percent by the local sponsor or government. However, in September 2005, the Corps noted that Hurricane Katrina had caused unprecedented damage and loss of infrastructure in the Gulf Coast region. According to the Corps, damage to the region eroded the tax base to such an extent that local sponsors would have great difficulty funding their share of rebuilding expenses.
In response, the Corps requested a one-time waiver from the Assistant Secretary of the Army for Civil Works from the policy requiring local sponsors to fund 20 percent of the cost of rehabilitating nonfederal flood and hurricane protection projects. For federally authorized projects that were under construction when Hurricane Katrina made landfall, the Corps also requested a waiver from the policy requirement that local sponsors fund a share of the repair cost. In October 2005, the Assistant Secretary of the Army for Civil Works approved both requests. In the December 2005 emergency supplemental, Congress appropriated funding to the Corps to repair levees and flood control structures damaged by Hurricane Katrina to the level of protection for which they were designed, at full federal expense. Most earthen levees are constructed with a mixture of clay and sand. The most commonly used method is to build an earthen embankment sloped on both sides and rising to a flat crown (see fig. 2). Depending on local conditions and the availability of suitable materials, levees can be built in one or more stages. The number of stages generally depends on the ability of the local soil to provide an adequate base without sinking under the weight of the levees and to compact and provide suitable strength. When appropriate conditions exist, levees can be built in a single stage. In other cases, levees may need to be built in stages (also called lifts) that allow for subsidence of the foundation soil or settlement of the fill material. Between stages, the levees are allowed to settle for up to 5 years. Because the soil in southeastern Louisiana has a tendency to settle, historically most levees built in the New Orleans area were required to be built in three to four stages, and construction took 15 to 20 years. Because of the urgency of the repairs that the Corps made after Hurricane Katrina, earthen levees in the New Orleans area had to be rebuilt in only a few months.
To do this, the Corps relied on mechanical compaction by heavy construction equipment to compensate for the normal settlement that would occur over time. Building levees quickly can pose risks, however, as was witnessed on May 30, 2006, when a 400-foot section of a reconstructed levee in Plaquemines Parish slipped 3 to 4 feet under its own weight. Corps officials said the underlying soil was weaker than previous tests had indicated and was unable to support the weight of the newly constructed levee. To provide interim protection, the Corps constructed a small earthen berm on top of the levee to return it to approved design height by June 7, 2006. By June 1, 2006, the Corps planned to complete repairs to 169 miles of southeastern Louisiana hurricane protection projects to prestorm conditions—that is, to repair most levees and floodwalls to the condition they were in before Hurricane Katrina. For 128 miles of levees with minor or moderate damage, the Corps planned to repair or fill scour (erosion) and holes. For 41 miles of levees and floodwalls that had major damage or were completely destroyed, the Corps planned to rebuild the damaged sections entirely, including rebuilding to the original design grade, plus an allowance for settlement. The Corps planned to repair only hurricane-damaged levees and structures and did not plan to repair or replace any existing levees or floodwalls unless exterior damage was observed. The Corps awarded 59 contracts to repair damage in three sections of the city of New Orleans (Orleans East Bank, New Orleans East, and the Inner Harbor Navigation Canal, commonly called the Industrial Canal) and the parishes of Plaquemines and St. Bernard. The following sections briefly describe, for each of these five areas, the location, the damage caused by Hurricane Katrina, and the number of contracts the Corps awarded to complete the repairs.
Orleans East Bank is located south of Lake Pontchartrain, from the 17th Street Canal to the Inner Harbor Navigation Canal, and along the western bank of the Inner Harbor Navigation Canal to the Mississippi River. About 19 miles of levees and floodwalls are along the Orleans Lakefront, the Inner Harbor Navigation Canal, and three drainage canals—17th Street, Orleans Avenue, and London Avenue—which drain rainwater from New Orleans into Lake Pontchartrain (see fig. 3). A total of about one mile of levees and floodwalls was damaged along the 17th Street Canal and two sides of the London Avenue Canal. There was also intermittent minor erosion, and all 13 of the area’s pump stations were damaged. The Corps constructed interim sheet pile walls at the breach sites along the drainage canals and contracted for the construction of permanent T-walls at each of the breach sites. However, the Corps was concerned about the integrity of the canal walls that were not breached during Hurricane Katrina. The Corps chose to construct interim closure structures (gates) where the canals empty into Lake Pontchartrain to keep storm surge from entering the canals during hurricanes and storms. According to Corps officials, the Corps did not have the authority to construct permanent gates, so in late January and early February 2006, the Corps awarded contracts for the construction of three interim gates and 34 pumps along the three drainage canals. A total of 12 contracts were awarded for the Orleans East Bank area. The Inner Harbor Navigation Canal is a 5.5-mile-long waterway that connects the Mississippi River to Lake Pontchartrain. The east and west sides of the Industrial Canal are lined by a total of 12.3 miles of levees and floodwalls (see fig. 4). A total of 5 miles of levees and floodwalls were damaged by Hurricane Katrina along the Inner Harbor Navigation Canal.
Two breaches occurred on the western side of the Inner Harbor Navigation Canal, near the intersection of the Gulf Intracoastal Waterway and the Inner Harbor Navigation Canal, and two separate large breaches occurred on the lower eastern side, resulting in major flooding to New Orleans’ Lower Ninth Ward. The Corps awarded eight contracts to repair and completely rebuild damaged and destroyed levees and floodwalls along the Inner Harbor Navigation Canal. New Orleans East is bounded by the east bank of the Inner Harbor Navigation Canal on the west, Lake Pontchartrain to the north, Bayou Sauvage National Wildlife Refuge to the east, and the Gulf Intracoastal Waterway to the south. The area has 39 miles of exterior levees and floodwalls and eight pump stations (see fig. 5). The hurricane damaged 4.6 miles of levees and floodwalls and all eight pump stations. Ten contracts were awarded to repair this damage. Plaquemines Parish includes long, narrow strips of land on both sides of the Mississippi River between New Orleans and the Gulf of Mexico. The Mississippi River levees protect the parish from floods coming down the river, and the New Orleans to Venice hurricane protection project (portions of which are not yet completed) protects against hurricane- induced tidal surges. The distance between these Gulf-side levees, called back levees, and the Mississippi River levees is less than 1 mile, in most places. Plaquemines Parish has a total of 169 miles of levees and floodwalls and 18 pump stations (see fig. 6). In Plaquemines Parish, a total of 150 miles of levees and floodwalls were damaged along with 18 pump stations. The Corps awarded 20 contracts to repair and rebuild levees and floodwalls damaged by Hurricane Katrina in Plaquemines Parish. According to the Corps, there was considerable erosion scour along the total length of the levees. The Mississippi River levees were also damaged by numerous ships and barges that crashed into them. 
Five of the 6 miles of floodwalls along the Mississippi River were also destroyed but will be replaced with earthen levees because the Corps determined that the underlying foundation could not support the weight of a concrete floodwall. In St. Bernard Parish, levees and floodwalls extend along the Gulf Intracoastal Waterway to the north, along the Mississippi River Gulf Outlet to the east and south, and then turn west toward the Mississippi River, continuing along the river to the Inner Harbor Navigation Canal along the western side. St. Bernard Parish has 30 miles of exterior levees and floodwalls, 22 miles of nonfederal interior levees, and eight pump stations (see fig. 7). In St. Bernard Parish, 8 miles of exterior levees and floodwalls, 14 miles of nonfederal interior levees (back levees), all eight pump stations, and two control structures were damaged. The Corps awarded nine contracts to repair and rebuild the levees, floodwalls, and flood control structures in St. Bernard Parish. Following Hurricane Katrina, several independent review teams began studies to determine the cause of hurricane protection failures in southeastern Louisiana. These teams included the Interagency Performance Evaluation Task Force, the Independent Levee Investigation Team sponsored by the National Science Foundation, and the American Society of Civil Engineers External Review Panel. The Interagency Performance Evaluation Task Force and the Independent Levee Investigation Team have issued preliminary reports of their findings and conclusions. The American Society of Civil Engineers External Review Panel was assembled to review the Interagency Performance Evaluation Task Force work and conclusions. On June 1, 2006, the Interagency Performance Evaluation Task Force issued a draft final report that concluded that the levees and floodwalls in New Orleans and southeastern Louisiana did not perform as a system and were, in fact, a system in name only.
According to the report, the hurricane protection system’s performance was compromised by the incompleteness of the system, the inconsistency in the levels of protection, and the lack of redundancy. Inconsistent levels of protection were caused by differences in the quality of materials used in the levees and variations in elevations due to subsidence and construction below design specifications. Corps officials said they considered the findings and recommendations of the Interagency Performance Evaluation Task Force when making decisions about how to repair levees and floodwalls damaged by Hurricane Katrina. The Corps has received over $7 billion through three emergency supplemental appropriations to restore hurricane protection and complete construction on existing hurricane protection projects in southeastern Louisiana. In September and December 2005, the Corps received a total of $3.299 billion in the second and third emergency supplemental appropriations. In September 2005, the second emergency supplemental appropriation provided the Corps with $400 million for repair of flood control and hurricane protection projects. In December 2005, the third supplemental appropriation provided the Corps with $2.899 billion, of which $2.3 billion was provided for emergency response to and recovery from coastal storm damages and flooding from hurricanes Katrina and Rita. The Corps has allocated nearly $2.1 billion to the New Orleans District to repair damage to existing hurricane protection, rebuild existing projects to original authorized height, and complete unconstructed portions of previously authorized hurricane protection projects. In turn, the New Orleans District has allocated nearly $1.9 billion for this work. In June 2006, through the fourth emergency supplemental appropriation, the Congress provided almost $4 billion to the Corps to strengthen the region’s hurricane defenses and restore areas of coastal wetlands.
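The supplemental appropriation amounts above can be cross-checked with a short sketch (the figures are as stated in the report; the June 2006 amount is rounded, since the report says "almost $4 billion"):

```python
# Cross-check of the emergency supplemental appropriations described above,
# in billions of dollars.
second_supplemental = 0.400   # September 2005
third_supplemental = 2.899    # December 2005
fourth_supplemental = 4.0     # June 2006 ("almost $4 billion", rounded)

# The second and third supplementals together should equal $3.299 billion
assert abs(second_supplemental + third_supplemental - 3.299) < 1e-9

total = second_supplemental + third_supplemental + fourth_supplemental
print(f"Total: about ${total:.1f} billion")  # consistent with "over $7 billion"
```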
The legislation included specific provisions for southeastern Louisiana hurricane protection and flood reduction project enhancements (canal closures, selective levee armoring, and storm-proofing pump stations) and for incorporating nonfederal levees in Plaquemines Parish into the federal levee system. The June 2006 emergency supplemental also provided general construction funding that the Corps plans to use to, among other things, raise levee heights for certain hurricane protection projects in order to certify them in the National Flood Insurance Program (also called a 100-year flood level of protection). Table 1 summarizes the estimated costs and funds allocated for the Corps’ planned work to date. On June 1, 2006, the Corps reported that 100 percent of prehurricane protection levels had been restored to southeastern Louisiana. However, work continued on almost half of the contracts because some were behind schedule, while other contracts were not scheduled to be completed until as late as March 2007. In instances where the Corps determined it could not complete permanent repairs by June 1, 2006, the Corps installed temporary structures or levee supports and developed emergency procedures to protect against flooding in the event of a hurricane. The Corps originally allocated $801 million for this phase of the repairs; however, the current allocation for total costs for this phase is just over $1 billion. To restore 100 percent of prehurricane levels of protection in southeastern Louisiana by the start of the 2006 hurricane season, the Corps worked quickly to award contracts for a variety of work to be performed in a relatively short period of time. Between October 2005 and March 2006, the Corps awarded 59 contracts to repair and rebuild earthen levees, concrete floodwalls, and other hurricane protection structures, and to construct interim repairs in areas where final repairs could not be completed by June 1.
To complete repairs quickly, some contractors worked 24 hours a day, and Corps project managers monitored the progress of the work. As of June 1, 2006, the Corps reported that 22.7 miles of new levees and 195 miles of scour repairs were completed. Although the Corps reported that 100 percent of prehurricane levels of protection had been restored by June 1, 2006, as of July 18, 2006, 27 of the 59 contracts were not completed. Of those 27 contracts, the Corps projected that 20 would be completed by September 30, 2006, and the remaining 7 contracts would be completed by March 2007. The remaining work includes grading, compacting, and shaping the levees, as well as grass seeding and fertilizing. In some instances, to restore prehurricane levels of protection, the Corps decided to change the design of the existing hurricane structure. For example, in the Orleans East Bank, the Corps determined that it did not have the time to assess the stability of existing canal walls nor could it complete repairs to all of the breaches along the drainage canals before June 1, 2006. As a result, at a cost of $111 million, the Corps decided to install interim gated closure structures (gates) on all three canals—17th Street, London Avenue, and Orleans Avenue—where they intersect Lake Pontchartrain to prevent storm surge from entering the canals and to install 34 temporary pumps to drain floodwaters from the Orleans East Bank portion of the city (see fig. 8). According to Corps officials, the agency planned to install interim gates and temporary pumps because it did not have the authority to install permanent gates and pumps under its emergency flood control authority. The Corps expects the interim gates and temporary pumps to remain in place for 3 to 5 years, after which the Corps will construct permanent gates and pumps. The 2006 emergency supplemental appropriation provides $530 million for permanent gates and pumps at the three drainage canals. 
According to the Corps, the interim gates will be operated manually, and the temporary pumps will not be enclosed. If a major storm or hurricane should occur, the Corps plans to close the gates when water levels in the 17th Street and London Avenue canals reach 5 feet and the water level in the Orleans Avenue canal reaches 9 feet. The Corps is reviewing the results of recent soil samples collected in the area and may change its plans depending on those results, a Corps official said. The temporary pumps being installed by the Corps can pump out only a portion of the drainage water that would normally be pumped into the canals during a storm event. As a result of the restrictions placed on water levels in the canals and the limited capacity of the temporary pumps, the Corps has acknowledged that some flooding could occur from the heavy rainfall that typically accompanies a hurricane. In instances where the Corps did not expect permanent repairs to be completed by June 1, 2006, the Corps devised interim and temporary solutions to provide the same level of protection that existed before Hurricane Katrina. For example, as of June 1, 2006, construction of one of the three interim gates—the 17th Street canal gate—was behind schedule. The Corps estimated it would be completed by September 15, 2006. If a hurricane threatens before the interim gate is in place, the Corps plans to drive sheet piling in front of the Hammond Highway Bridge that crosses the 17th Street canal to close off the canal from Lake Pontchartrain. On June 12, 2006, the Corps announced that the temporary pumps built for the drainage canals could not provide the required pumping capacity. The Corps plans to procure replacement pumps with different specifications for the 17th Street canal and repair new pumps already installed at the Orleans Avenue and London Avenue canals.
Under normal conditions, the Corps said it would have conducted hydraulic modeling and testing to determine the correct pump configuration. The Corps did not perform modeling and testing, officials said, because the process can take months, and there was insufficient time to do so before the start of the hurricane season. If the canals must be closed because of a hurricane before pumping capacity is restored at the drainage canals, the Corps plans to use a combination of temporary and portable pumps. Similarly, in Plaquemines Parish, the Corps made temporary repairs to 5 miles of levees along the Mississippi River after the Corps concluded that a floodwall located on top of a section of levee was not reliable. The Corps decided to add a temporary reinforcement because there was not enough time to replace 5 miles of floodwalls before the start of the 2006 hurricane season. To provide this interim protection, the Corps added compacted clay along the backside of the damaged levee. The Corps subsequently determined that the foundation soil in this area would be unable to support the weight of floodwalls, so the Corps has decided to construct a full earthen levee embankment instead. However, this permanent structure is not scheduled to be completed until March 2007. The Corps allocated about $801 million to repair levees and floodwalls to pre-Katrina conditions. An additional $217 million was needed: $125 million to increase the pumping capacity of the new temporary pumps for the drainage canals and $92 million to fund such things as (1) additional work required on existing repair contracts relating to weakened levees in Plaquemines Parish, the three drainage canal gates, and two hurricane protection and flood reduction projects; (2) contingency measures that had to be implemented until the temporary gates on the drainage canals are completed; and (3) the acquisition of nearby real estate for construction of the gates and associated levees.
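A brief sketch ties together the repair-phase cost figures above (all amounts are from the report; the variable names are illustrative):

```python
# Check of the repair-phase cost figures, in millions of dollars.
original_allocation = 801   # initial allocation to restore pre-Katrina conditions
pump_capacity_costs = 125   # increased pumping capacity for the drainage canals
other_additional = 92       # added contract work, contingency measures, real estate

# The two added cost categories should sum to the $217 million cited
additional_needed = pump_capacity_costs + other_additional
assert additional_needed == 217

current_allocation = original_allocation + additional_needed
print(f"Current allocation: ${current_allocation} million")
# -> $1018 million, consistent with "just over $1 billion"
```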
The Corps drew these additional funds from the $566 million it had allocated for raising all hurricane protection structures to their authorized design elevations, which is discussed in greater detail in the next section of this report. Beyond the repairs that were to be completed by June 1, 2006, the Corps has additional plans to continue repair, restoration, and construction activities on other portions of the existing five southeastern Louisiana hurricane protection and flood control projects. The Corps plans to (1) repair all damaged pumps, motors, and pump stations by about March 2007; (2) restore sections of the five hurricane protection and flood control projects that have settled over time to their original design elevation; and (3) complete construction of previously authorized but incomplete portions of these projects by September 2007. Although $1.165 billion was originally allocated for this work, the Corps expects actual costs will be greater because the original allocation did not reflect design changes, additional costs to fund the local sponsor’s share, and rapidly escalating construction costs. Further, in June 2006, the Corps shifted $224 million from this allocation to pay for the additional costs of repairing damaged levees and floodwalls, leaving only $941 million for this work. The Corps plans to repair pumps and pump motors at 66 of 75 pump stations damaged by floodwaters caused by Hurricane Katrina. The pump stations are located in Orleans, St. Bernard, and Plaquemines parishes as well as in neighboring Jefferson Parish. Pumps remove storm runoff from city streets. The Corps plans to make electrical and mechanical repairs to pumps and motors—such as rewiring motors and replacing pump bearings—and structural repairs to pump stations, such as repairing rooftops.
As of June 2006, the Corps had planned to complete repairs to all of these pumps, pump motors, and pump stations by March 2007, for an estimated cost of $59 million. However, to date, the Corps has allocated $70 million for the pump repairs. A Corps project manager said that five contracts have been awarded for $7.7 million, as of June 2006, and that he expects to award a total of 25 contracts for this work. In April 2006, three pump motors that were flooded during Hurricane Katrina caught fire during a rainstorm and shut down, raising questions about the reliability of other pumps that had also been flooded. The possible failure of pumps due to fires combined with (1) the restrictions placed on the level of water that can be pumped into the canals because of uncertainty about the integrity of the canal floodwalls and (2) the reduced capacity of the temporary pumps to remove water from the canals has led to widely reported concerns about flooding from rainwater during a hurricane. In response to these concerns, the Corps accelerated plans to repair all damaged pumps, motors, and pump stations. A Corps official estimated it would take several weeks to repair each of the larger and older pump motors. The Corps plans to repair pumps and pump motors by taking some of them offline one at a time, thereby maintaining as much of the available pumping capacity at each pumping station as possible. The Corps plans to raise the height of all federal and some nonfederal levees, floodwalls, and other hurricane protection structures within the southeastern Louisiana area, which have settled over the years, to their original design elevation by September 1, 2007. In December 2005, the Corps surveyed levees not damaged by Hurricane Katrina and estimated that about 48 miles of levees were 1 to 2½ feet below design elevation in St. Bernard, Orleans, Plaquemines, and Jefferson parishes. The Corps estimated that restoring these levees to their designed height would cost $50.8 million. 
However, the Corps allocated $566 million from funds provided in the December 2005 emergency supplemental appropriation to raise not only the heights of these levees but also the heights of floodwalls and other structures in southeastern Louisiana, which may have settled over time, to their original design height. The primary difference between the Corps’ initial cost estimate and the funds allocated in the emergency supplemental is the higher cost of raising floodwalls and other structures, compared with the cost of raising only about 48 miles of levees. In July 2006, the Corps estimated that 94 miles of levees, about 16 miles of floodwalls, 89 gates, and 2 control structures were below design elevation in Orleans, Plaquemines and St. Bernard parishes. According to a Corps official, the agency is revising the plans and estimated costs for this work to include the costs of raising all settled floodwalls and the cost of replacing all I-walls with T-walls or L-walls. As of July 2006, the Corps had not announced the results of its second damage assessment. Currently, this work is still scheduled to be completed by September 1, 2007. As of June 2006, funds allocated for this work were reduced to $342 million because, as previously mentioned, $224 million was shifted to help fund the escalating costs to repair damaged levees and floodwalls to pre-Katrina levels by June 1, 2006, and to fund repairs to hurricane damage at other hurricane protection and coastal protection projects. According to a Corps official, cost estimates for this work were to be available by July 15, 2006, after which the Corps plans to determine if it needs to request more funds. By September 30, 2007, the Corps plans to complete the construction of all previously authorized but incomplete portions of the five hurricane protection and flood reduction projects in southeastern Louisiana. In December 2005, the Corps estimated the cost of completing these five projects to be $529 million. 
However, the Corps is revising its cost estimates due to escalating construction costs and design changes that have occurred since Hurricane Katrina. The Corps' costs will also increase because local sponsors are no longer required to share any of the costs incurred to complete these projects. Details of the five projects are described below. The Lake Pontchartrain and Vicinity Hurricane Protection Project is located in St. Bernard, Orleans, Jefferson, and St. Charles parishes in southeastern Louisiana, in the vicinity of the city of New Orleans and between the Mississippi River and Lake Pontchartrain. The project includes a series of control structures, concrete floodwalls, and about 125 miles of earthen levees designed to protect residents living between Lake Pontchartrain and the Mississippi River levees from storm surges in the lake (see fig. 9). This project was designed to provide protection from a standard project hurricane (equivalent to a fast-moving Category 3 hurricane). The Flood Control Act of 1965 authorized the project, which, at the time of Hurricane Katrina, was 90 percent complete in St. Bernard and Orleans parishes, 70 percent complete in Jefferson Parish, and 60 percent complete in St. Charles Parish. The pre-Katrina scheduled completion date for this project was 2015, at an estimated cost of $738 million, of which the estimated federal share was $528 million and the estimated local sponsor share was $210 million. At the time of the storm, estimated costs to complete the remainder of the project were $121 million. This estimate is expected to increase due to higher construction costs following Hurricane Katrina. The West Bank and Vicinity Hurricane Protection Project is located on the west bank of the Mississippi River in the vicinity of the city of New Orleans and in Jefferson, Orleans, and Plaquemines parishes.
The project is designed to provide hurricane protection to residents from storm surges from Lakes Cataouatche and Salvador, and waterways leading to the Gulf of Mexico. The project encompasses 66 miles of earthen levees and floodwalls (see fig. 10). This project was designed to provide a Category 3 level of hurricane protection. The Water Resources Development Act of 1986 authorized this project. At the time of Hurricane Katrina, the project was 38 percent complete. The pre-Katrina completion date for this project was 2016, at an estimated cost of $331 million, of which the estimated federal share was $215 million and the estimated local sponsor share was $116 million. At the time of the storm, estimated costs to complete the remainder of the project were $148 million; however, the Corps expects the final cost to be much higher. The design for this project includes 4 miles of T-walls, and since the cost of T-walls has escalated, officials said they expect the cost to complete the project will increase as well. The Larose to Golden Meadow, Louisiana Hurricane Protection Project is located in southeastern Louisiana, about 30 miles southwest of New Orleans, along Bayou Lafourche and between the communities of Larose and Golden Meadow in Lafourche Parish. The project is a ring-shaped levee about 40 miles in length (see fig. 11). According to Corps officials, this project was designed to provide a 100-year level of hurricane protection to about 2,300 acres of residential and commercial land and 9,400 acres of agricultural land. The Flood Control Act of 1965 authorized this project, which, at the time of Hurricane Katrina, was about 96 percent complete. The pre-Katrina completion date of this project was 2007, at an estimated cost of $116 million, of which the estimated federal share was $81 million and the estimated local sponsor share was $35 million. At the time of the storm, estimated costs to complete the remainder of the project were $4 million.
However, according to the project manager, significant settlement has occurred throughout the project and levees are between 1 and 1½ feet below design elevation. Further, when this project was designed in the early 1970s, a nearby marsh was expected to help slow storm surge. Since that time, the local environment has changed, causing the marsh to disappear, and, according to the project manager, the Corps is reconsidering the project design and may have to recommend raising the height of the levees in order to provide authorized levels of protection, which could significantly increase the costs of the project. The Southeast Louisiana Urban Flood Control Project is located on the east bank of the Mississippi River, in Orleans Parish, and on the east and west banks of the Mississippi River, in Jefferson Parish and St. Tammany Parish. The project was designed to provide drainage and flood protection from a 10-year rainfall event and encompasses major drainage lines and canals, additional pumping capacity, and new pump stations (see fig. 12). The project was originally authorized by the Energy and Water Development Appropriations Act, 1996 and the Water Resources Development Act of 1996. At the time of Hurricane Katrina, the project was about 60 percent complete. The pre-Katrina completion date for this project was 2009, at an estimated cost of $908 million, of which the estimated federal share was $678 million and the estimated local sponsor share was $230 million. At the time of the storm, estimated costs to complete the remainder of the project were $225 million (this estimate has been revised to $339 million). According to a Corps official, this estimate will increase further because costs for engineering and construction have escalated in the months following Hurricane Katrina.
The New Orleans to Venice Hurricane Protection Project is located along the east bank of the Mississippi River from Phoenix, Louisiana—about 28 miles southeast of New Orleans—down to Bohemia, Louisiana, and along the west bank of the river from St. Jude, Louisiana—about 39 miles southeast of New Orleans—down to the vicinity of Venice, Louisiana. The project was designed to provide protection from hurricane tidal overflow from a 100-year storm and consists of 87 miles of enlarged levees built on the back side of the ring of levees (see fig. 13). This project was authorized under the River and Harbor Act of 1962. At the time of Hurricane Katrina, the project was about 84 percent complete. The pre-Katrina completion date for this project was 2018, at an estimated cost of $253 million, of which the federal share was $177 million and the estimated local sponsor share was $76 million. At the time of the storm, estimated costs to complete the remainder of the project were $32 million. According to a Corps official, estimated costs to complete this project are expected to increase due, in part, to design changes. In response to various requirements and directives from stakeholders, the Corps has already developed or is in the process of developing a number of plans and projects that will further restore, construct, and/or enhance hurricane protection for southeastern Louisiana, to make it stronger and better. Constructing these projects may take years and require billions of dollars in federal funds. However, the Corps does not have a comprehensive strategic plan to ensure that all of these efforts are effectively integrated and an implementation plan to ensure funding allocations are made in the most efficient manner possible, avoiding redundancies and misuse of resources.
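The federal and local sponsor shares reported for the five projects described above can be tabulated and cross-checked. A minimal sketch, using only the figures cited in the report (millions of dollars, pre-Katrina estimates):

```python
# Pre-Katrina cost estimates for the five hurricane protection projects,
# as reported in this section (millions of dollars).
projects = {
    "Lake Pontchartrain and Vicinity":          {"total": 738, "federal": 528, "local": 210, "to_complete": 121},
    "West Bank and Vicinity":                   {"total": 331, "federal": 215, "local": 116, "to_complete": 148},
    "Larose to Golden Meadow":                  {"total": 116, "federal": 81,  "local": 35,  "to_complete": 4},
    "Southeast Louisiana Urban Flood Control":  {"total": 908, "federal": 678, "local": 230, "to_complete": 225},
    "New Orleans to Venice":                    {"total": 253, "federal": 177, "local": 76,  "to_complete": 32},
}

for name, p in projects.items():
    # Each project's federal and local sponsor shares should sum to its total.
    assert p["federal"] + p["local"] == p["total"], name

remaining = sum(p["to_complete"] for p in projects.values())
print(remaining)  # 530
```

The $530 million sum of remaining costs is consistent, allowing for rounding, with the $529 million that the Corps estimated in December 2005 for completing the five projects.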
In addition to the repairs and construction activities already described in prior sections of this report, a number of requirements and directives placed on the Corps over the last several months have required it to modify existing plans or develop new plans for hurricane protection in southeastern Louisiana: The 2006 emergency supplemental appropriation provided nearly $4 billion to the Corps to enhance hurricane protection in southeastern Louisiana. Specific provisions provided $530 million for permanent pumps and closures for New Orleans' three drainage canals; $350 million for two navigable closures to prevent hurricane surge from entering the Inner Harbor Navigation Canal and the Gulf Intracoastal Waterway; $250 million to storm-proof existing interior drainage pump stations in Jefferson and Orleans parishes; $170 million to armor critical sections of New Orleans levees; and $215 million to incorporate nonfederal levees in Plaquemines Parish into the federal system, which means the levees will be repaired and built to Corps standards and eligible for future rehabilitation. These projects are in addition to the other work described in prior sections of this report. The 2006 emergency supplemental also appropriated nearly $1.6 billion to the Corps to reinforce or replace floodwalls in the New Orleans metropolitan area and provided that at least $495 million of the amounts appropriated for construction be used to raise levees for the Lake Pontchartrain and West Bank levee projects to provide a level of protection necessary to satisfy the certification requirements of the National Flood Insurance Program (often referred to as the 100-year flood standard). In April 2006, the Federal Emergency Management Agency announced the release of new advisory flood elevations for New Orleans and the surrounding area based on a 1 percent annual chance of flooding, or a 100-year flood.
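The itemized provisions of the 2006 emergency supplemental listed above can be summed to see how much of the nearly $4 billion they account for:

```python
# Itemized provisions of the 2006 emergency supplemental appropriation
# cited in this section, in millions of dollars.
provisions = {
    "permanent pumps and closures for three drainage canals": 530,
    "two navigable closures (IHNC and Gulf Intracoastal Waterway)": 350,
    "storm-proofing interior drainage pump stations": 250,
    "armoring critical sections of New Orleans levees": 170,
    "incorporating Plaquemines Parish nonfederal levees": 215,
}
itemized = sum(provisions.values())
print(itemized)  # 1515
```

The named provisions account for about $1.5 billion of the nearly $4 billion provided; the balance of the appropriation covered other work described in the report.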
The Corps’ restoration plans for hurricane protection did not meet these new elevation requirements. In response, the Corps revised its plans and estimated costs to raise the height of levees and floodwalls to provide the area with a 100-year level of protection. The Corps estimated it would need an additional $4.1 billion to upgrade all of the floodwalls and raise levees to meet the new standard by 2010. The Corps’ estimate included $2.5 billion to raise the height of levees in all of the New Orleans area, except for lower Plaquemines Parish, in some cases by as much as 7 feet, which included $900 million to complete other levee work in the area and upgrade or replace existing I-walls with T-walls. In lower Plaquemines Parish, the estimated cost to replace all I-walls with T- walls is $1.6 billion. As required by the 2006 Energy and Water Development Appropriations Act and Department of Defense Appropriations Act, the Corps is conducting a study of flood control, coastal restoration, and hurricane protection measures for the southeastern Louisiana coastal region. The Corps must propose design and technical requirements to protect the region from a Category 5 hurricane. The two laws appropriated a total of $20 million to the Corps for this study. The Corps was required to provide a preliminary technical report to Congress by June 30, 2006 (which was issued on July 10, 2006) and a final technical report by December 30, 2007. The final study must consider alternative designs to protect against a storm surge produced by a Category 5 hurricane originating from the Gulf of Mexico. According to the Corps, alternatives being considered include a structural design consisting of a contiguous line of earthen or concrete walls along southern coastal Louisiana, a nonstructural alternative involving only environmental or coastal restoration measures, or a combination of those alternatives. 
The Corps’ July 2006 preliminary technical report did not specifically identify which alternatives the Corps would recommend but instead provided a conceptual framework for both structural and nonstructural components that should be considered in developing long-term solutions for the region. Although the cost to provide a Category 5 level of protection for the southeastern Louisiana coastal region has not yet been determined, it would be in addition to the over $7 billion already provided to the Corps in the three emergency supplemental appropriations discussed in previous sections of this report. Finally, the Corps is responding to the findings and recommendations from the Interagency Performance Evaluation Task Force and its review of the existing hurricane protection and why it failed. For example, the task force reported that overtopping and erosion caused most breaches to levees and floodwalls and recommended armoring to prevent scour from overtopping, thereby reducing the chance of breaching. As discussed above, the 2006 emergency supplemental appropriation provided $170 million to armor critical areas on levees. Although the long-term solutions for southeastern Louisiana have not yet been determined and may not be decided for some time, the Corps is proceeding with over $7 billion of already authorized repair and restoration work without a comprehensive strategy to guide its efforts. Without such a strategy, we believe that the Corps may end up replicating past missteps, which occurred because it was required to follow a piecemeal approach to developing the existing hurricane protection that, according to experts, is not well integrated. 
For example, the draft final report issued May 2006 by the investigation team sponsored by the National Science Foundation stated (1) that there was a failure to integrate the individual parts of a complex hurricane system, (2) that insufficient attention was given to creating an integrated series of components to create a reliable overall system, and (3) that projects were engineered and constructed in piecemeal fashion to conform to incremental appropriations. In its June 2006 draft final report, the Interagency Performance Evaluation Task Force also concluded that hurricane protection systems should be deliberately designed and built as integrated systems to enhance reliability and provide consistent levels of protection. According to the Corps, the technical report due to the Congress in December 2007 will include the long-range strategy that will provide an integrated and comprehensive review of flood control, coastal restoration, and hurricane and storm damage reduction measures for the southeastern Louisiana region, and the preliminary framework for this strategy is included in the report provided to the Congress on July 10, 2006. However, according to a senior Corps official, there is currently no other strategic plan in place to guide its efforts. We are concerned that the Corps has embarked on a multibillion dollar repair and construction effort in response to the appropriations it has already received, without a guiding strategic plan, and appears to be simply doing whatever it takes to comply with the requirements placed on it by the Congress and other stakeholders. Consequently, we are concerned that the Corps is once again, during this interim period, taking an incremental approach that is based on funding and direction provided through specific appropriations and is at risk of constructing redundant or obsolete structures that may be superseded by future decisions, thereby increasing the overall costs to the federal government for this project. 
During the past 4 years, we reported that the Corps' planning for civil works projects was fraught with errors, mistakes, and miscalculations and used invalid assumptions and outdated data. We recommended, and the Corps agreed, that an external peer review of its plans and decisions was needed, especially for high-risk and costly proposed projects. In the aftermath of Hurricane Katrina, the Corps established the Interagency Performance Evaluation Task Force and used the task force's findings and lessons learned to improve its engineering practices and policies to provide hurricane protection. However, the task force is set to dissolve once its final report is released in September 2006, and the Corps has not indicated that it plans to establish another similar body to help guide its interim repair and restoration efforts, monitor progress, or provide expert advice. Following Hurricane Katrina—one of the largest natural disasters in U.S. history—the Army Corps of Engineers rapidly repaired and restored almost 169 miles of damaged levees, floodwalls, and other flood control structures to prehurricane levels of protection in time for the start of the 2006 hurricane season. Now that these urgent repairs have been completed, the Corps is beginning to implement a variety of other plans to make many additional repairs and enhancements to existing southeastern Louisiana hurricane protection projects that may cost billions of dollars and take years to complete. Further, additional enhancements are being considered to increase the overall level of protection for the area to protect against even larger hurricanes that may add many billions of dollars and many years to the scope of the Corps' efforts. Currently, the Corps does not know what ultimate level of protection will be authorized for southeastern Louisiana and therefore cannot make strategic decisions about which components of a hurricane protection system will most effectively provide the required level of protection.
Nonetheless, the Corps has received over $7 billion in appropriations to continue repairs and construction on five existing hurricane protection projects in the area. However, it does not have a comprehensive strategy to guide these efforts and appears to be simply doing whatever it takes to comply with the requirements placed on it by the Congress and other stakeholders. We believe that taking such an incremental and piecemeal approach for such a complex and expensive repair and restoration project is imprudent and that, even for these interim repairs and enhancements, the Corps should be fully considering project interrelationships to avoid unnecessary duplication and redundancy, and to reduce federal costs. We also believe that relying on an independent body like the Interagency Performance Evaluation Task Force to help guide and oversee this process will help ensure that the Corps obtains objective and reliable support as it implements these authorized enhancements to the existing hurricane protection projects. In order to construct a hurricane protection system that provides the appropriate level of protection to southeastern Louisiana and ensures the most efficient use of federal resources, we are making the following two recommendations: The Army Corps of Engineers should develop (1) a comprehensive strategy that includes an integrated approach for all projects and plans for rebuilding and strengthening the system and (2) an implementation plan that will achieve the specific level of protection in a cost-effective manner, within a reasonable time frame. The Army Corps of Engineers should establish an evaluative organization like the Interagency Performance Evaluation Task Force to assist in its efforts to develop a strategic plan, monitor progress, and provide expert advice for constructing a stronger and well-integrated hurricane protection system. We provided a draft of this report to the Department of Defense (DOD) for its review and comment.
In commenting on a draft of the report, DOD concurred with our first recommendation that the Army Corps of Engineers develop (1) a comprehensive strategy to integrate projects and plans for rebuilding and strengthening hurricane protection and (2) an implementation plan that will provide a specific level of protection in a cost-effective manner within a reasonable time frame. DOD partially concurred with our second recommendation that the Army Corps of Engineers establish an evaluative organization to assist in its efforts to develop a strategic plan, monitor progress, and provide expert advice for constructing a stronger and well-integrated hurricane protection system, because it believes that a body like the Interagency Performance Evaluation Task Force is not the proper mechanism for this work. According to DOD, the Corps will rely on three teams of experts to plan and monitor the construction of a hurricane protection system. First, an independent technical review person or team will identify, explain, and comment on the assumptions underlying the Corps’ economic, engineering, and environmental analyses for each project, and evaluate the soundness of Corps’ models and planning methods. Second, the team currently reviewing flood control, coastal restoration, and hurricane and storm damage reduction measures for the southeastern Louisiana region will assist the Corps in developing a strategic plan for constructing a stronger and well-integrated hurricane protection system. Lastly, the Corps has assembled a Federal Principals Group consisting of senior leaders from federal agencies to guide the development of a comprehensive plan and monitor implementation of the plan. We believe that the Corps’ proposal to use three external groups of experts satisfies the spirit of our recommendation. DOD’s comments are included in appendix I. We are sending copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense, and interested congressional committees. 
We will also provide copies to others on request. In addition, the report will be available, at no charge, on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Edward Zadjura, Assistant Director; John Delicath, James Dishmon, Doreen Feldman, Christine Frye, John Kalmar, Carol Kolarik, and Omari Norman made key contributions to this report.

Hurricane Katrina's storm surge and floodwaters breached levees and floodwalls, causing billions of dollars in property damage and more than 1,300 deaths. Under the Comptroller General's authority to conduct reviews on his own initiative, GAO reviewed the Army Corps of Engineers (Corps) (1) progress in repairing damage to hurricane protection projects by June 1, 2006; (2) plans and estimated costs to make other repairs and complete five existing hurricane protection projects; and (3) plans and estimated costs to add enhancements and strengthen hurricane protection for the region. GAO reviewed related laws and regulations, Corps planning documents and repair tracking reports, observed ongoing repair work, and met with key agency officials and other stakeholders. Following Hurricane Katrina, the Corps worked quickly to repair and restore almost 169 miles of damaged levees, floodwalls, and other flood control structures to prehurricane levels of protection. Although the Corps stated that it had restored prehurricane levels of protection to the area by June 1, 2006, it used temporary solutions and developed emergency procedures to protect against flooding, in the event of a hurricane, for sections where permanent repairs could not be completed in time.
For example, the Corps constructed interim gates on three canals to prevent storm surges from flooding New Orleans. When construction of one canal gate fell behind schedule and could not be completed by June 1, 2006, the Corps devised an emergency plan to drive sheet piling into the canal and close it off if a hurricane threatened before the gate was completed. More importantly, because these initial repairs were performed only on levees and floodwalls with obvious visual damage, the reliability of those adjacent to them is still unknown. The Corps originally allocated $801 million for initial repairs, but the current allocation has increased to over $1 billion. After completing the initial repairs, the Corps plans to conduct additional repairs and construction on the existing hurricane protection system. These plans include (1) repairing all damaged pumps, motors, and pumping stations by about March 2007; (2) restoring sections of existing hurricane protection projects that have settled over time to their original design elevations; and (3) completing construction of incomplete portions of five previously authorized hurricane and flood control projects by September 2007. An additional $941 million has been allocated for this work, but the Corps expects actual costs will be greater because of subsequent decisions to change the design of these projects and to cover the local sponsor's share, as well as rapidly escalating construction costs. In addition, the Corps plans to undertake further work to enhance and strengthen the hurricane protection for southeastern Louisiana. These projects are estimated to take years and require billions of dollars to complete. Since September 2005, the Congress has appropriated more than $7 billion for some aspects of this work and additional appropriations are expected.
According to an external review organization established by the Corps, hurricane protection systems should be deliberately designed and built as integrated systems to enhance reliability and provide consistent levels of protection. However, the Corps does not have a comprehensive strategy and implementation plan to integrate the repairs already authorized and planned and to ensure the efficient use of federal funds. Instead, the Corps appears to be following a piecemeal approach, similar to its past practice of building projects without giving sufficient attention to the interrelationships between various elements of those projects or fully considering whether the projects will provide an integrated level of hurricane protection for the area.
Payroll taxes are the main source of financing for Social Security—which includes OASI and DI—and for the HI program in Medicare—also referred to as Medicare Part A. The payroll taxes for these programs are levied on wages and on the net self-employment income of workers under the Federal Insurance Contributions Act (FICA) and the Self-Employment Contributions Act (SECA). Although Social Security is often discussed as a retirement program, Social Security (OASDI) is a social insurance program that provides cash payments to persons or families to replace income lost through retirement, death, or disability. Workers make "contributions" in the form of payroll taxes that are then credited by the Treasury to the Social Security trust funds. Once individuals have worked a sufficient time to qualify, they become eligible for benefits under the program. By contrast, Medicare's Supplementary Medical Insurance program (Medicare Part B) is financed by premiums paid by enrollees (about 25 percent of total annual funding) and appropriations of general funds (about 75 percent of total funding). While both the Social Security OASDI and Medicare HI are overwhelmingly financed by payroll taxes, those trust funds receive some general revenues in the form of income taxes paid on a portion of the Social Security benefits of upper-income retirees. Collection of the payroll taxes that fund OASDI and Medicare HI is administered by IRS. However, because these payroll taxes are earmarked to fund specific retirement, disability, and medical benefits for which workers become eligible through their qualified employment, they are fundamentally different from income taxes, which are imposed on certain segments of the population and which are not earmarked for any specific purpose. The OASDI tax is 12.4 percent of wages up to an annual wage base, divided evenly between the employee and the employer. The HI tax is 2.9 percent, also divided evenly between the employee and the employer. Until 1994, the wage base for HI was identical to that for OASDI. Since 1994, however, the HI tax has been imposed on all of a worker's wages and self-employment earnings.
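A minimal sketch of the FICA withholding arithmetic these rules imply. The 6.2 percent OASDI and 1.45 percent HI employee rates (each matched by the employer) are the standard FICA rates; the wage-base figure below is an assumed, illustrative value, since the actual OASDI base is set annually:

```python
OASDI_RATE = 0.062          # employee share; the employer pays a matching amount
HI_RATE = 0.0145            # employee share; no wage cap since 1994
OASDI_WAGE_BASE = 106_800   # illustrative annual cap; the actual base is set yearly

def employee_fica(wages: float) -> tuple[float, float]:
    """Return (OASDI, HI) withheld from an employee's annual wages."""
    oasdi = OASDI_RATE * min(wages, OASDI_WAGE_BASE)  # capped at the wage base
    hi = HI_RATE * wages                              # applies to all wages
    return round(oasdi, 2), round(hi, 2)

print(employee_fica(50_000))   # below the cap: both taxes apply to full wages
print(employee_fica(150_000))  # above the cap: OASDI is capped, HI is not
```

Because the HI cap was removed in 1994, only the OASDI component is limited by the wage base in this sketch.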
Figure 1 illustrates the flow of payroll taxes into the Social Security and Medicare trust funds. The benefit formula is progressive: payments to lower-income wage earners replace a larger portion of their earnings than do the payments to higher wage earners. As with retirement benefits, a number of rules apply in determining who is eligible for disability benefits. Generally, a disability is defined as the inability to engage in "substantial gainful activity" by reason of physical or mental impairment. Workers who have become fully qualified for OASI benefits and who become disabled are also generally qualified for disability benefits. Workers who become disabled before becoming fully qualified for OASI benefits may nevertheless qualify for disability benefits under certain circumstances. Payments to disabled individuals, like those to retirees, take into account personal work histories and wages earned. As with retirement benefits, lower wage earners have a larger portion of their wages replaced than do higher wage earners. Medicare HI benefits, by contrast, are not based on the amount of wages earned by an individual. For certain types of medical services, patients may be required to pay deductibles or additional charges. Under current law, employers withhold OASDI and HI payroll taxes from employees' pay along with federal and state income taxes, if any. Both the employees' and the employers' shares of FICA taxes are deposited—along with other federal taxes—to a designated Federal Reserve bank or other authorized depository. All federal taxes are then deposited in the Treasury. Treasury credits the Social Security and HI trust funds for the applicable amounts. Neither eligibility for benefits nor the amount of benefits is based on the amount of taxes paid by an individual, and neither IRS nor the Social Security Administration (SSA) directly credits to the individual the annual and cumulative FICA taxes paid by or on behalf of each individual. Cumulatively, the OASDI and HI taxes collected represent dedicated receipts.
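The progressive replacement pattern described above comes from Social Security's benefit formula, which applies declining percentages to successive brackets of a worker's average indexed monthly earnings (AIME). The 90/32/15 percent factors below are the statutory formula; the bend-point dollar amounts are assumed for illustration, since SSA adjusts them each year:

```python
# Illustrative sketch of Social Security's progressive benefit formula.
# The bend-point dollar amounts below are assumed values for illustration;
# SSA publishes the actual bend points annually.
BEND_1, BEND_2 = 761, 4_586  # monthly earnings thresholds (assumed values)

def primary_insurance_amount(aime: float) -> float:
    """Monthly benefit computed from average indexed monthly earnings (AIME)."""
    pia = 0.90 * min(aime, BEND_1)                    # 90% of the first bracket
    pia += 0.32 * max(0, min(aime, BEND_2) - BEND_1)  # 32% of the middle bracket
    pia += 0.15 * max(0, aime - BEND_2)               # 15% above the second bend point
    return round(pia, 2)

for aime in (1_000, 3_000, 7_000):
    pia = primary_insurance_amount(aime)
    # The replacement rate falls as earnings rise (roughly 76% down to 32% here).
    print(aime, pia, f"{pia / aime:.0%} replaced")
```

Running the sketch shows why lower wage earners have a larger portion of their wages replaced: each successive earnings bracket is replaced at a lower rate.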
They are accounted for in earmarked funds: the Social Security OASI trust fund, the Social Security DI trust fund, and the Medicare HI trust fund. These trust funds hold their balances in the form of special nonmarketable U.S. Treasury securities that are backed by the full faith and credit of the U.S. government. The securities are an asset to the trust funds and a legal claim on—or an obligation of—the general fund of the Treasury. When benefits are to be paid, securities sufficient to fund those benefits are redeemed, and the benefits are paid by the Treasury.

The trust funds earn interest on the funds lent to the Treasury. This interest is paid in the form of additional Treasury securities. Until 1983, program revenues and expenses were closely matched, and the reserves were modest. After the 1983 Social Security Commission’s recommendations were enacted, balances grew. As a result, interest credits have become a more important source of revenue for the OASDI trust funds.

As we have reported, both Social Security and Medicare face serious financing challenges. Today, taxes paid into the trust funds exceed benefits paid out. However, as more and more of the “baby boom” generation enters retirement, this will change. The combination of a larger elderly population, increased longevity, and rising health care costs will drive significant increases in health and retirement spending when the baby boom generation begins to retire. Over the long term, the trust funds are not solvent: SSA projections show that, absent a change in the structure of the program, the OASDI trust funds will be able to pay full benefits only through 2037. However, as we have reported, because a trust fund’s accumulated balance does not necessarily reflect the full future cost of existing government commitments, it is not an adequate measure of the fund’s solvency or of the program’s sustainability. The cash flows for these programs will create pressure on the federal budget long before these so-called trust fund exhaustion dates.
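The trust fund mechanics described above (surplus payroll taxes lent to the Treasury as special securities, with interest credited in the form of additional securities) can be pictured with a stylized projection. All numbers below are invented for illustration; they are not SSA projections.

```python
# Stylized trust fund projection: the balance is held as special Treasury
# securities, interest is credited as additional securities, and each year's
# net cash flow (payroll taxes minus benefits) is then added. Made-up numbers.

def project_balance(balance, net_cash_flows, interest_rate):
    """Roll a trust fund balance forward one year per net cash flow."""
    for flow in net_cash_flows:
        balance = balance * (1 + interest_rate) + flow  # interest, then cash flow
    return balance

# Surpluses turning to deficits as the baby boom retires (illustrative):
flows = [10, 5, 0, -10, -25, -45]
print(round(project_balance(100.0, flows, 0.05), 2))  # 70.57
```

The point of the sketch is the one the testimony makes: interest credits grow the balance while cash surpluses last, but once outlays exceed taxes the balance is drawn down well before it reaches zero on paper.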
To redeem those securities and pay benefits, the government would have to raise taxes, cut spending for other programs, increase borrowing from the public, or retire less debt (if there is a surplus)—or some combination of these. As the Comptroller General testified last month, our long-term simulations show that, absent a change in the design of Social Security and Medicare, ultimately the government would do little more than mail checks to the elderly and their health care providers.

The EITC is a refundable tax credit established by Congress in 1975. The credit offsets the impact of Social Security taxes paid by low-income workers and encourages low-income persons to seek work rather than welfare. The EITC is available to taxpayers with and without children; the amount depends on the nature and amount of qualifying income and on the number of children who meet age, relationship, and residency tests. The amount of EITC allowed to an individual is first applied as a payment against any income tax liability of that individual; any remaining amount is refunded to the individual. Workers can receive the credit as a lump-sum payment after filing an income tax return or in advance as part of their paychecks. Table 2 shows, for the past 3 years, the number of EITC recipients, the relatively small number of those who reported receiving an advance EITC, and the total EITC amount.

In December 1998, the Council of Economic Advisers concluded that “the EITC is one of our most successful programs for fighting poverty and encouraging work.” Among other things, the report said that the EITC had lifted 4.3 million Americans out of poverty in 1997, had reduced the number of children living in poverty that year by 2.2 million, and had increased the labor force participation of single mothers.

For many EITC recipients, the credit is more than enough to fully offset Social Security taxes. Most EITC recipients earn credits that exceed their income tax liabilities.
The Joint Committee on Taxation has estimated that 87 percent of the credit earned in 2000 will be refunded as direct payments to taxpayers. For many of the recipients, these refunds will be more than enough to offset their payroll tax burdens. For example, a head-of-household filer who has two children and earns $15,000 in wages would have earned an EITC of $3,396 in 2000. This amount would have exceeded her precredit income tax liability of $24 plus her $1,148 portion of payroll tax liability. It would also have been more than enough to offset her employer’s $1,148 share of the payroll tax, which most economists believe to be borne by the employee.

However, many low-income individuals and couples, especially those without children, do not earn the EITC. Looking at all low-income taxpayers together, the Congressional Budget Office estimated that in 1999 households with cash incomes between zero and $10,000, on average, received EITC refunds equal to 4.1 percent of their incomes. This average refunded credit was enough to offset the average payroll tax liability of these households, but it would not have completely offset the burden of the employer’s portion of the payroll tax. For households with cash incomes between $10,000 and $20,000, the average refunded credit typically would have offset only a portion of the employee’s share of the payroll tax and none of the employer’s share.

Since 1995, we have identified EITC noncompliance as one of the high-risk areas within IRS because such noncompliance exposes the federal government to billions of dollars of risk through overpayments of the EITC. Although IRS has estimated that billions of dollars have been overpaid to EITC recipients, it has not reported on the portions of noncompliance that may be due to unintentional errors, perhaps attributable at least in part to the complexity of the EITC, or to fraudulent efforts to obtain the credit.
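The offset example above can be checked arithmetically. Only the EITC amount ($3,396) and the $24 precredit income tax are taken from the testimony; the payroll tax figures are recomputed from the 7.65 percent employee share of FICA.

```python
# Checking the testimony's tax year 2000 example: head-of-household filer,
# two children, $15,000 in wages. EITC and income tax come from the text;
# the FICA shares are recomputed here.

wages = 15_000
eitc = 3_396
income_tax = 24
employee_fica = round(wages * 765 / 10_000)  # 7.65%: 6.2% OASDI + 1.45% HI
employer_fica = employee_fica                # matching employer share

print(employee_fica)                           # 1148, matching the testimony
print(eitc > income_tax + employee_fica)       # True: offsets tax + employee FICA
print(eitc > income_tax + 2 * employee_fica)   # True: offsets employer share too
```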
In April 1997 and September 2000, respectively, IRS reported on the results of two EITC compliance studies—the first involving tax year 1994 EITC claims accepted by IRS between January 15 and April 21, 1995, and the second involving tax year 1997 claims processed by IRS between January 20 and May 29, 1998. Although changes in IRS’ study methodology as well as legislative changes between 1994 and 1997 made the results of the two studies noncomparable, both studies documented a significant amount of EITC noncompliance. Of $17.2 billion in EITC claimed during the first study period, IRS estimated that $4.4 billion (about 26 percent) was overclaimed. Of $30.3 billion in EITC claimed during the second study period, IRS estimated that $9.3 billion (about 31 percent) was overclaimed.

The largest source of taxpayer error identified by IRS in both studies related to EITC requirements that are difficult for IRS to verify—principally those related to eligibility of qualifying children. Currently, to be a qualifying child, a child must (1) be the taxpayer’s son, daughter, adopted child, grandchild, stepchild, or eligible foster child (i.e., meet a relationship test); (2) be under age 19, under age 24 and a full-time student, or any age and permanently and totally disabled (i.e., meet an age test); and (3) have lived with the taxpayer in the United States for more than half the year or for the entire year if an eligible foster child (i.e., meet a residency test). Failure to meet the residency test was the most common qualifying child error identified in both studies.

IRS’ studies identified the following as other sources of EITC errors.

Complicated living arrangements—when a child meets the rules to be a qualifying child of more than one person, the person with the higher modified adjusted gross income (AGI) is the only one who can claim the EITC using that child.
The person with the lower modified AGI cannot use that child to claim the EITC even if the other person does not claim the EITC. This rule does not apply if the other person is the taxpayer’s spouse and they file a joint return.

Misreporting of filing status—these errors involved married taxpayers filing as single or head of household when they should have filed as married filing separately. Persons who file as married filing separately are not eligible to claim the EITC.

Income misreporting—these errors included misreporting of earned income and underreporting of investment income.

EITC “noncompliance” as identified in IRS’ studies and as referred to in this testimony includes errors caused by mistakes—possibly due to the complexity of the EITC—or an intent to defraud. Both of these potential sources of error have been of concern to IRS and others. Some analysts consider the EITC to be a complex tax provision that challenges those applying for it to properly understand and follow the qualifying rules. On the other hand, the credit’s possible susceptibility to fraud has also been a concern to Congress and IRS for many years. Although being able to differentiate between these different causes may be important in identifying appropriate corrective measures, IRS’ primary goal in conducting its compliance studies was to identify the level of overall EITC noncompliance. Determining the causes of overpayments is more challenging and costly, especially determining whether an EITC claim is fraudulent, which requires knowing the difficult-to-prove intent behind the taxpayer’s actions.

IRS’ reports on its two compliance studies did not discuss the extent to which EITC overclaims were due to mistakes versus fraud. However, as we discussed in a July 1998 report on IRS’ first study, IRS examiners and case reviewers did make a determination of intent for almost every case involving an overclaim.
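A rough sketch of the three qualifying-child tests described earlier (relationship, age, and residency), as they stood for the tax years covered by IRS’ studies. The function and field names are illustrative, and the “more than half the year” residency requirement is approximated as seven or more months.

```python
# Sketch of the qualifying-child tests described in the text. Names are
# illustrative; "more than half the year" is approximated as >= 7 months.

RELATIONSHIPS = {"son", "daughter", "adopted child", "grandchild",
                 "stepchild", "eligible foster child"}

def is_qualifying_child(relationship, age, full_time_student,
                        permanently_disabled, months_with_taxpayer):
    relation_ok = relationship in RELATIONSHIPS
    age_ok = (age < 19
              or (age < 24 and full_time_student)
              or permanently_disabled)
    # Residency: more than half the year, or the whole year for a foster child
    months_required = 12 if relationship == "eligible foster child" else 7
    residency_ok = months_with_taxpayer >= months_required
    return relation_ok and age_ok and residency_ok

# A 20-year-old full-time-student son living with the taxpayer all year:
print(is_qualifying_child("son", 20, True, False, 12))  # True
# The same child fails the residency test at 5 months:
print(is_qualifying_child("son", 20, True, False, 5))   # False
```

Even this simplified version suggests why residency is the hard test to verify: relationship and age can often be checked against records, while months of shared residence cannot.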
Based on those determinations, about one-half of the returns with an EITC overclaim and two-thirds of the total amount overclaimed were considered to be the result of intentional errors. Because these assessments were judgmental and made without any specific criteria, they were considered too imprecise to be included in IRS’ report. However, as we said in 1998, the results did indicate that IRS’ compliance efforts should include activities aimed at taxpayers who intentionally misclaim the EITC.

Concerned about the level of EITC noncompliance, Congress and IRS have taken various steps to reduce it. After the 1994 compliance study, Congress took the following steps:

According to law, an EITC is not to be allowed unless the tax return contains the EITC-qualifying child’s Social Security number (SSN) as well as the SSNs of the taxpayer and the taxpayer’s spouse, if any. Before 1997, if IRS identified a return with an invalid SSN, it had to resolve that issue through its normal audit procedures. Because those procedures are resource intensive, IRS was not able to follow up on most of the invalid SSNs identified. In 1995, for example, IRS stopped the refunds on about 3 million returns with invalid SSNs but was able to follow up with taxpayers on only about 700,000 of those returns. For the other 2.3 million returns, IRS released the refunds without any follow-up. In 1996, Congress authorized IRS to treat invalid SSNs as “math errors,” similar to the way that IRS had historically handled computational mistakes. With that authority, IRS has been able to (1) automatically disallow any EITC claim associated with an invalid SSN and (2) make appropriate adjustments to any refund that the taxpayer might be claiming.

Congress also required SSA to collect the SSNs of birth parents and provide IRS with information linking the parents’ and child’s SSNs.

Congress began providing IRS with appropriated funds (about $143 million a year) for a 5-year EITC compliance initiative beginning in fiscal year 1998.
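The “math error” treatment of invalid SSNs described above can be sketched as a simple screening step: a claim tied to an invalid SSN is disallowed automatically and the refund adjusted, with no audit needed. The record layout and the validity check below are invented for illustration.

```python
# Sketch of the "math error" screening authority described in the text.
# The SSN set and return layout are stand-ins, not real IRS structures.

VALID_SSNS = {"123-45-6789", "987-65-4321"}  # stand-in for SSA records

def apply_math_error_check(ret):
    ssns = [ret["taxpayer_ssn"], *ret["child_ssns"]]
    if all(ssn in VALID_SSNS for ssn in ssns):
        ret["eitc_allowed"] = ret["eitc_claimed"]
    else:
        ret["eitc_allowed"] = 0               # claim disallowed automatically
        ret["refund"] -= ret["eitc_claimed"]  # refund adjusted accordingly
    return ret

r = apply_math_error_check({"taxpayer_ssn": "123-45-6789",
                            "child_ssns": ["000-00-0000"],  # invalid child SSN
                            "eitc_claimed": 3396, "refund": 3500})
print(r["eitc_allowed"], r["refund"])  # 0 104
```

The contrast with the pre-1996 process is the point: before the math-error authority, each flagged return required resource-intensive audit follow-up; afterward, the adjustment could be made during return processing.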
As part of the 5-year compliance initiative and using the tools provided by Congress, IRS implemented a plan that calls for reducing EITC noncompliance through expanded customer service and public outreach, strengthened enforcement, and enhanced research. In implementing its plan, IRS has taken several actions, with some significant results. For example:

In 1999 and 2000, IRS identified a total of about 3.4 million “math errors” related to the EITC, about 24 percent of which involved invalid SSNs. According to IRS, it denied about $675 million in erroneous EITC claims during fiscal years 1999 and 2000 because of EITC-related “math errors.”

Other types of EITC noncompliance are not as easy to identify as invalid SSNs and can be detected only through an in-depth review. For the past few years, IRS has targeted for in-depth review certain types of EITC claims, such as those involving the use of a child’s SSN on multiple returns for the same year, that IRS had identified as important sources of noncompliance. Returns identified by IRS were to be audited to determine if the EITC claims were valid. During fiscal years 1999 and 2000, according to IRS, it completed more than 500,000 of these audits and identified about $800 million in overclaims.

IRS also visited EITC return preparers to check their compliance with the credit’s due-diligence requirements, proposing penalties totaling about $435,000 for 143 of those preparers. We do not know how, if at all, IRS’ visits resulted in improved due diligence by preparers. That question may be addressed in IRS’ report on the results of its visits, which, according to IRS, will be issued about May 1.

IRS implemented a program to enforce the recertification requirements of the Taxpayer Relief Act of 1997. According to IRS data, (1) about 312,000 taxpayers were required to recertify after being denied the EITC for tax year 1997 and (2) about 193,000 of those taxpayers did not claim the EITC on their tax year 1998 returns. IRS sees these results as an indication that recertification has reduced the number of improper claims.
IRS expanded its EITC outreach and educational efforts. For example, it developed partnerships with groups that are advocates for low-income taxpayers and with businesses and large employers that include EITC information in monthly billings or employees’ pay statements. IRS also refocused its media campaign and publications toward educating the public about EITC eligibility requirements.

IRS developed a database that can be used to help verify the accuracy of taxpayers’ claimed dependents and EITC-qualifying children. It incorporates data from an assortment of sources, including the HHS and SSA information provided for in the 1997 Act. According to IRS, the database is used to screen returns during processing for potential compliance issues and to select those with the highest potential for pre-refund audits. Also according to IRS, the returns being selected are primarily ones filed by EITC claimants.

Despite these initiatives, it remains to be seen how, if at all, Congress’ and IRS’ efforts have succeeded in reducing the 31-percent EITC overclaim rate identified by IRS’ tax year 1997 EITC compliance study. IRS is doing a study of tax year 1999 returns and plans to study tax year 2001 returns. The results of those studies, when compared with the results of the tax year 1997 study, should provide a basis for assessing the impact on overall EITC noncompliance.

Although well-designed and effectively implemented processes should help reduce EITC noncompliance, certain features of the EITC represent a trade-off between compliance and other desired goals. Unlike income transfer programs such as Temporary Assistance for Needy Families and Food Stamps, the EITC was designed to be administered through the tax system. Accordingly, while other income transfer programs have staff who review documents and other evidence before judging applicants to be qualified to receive assistance, the EITC relies more directly on the self-reported qualifications of individuals.
This approach generally should result in lower administrative costs and possibly higher participation rates for the EITC than for the other assistance programs. However, EITC noncompliance may also be higher. This is especially true when eligibility depends on information that cannot be readily and rapidly verified by IRS as it processes tax returns. EITC eligibility, particularly related to qualifying children, is difficult for IRS to verify through its traditional enforcement procedures, such as matching return data to third-party information reports. Correctly applying the residency test, for example, often involves understanding complex living arrangements and child custody issues. Thoroughly verifying qualifying-child eligibility basically requires IRS to audit individual tax returns, as was done in the tax year 1994 compliance study—a costly, time-consuming, and intrusive proposition.

- - - - -

I appreciate this opportunity to appear today to provide a basic description of the payroll taxes funding Social Security and Medicare hospital insurance and to discuss what is known about EITC noncompliance. Mr. Chairman, that concludes my prepared statement. I would be happy to answer any questions you or other Members of the Committee might have.

Individuals making key contributions to this testimony included David Attianese, Kenneth Bombara, Christine Bonham, Barbara Bovbjerg, Carol Henn, Susan Irving, Deborah Junod, and John Lesser.

Long-Term Budget Issues: Moving From Balancing the Budget to Balancing Fiscal Risk (GAO-01-385T, Feb. 6, 2001).

Federal Trust and Other Earmarked Funds: Answers to Frequently Asked Questions (GAO-01-199SP, January 2001).

Medicare Reform: Issues Associated With General Revenue Financing (GAO/T-AIMD-00-126, Mar. 27, 2000).

Medicare Reform: Leading Proposals Lay Groundwork, While Design Decisions Lie Ahead (GAO/T-HEHS/AIMD-00-103, Feb. 24, 2000).

Social Security: Evaluating Reform Proposals (GAO/AIMD/HEHS-00-29, Nov. 4, 1999).
Social Security Reform: Implementation Issues for Individual Accounts (GAO/HEHS-99-122, June 18, 1999).

Social Security: Different Approaches for Addressing Program Solvency (GAO/HEHS-98-33, July 22, 1998).

Tax Administration: Assessment of IRS’ 2000 Tax Filing Season (GAO-01-158, Dec. 22, 2000).

Earned Income Credit: IRS’ Tax Year 1994 Compliance Study and Recent Efforts to Reduce Noncompliance (GAO/GGD-98-150, July 28, 1998).

Tax Administration: Earned Income Credit Noncompliance (GAO/T-GGD-97-105, May 8, 1997).
Hanford’s aging underground tanks contain about 54 million gallons of highly radioactive waste. DOE currently estimates the total cost of cleaning up the tank waste at more than $50 billion (in actual year dollars). To convert the waste into a form for more permanent storage, the waste will be separated into high-level and low-activity components and then, through a process called vitrification, converted into a glass-like material that can be poured into steel containers where it will harden. The immobilized high-level waste will be stored on-site for eventual shipment to a national repository, while the low-activity waste will be permanently disposed of on the Hanford Site.

DOE envisioned that two contractors would build and operate demonstration facilities that would initially treat at least 6 percent of the waste. DOE referred to this part of the waste treatment effort as phase I. DOE estimated that phase I would last at least until 2007 and cost about $3.2 billion, plus another $1.1 billion in contract support costs, for a total of about $4.3 billion. In September 1996, DOE awarded a fixed-price contract for $27 million to each of two contractor teams to begin phase I by developing preliminary facility designs and other preliminary project plans. One team was led by BNFL, and the other was led by Lockheed Martin Advanced Environmental Systems (Lockheed). In phase II, contractors would compete for a contract to process the remaining tank waste.

DOE’s experience during the initial part of phase I led to a change in the contracting approach. In May 1998, after reviewing the preliminary designs and plans submitted by the two competing teams, DOE decided to continue phase I with only one contractor—BNFL. On August 24, 1998, DOE signed a fixed-price contract with BNFL for $6.9 billion to continue with phase I.
DOE estimated that its other costs related to supporting BNFL’s efforts would be about $2 billion, bringing the project’s total estimated cost to about $8.9 billion. DOE’s August 1998 contract with BNFL is a substantial departure from DOE’s original privatization strategy. According to DOE, changes to its initial approach were made to optimize the technical approach, to make the project financially feasible, or to reduce the likelihood of performance failure. These changes fall into four main areas: competition, financial issues, facility issues, and schedule revisions.

Unlike DOE’s original approach, the project no longer includes competition between contractors. DOE and outside expert reviewers found that the approach set forth by the Lockheed team presented an unacceptably high technical risk in attaining DOE’s cleanup goals. In contrast, DOE concluded that BNFL’s technical approach was sound, using technologies for waste treatment and vitrification that were well developed and had been used in other waste treatment situations. Therefore, DOE authorized only BNFL to proceed through the remainder of phase I. The extent to which competition will be present in phase II is unknown.

DOE’s approach to financing the project has shifted from requiring the contractor to obtain all needed financing to a strategy of agreeing to repay BNFL’s debts above its equity, insurance, and other limited funds if BNFL defaults on its loans and DOE terminates the contract. DOE officials said that the government’s commitment to repay the contractor’s debt was needed, in large part, to make the project financially feasible. Government backing of the private debt is an unusual feature for a fixed-price contract because the government normally does not agree to pay a contractor’s debt as an allowable cost.

Another change was that neither contractor was willing to commit to a fixed unit price and schedule without adding significant contingency to the price of the contract.
The August 1998 contract identified a target price and set August 2000 as the date at which the unit price will be fixed and BNFL’s funding commitments will be established.

The original proposal included temporary facilities that were estimated to have a useful life of approximately 10 years. According to DOE, however, both BNFL and Lockheed concluded that shorter-term facilities were not feasible and that more permanent facilities were needed to provide the required levels of safety, operability, and maintainability. The contract now requires the waste treatment facilities to be designed to operate for a minimum of 30 years and to have the capability to increase capacity. DOE said that although this approach means much more expensive facilities than originally anticipated and, therefore, an increase in project costs for phase I, the more permanent and expandable facilities allow DOE more flexibility and options in how the waste cleanup is completed.

In addition to more permanent, costly facilities, the new contract extends the design period and delays the start of construction about 19 months beyond what was originally planned. Both BNFL and Lockheed indicated that additional time was needed to further develop the project’s design and plans for meeting regulatory and permitting requirements. The contractors believed that adhering to the original schedule would carry too many uncertainties and that they would be unable to obtain needed financing for the project unless a more realistic schedule could be negotiated.

The current schedule and cost estimates for the project are substantially greater than DOE’s original estimates. In 1996, DOE estimated that in the first phase of the project, two contractors would process 6 percent of the waste by 2007 and up to 13 percent of the waste by 2011. DOE is now estimating that the first phase will last until at least 2017 and that 10 percent of the waste will be processed.
Design activities have been extended by 2 years, construction will take about 4 years longer, and the time to process the waste will increase from 5 years to about 10 years. Estimated costs for the project have also increased significantly. The total project costs for phase I, including DOE’s support costs, increased from $4.3 billion in the original estimate to $8.9 billion in the current estimate (in constant fiscal year 1997 dollars). The waste processing facilities now being designed will cost nearly $1 billion more to build than the demonstration facilities DOE originally proposed. Because of the longer period during which investors will expect a return on investments, equity and debt financing costs are expected to increase from about $1 billion to more than $3 billion. And the average cost to process waste will double, from $760,000 per metric ton to $1.5 million per metric ton.

Despite the dramatic increase in estimated costs for this project, in July 1998, DOE estimated that its revised approach for phase I would provide savings of 26 to 36 percent when compared with two alternatives—a management and operations (M&O) contract or a cost-reimbursement contract with performance-based incentives. The savings estimate of 36 percent was based on comparing the proposed BNFL fixed-price approach with an M&O approach based on past Hanford management and operating contractor cost data; the estimate of 26 percent was based on a comparison with the estimated cost for BNFL to perform the work under a cost-reimbursement contract. However, our review of DOE’s most recent estimates indicates that the savings amounts should be viewed with considerable caution.

Specifically, comparing the revised approach with an M&O contracting approach is not meaningful because DOE would no longer seriously consider using such an approach.
DOE’s cost savings analysis could be more meaningful if it included a range of contracting and financing options, such as various combinations of government and private financing.

For the contract alternatives DOE considered in its analysis, the margin of error was plus or minus 40 percent, meaning that the actual cost could be up to 40 percent less than or greater than the estimate presented. Because the order-of-magnitude estimates are subject to so much variability, it is difficult to assign much credence to the overall savings estimate.

Cost growth estimates were not used consistently. For the comparison between a fixed-price contract and a cost-reimbursement contract with performance incentives, DOE assumed that cost growth would be 68 percent for the cost-reimbursement contract and that the fixed-price contract would have no cost growth. However, other evidence indicates that fixed-price contracts may have greater cost growth than cost-reimbursement contracts. Specifically, a DOE-funded study found that fixed-price contracts had greater cost growth than cost-reimbursement contracts.

Under the revised contract approach, DOE faces a substantial financial risk that could be in the billions of dollars. This risk comes mainly in the form of an agreement to pay BNFL for much of the debt incurred in constructing and operating the waste treatment facilities if BNFL defaults on its loan payments and DOE terminates the contract. This agreement has the same practical effect as a loan guarantee and is a dramatic departure from the original privatization strategy. If DOE had provided a guarantee for BNFL’s loans from a private lender, the Federal Credit Reform Act would have required DOE to estimate the net present value of the subsidy cost of the loan guarantee over the term of the loan and to have budget authority available for this full cost before the guarantee could be provided.
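The sensitivity of DOE’s savings estimate to its cost-growth assumption can be illustrated numerically. The base estimates below are invented (the testimony does not give the underlying figures); only the 68 percent cost-growth assumption, the roughly 26 percent savings figure, and the plus or minus 40 percent margin of error come from the text.

```python
# Illustrative sensitivity check on the savings comparison. FP and CR are
# made-up base estimates chosen so that the stated assumptions reproduce a
# roughly 26% savings figure; they are not DOE's actual numbers.

def savings(fixed_price, cost_reimb_base, cost_growth):
    """Fractional savings of the fixed-price approach vs. cost reimbursement."""
    return 1 - fixed_price / (cost_reimb_base * (1 + cost_growth))

FP = 8.9  # $ billions, illustrative fixed-price estimate
CR = 7.2  # $ billions, illustrative cost-reimbursement base estimate

print(f"{savings(FP, CR, 0.68):.0%}")        # ~26% under the 68% growth assumption
print(f"{savings(FP, CR, 0.0):.0%}")         # negative if neither contract grows
print(f"{savings(FP * 1.4, CR, 0.68):.0%}")  # a +40% estimate error erases it
```

The sketch makes the testimony’s point concrete: with estimates good only to plus or minus 40 percent, and with the savings figure resting entirely on assumed cost growth in one alternative, the claimed 26 percent advantage can vanish or reverse.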
The amount of DOE’s potential liability is unknown, because the amount of borrowing that will be covered under the commitment will likely not be determined until the contract price is established in August 2000. However, BNFL’s vice president and project manager told us that DOE’s potential liability could be as much as $3 billion. He said that in the case of a default, $3 billion is about the maximum debt that would be outstanding after BNFL’s equity and contingency funds were applied.

DOE’s financial risks hinge on a number of factors that could potentially affect the project. We identified six main types of factors, which we believe merit continued attention as the project proceeds.

BNFL officials acknowledge that although the technology they plan to use has been successfully applied in other settings, it has been tested only on small amounts of Hanford waste in laboratories and has not been used at production facilities to vitrify the unique types of waste at Hanford. Under DOE’s original approach, the success of the selected technologies was to be demonstrated in temporary plants; in DOE’s revised approach, permanent plants will be built. BNFL has developed various other approaches to deal with the need to ensure that the technology will work. These include conducting tests on certain aspects of the technology at existing facilities at other DOE sites and in the United Kingdom and constructing a prototype melter for the low-activity waste vitrification process. DOE expects to hire experts to review BNFL’s demonstration plans and testing results.

Under its revised approach, DOE retains a significant part of the risk for the success of this technology. In the worst case, if demonstration activities fail or prove inadequate to ensure the success of full-scale operations, the overall project may fail, and DOE will be liable for paying off a significant portion of BNFL’s debt after BNFL’s resources are exhausted.
If demonstration activities show that the technology is usable but flawed, treatment facilities may require expensive retrofitting to make them viable. This could raise the cost of the fixed-price contract that DOE will negotiate with BNFL.

Although the revised approach gives BNFL additional time to design the waste treatment and vitrification facilities, the schedule still poses some potential risk. To give BNFL more time to design the facilities, DOE set back the start of construction by about 2 years. However, even with this change, construction will begin well before all of the design work is completed. BNFL officials estimate that overall design work will be less than 50 percent complete at the start of construction and acknowledged that conducting simultaneous design, construction, and technology testing carries some risk. To reduce this risk, BNFL is performing a periodic risk assessment to ensure that design and technology testing concerns will be addressed as quickly as possible in the next 24 months.

Another factor potentially affecting the success of the project—and therefore DOE’s financial risk—is whether the safety and other regulatory requirements can be successfully met. For example, DOE’s Regulatory Unit raised 90 issues with safety documents that BNFL submitted in January 1998. The manager of the Regulatory Unit described the quality of the BNFL safety documents as poor and said that the next set of safety documents, submitted in August 1998, was also poorly done. Unless the required safety documentation is approved, BNFL will be unable to start construction on schedule. The BNFL project manager attributed the safety documentation problems primarily to the early level of the project’s design and said that BNFL will greatly increase the staff addressing safety-related issues during the rest of phase I. BNFL also has recently hired an experienced nuclear facilities licensing manager to lead this effort.
DOE has also taken steps to help ensure that BNFL is addressing safety issues. For example, DOE has negotiated into the contract provisions that (1) require periodic meetings between its regulatory staff and BNFL to discuss safety issues and (2) provide for DOE's attendance at BNFL's safety committee facility design review meetings. The project also presents another regulatory challenge. DOE planned to have the Occupational Safety and Health Administration (OSHA) regulate worker safety at the plant. However, in May 1998, OSHA declined to assume responsibility, citing the need for statutory and regulatory changes, as well as a full complement of the required resources, to be in place first. If OSHA does not regulate worker safety, then DOE must do so. The manager of DOE's Regulatory Unit said that if this issue is not resolved by January 2000, his unit will assume responsibility for regulating worker safety so that construction can begin on schedule. DOE is responsible for the following major support activities: sampling and analyzing tank waste (characterization); providing infrastructure, which includes roads, water, electricity, and wastewater treatment; retrieving waste, which requires DOE to retrieve waste from the tanks and deliver it to BNFL while keeping the chemical makeup of the waste within specified ranges; and storing and disposing of waste after processing, which requires DOE to temporarily store the high-level waste and permanently store low-activity waste. DOE estimates that support activities will cost about $2 billion, including about $600 million for waste retrieval, $40 million for characterization, and about $370 million for waste storage and disposal. Although support activities are essential to project success, many of them are still in the planning stages, and potential problems are not yet apparent. At this time, the areas that appear to be most prone to problems are waste retrieval and waste storage and disposal.
DOE’s site support contractor concluded that these two problems have a high risk of adversely affecting the project. As a result, DOE could have to make idle facility payments. In response, the site support contractor identified a set of mitigating actions that it believes will reduce the risk that the problems will adversely affect the project. DOE’s ability to fund the project within its own budget is an important factor in ensuring that lack of funding does not lead to project termination. DOE estimates that it will need more than $10 billion in actual year dollars from fiscal year 1999 through 2017 to fund the $6.9 billion project cost—an average of $537 million annually. This funding represents a substantially increased need for funding at the Hanford Site, where current annual budgets for all on-site cleanup activities total about $1 billion. If DOE could not provide funding for the privatization project when needed, the contract would likely be terminated, triggering DOE’s liability to pay BNFL for the amounts borrowed against the company’s assets. DOE officials said they did not yet have a detailed funding plan for how they would find the additional funding within their budget. However, assuming no significant increase from the Congress, DOE indicated that a major source of funds would likely be funding made available when other DOE sites, such as Rocky Flats and Fernald, are cleaned up and closed. Given DOE’s track record in completing environmental cleanup projects, however, we are concerned that the funds may not be available when they are needed. Another issue that could potentially affect DOE’s ability to ensure that sufficient funding is available for the project relates to how the new contracting approach is classified in the budget. Because of budget limitations contained in the Budget Enforcement Act, cost estimates are prepared for programs, including projects in DOE’s privatization program, to ensure that the limitations are not exceeded. 
If a federal agency offered a federal government guarantee to a private lender for a contractor’s debt financing, the agency would have to estimate the subsidy cost of the loan guarantee. This is a complex process and is based on the risk of a default or nonpayment of the loans and other factors. The agency would then need the budget authority for the full net present value of the subsidy cost before it could make the guarantee. Although the tank waste project is not structured as an explicit loan guarantee, there is an increase in the government’s potential liability associated with making BNFL’s loans an allowable contract cost. Neither DOE nor the Office of Management and Budget has estimated this potential cost. This is of consequence because it affects how much funding DOE will have to have on hand for the project, and when. In an effort to balance risks and realize cost savings, DOE selected a fixed-price contracting approach for the project. Federal acquisition regulation guidelines note that fixed-price contracting works best when the possibility is low for changes with cost and schedule implications. However, the BNFL contract cites at least 15 events, such as regulatory changes or failure to provide waste on a timely basis, that could cause cost or schedule increases. The consequence of such changes is that they would constitute a potential basis for adjusting the fixed price or paying agreed-upon additional amounts. Federal guidelines state that another factor contributing to the successful use of fixed-price contracting is competition, which helps determine a price that minimizes the cost to the government while providing a fair profit to the contractor. DOE’s revised approach removes competition as a check on price. Instead, DOE has required BNFL to provide certified cost or pricing information for use in evaluating BNFL’s basis for its proposed fixed unit prices. 
Without competition, however, DOE may not have the same assurance of obtaining the best value for the negotiated price. Managing this large, complex project presents a significant challenge to DOE. The agency’s continuing challenge will be to translate the plans it has made into sustainable oversight efforts that are capable of overcoming problems that have plagued many past waste cleanup projects. DOE has had difficulty managing other large projects. Our past reviews have shown a consistent pattern of poor management and oversight by DOE. For example, in our 1996 report on DOE’s major system acquisition projects (generally projects costing $100 million or more), we reported that at least half of the ongoing projects and most of the completed ones had cost overruns and/or schedule slippage. Some of the reasons for cost overruns and schedule slippage included inadequate project oversight and insufficient attention to technical, institutional, and management issues. In addition, our reviews of individual DOE cleanup projects such as the Defense Waste Processing Facility at Savannah River, the Pit 9 cleanup at Idaho Falls, and the Spent Fuel Storage Project at Hanford all identified problems with DOE’s oversight activities as factors contributing to project difficulties. At least in part to respond to these past difficulties, DOE has developed several systems and processes to manage the tank waste project at Hanford and has subjected its plans to outside review. Despite these efforts, however, outstanding issues concerning technical staff, site support activities, and project administration may keep DOE from being fully prepared to oversee the project. Technical staff: DOE has established a team eventually expected to number about 80 technical and managerial staff to oversee the project. As of August 31, 1998, there were about 30 vacancies, including key staff such as the Deputy Project Manager and five of nine DOE staff in the contract management group. 
DOE’s Director of Contract Reform and Privatization said that the Hanford unit does not have all of the technical skills necessary to ensure success in overseeing the project. He was especially concerned about the shortage of contract expertise related to administering fixed-price contracts. According to DOE’s contracting officer at Hanford, none of the current DOE staff are experts in fixed-price contracting. DOE officials plan to hire these and other needed staff during fiscal year 1999. Site support activities: Also critical to the project’s success will be the support that site contractors must provide in preparing infrastructure improvements, retrieving waste, and removing and storing the containers of vitrified material. Outside reviewers commissioned by DOE and the contractor managing the Hanford site have concluded that the support could be provided if adequate funding were forthcoming. However, DOE and tank farm officials said that the project is funded at about $23 million less than needed for fiscal year 1999. DOE has requested full funding for fiscal year 2000, but the budget has not yet been finalized. According to the Director of the Waste Disposal Division, not fully funding support activities in the next couple of years could delay the project. Project administration: Our past work on other DOE projects indicates that carefully administering the contract may also be critical to ensuring that DOE and the contractor work together effectively.
DOE has paid considerable attention to developing an approach to overseeing BNFL’s operations. Among other things, it has followed a systems engineering process that involved developing 23 “interface control documents” for areas, such as infrastructure, emergency response, and permitting, where DOE or the site contractor has interrelationships with the BNFL contract, and it has specified in the contract that BNFL must deliver completed test reports to DOE for numerous activities, such as validation of chemical processes, qualification of proposed products, and effectiveness of a nonradioactive pilot melter. The potential problem is not with DOE’s efforts to date but with its willingness to fully implement the oversight plans it has developed for the project. Our work over several years and on a variety of DOE activities has disclosed a consistent pattern of failure on the part of DOE to fully implement the plans that it develops. For example, in 1997 we reported that two projects at the Fernald, Ohio, site had weaknesses, including insufficient DOE oversight of the contractor, inadequate testing of the technology, and delays in completing planning documents. These problems contributed to a $65 million cost overrun and almost 6 years of schedule slippage. More recently, in a review of DOE’s management of contaminated soils above the groundwater at Hanford, we found that although DOE drafted a management plan by 1994, it never implemented the plan. Four years later, after admitting that the tank waste had leaked into the groundwater, DOE has still not implemented a comprehensive management strategy. Mr. Chairman, in the report we are releasing today, we recommended that DOE take immediate action to fully implement the project’s management and oversight plan, and we suggested to the Congress that an additional review of the project at the end of the extended design phase would be appropriate given the many uncertainties and decisions that remain. Thank you, Mr.
Chairman and Members of the Subcommittee. That concludes our testimony. We would be pleased to respond to any questions that you may have. GAO discussed the challenges facing the Department of Energy (DOE) in cleaning up the waste in the 177 underground storage tanks at Hanford, Washington, focusing on: (1) how DOE's current approach has changed from its original privatization strategy; (2) how this change has affected the project's schedule, cost, and estimated savings over conventional DOE approaches; (3) what risks DOE is now assuming with this change in approach; and (4) what steps DOE is taking to carry out its responsibilities for overseeing the project.
GAO noted that: (1) the project as currently envisioned is substantially different from DOE's 1996 initial privatization strategy; (2) the most significant changes include eliminating further competition between contractors, building permanent facilities that could operate for 30 years or more instead of temporary facilities, and extending by 2 years the design phase and the dates for completing project financing arrangements and agreeing on the final contract price; (3) the revised approach extends the completion date for processing the first portion of the waste from 2007 to 2017, and total costs rise from $4.3 billion to $8.9 billion, including $2 billion in DOE's support costs; (4) the increased costs are mainly the result of DOE's decision to build permanent facilities that will take longer and cost more to design and build and the higher financing costs and contractor profits involved in operating these facilities over a longer period of time; (5) DOE estimates that this approach has the potential to save 26 to 36 percent over the contracting approaches it has used in the past; (6) the revised approach represents a dramatic departure from DOE's original privatization strategy of shifting most financial risk to the contractor; (7) the contract now calls for DOE to pay BNFL, Inc. 
for most of the debt incurred in building and operating the facility if BNFL should default on its loans; (8) DOE agreed to assume this risk because it did not think BNFL would be able to obtain affordable financing unless the government provided some assurance that the loans would be repaid; (9) DOE's financial risks are significant because the project has a number of technical uncertainties such as using waste treatment technology that has yet to be successfully tested at production levels on Hanford's complex and unique wastes, and other management challenges; (10) in an attempt to avoid repeating past mistakes in managing large projects, DOE has identified additional expertise it needs and has developed several management tools to strengthen its oversight of the project; (11) the success of the project, however, will depend heavily on how well DOE implements these plans; and (12) DOE has a history of not fully implementing its management and oversight plans, and there are some early indications on this project that DOE may be having difficulty ensuring that the proper expertise is in place and fully funding project support activities.
In this report, the term “university” includes nonacademic entities such as the university administration, student union, alumni association, athletic departments, and bookstore. These entities may or may not be autonomous (see fig. 1). Student unions are the center of college community life, serving students, faculty, staff, alumni, and guests, and therefore are often the focus of credit card marketing. Alumni associations provide a fund-raising link to graduates, offer financial services to alumni and students, and therefore can be a source of credit card customers. As taxpayer support for universities has diminished relative to other sources of income, universities have sought to raise funds by increasing tuition and fees and becoming more market oriented. Some universities have sought increased revenues through contracts with private companies (e.g., sale of space for advertising at athletic arenas) and increased alumni donations. The credit card industry is a major provider of financial services and a multibillion-dollar industry. According to the American Bankers Association, in the second quarter of 1998 companies that issued Visa and MasterCard credit cards had 335 million accounts, including 186 million active accounts with balances totaling $401 billion. The top-10 credit card issuers held 75 percent of total bank credit card receivables. The preferred marketing technique for potential customers was direct mail—with 3.54 billion pieces of mail sent in 1999—but card issuers also used techniques such as “tabling” on university and college campuses. In addition, some card issuers pursued “affinity relationships” with nonfinancial organizations and institutions, including universities. These relationships often result in a credit card bearing the business or institutional logo and payments from the card issuers based on the number of cards issued, the charges made to the cards, or both.
Some credit card issuers are engaged in the practice of extending credit to borrowers who are at a higher risk of default than traditional customers. These issuers are lending to borrowers who are attempting to establish or expand their credit history. Many college students—mostly those who are young, are not employed or have limited employment income, and have no credit history—fall into this category. Bank regulators have noted that these lending activities can present a greater-than-normal risk for financial institutions and deposit insurance funds. Customers, including college students, with limited or no credit history and income will be charged a higher interest rate to compensate for the higher risk of nonpayment. Banks issuing credit cards are subject to oversight by federal bank regulators to ensure compliance with federal laws and regulations. Federal Reserve staff told us that credit cards issued to college students had not been the focus of bank examinations because examiners tend to assess the risk of the credit card portfolio as a whole and do not examine subgroups of card holders—especially at banks where the credit card portfolio is a minor portion of their financial business. These officials said that college student credit card portfolios have not been viewed as especially risky, even at banks whose primary business was issuing credit cards. Office of the Comptroller of the Currency (OCC) officials told us that although they have not focused bank examinations on credit cards issued to college students, they do monitor and examine an issuer’s various credit card portfolios—including a review of marketing and acquisition channels, underwriting, and other risk management functions. The college student segment typically represents a small portion of the overall portfolio, and OCC is not likely to spend additional time on it.
OCC officials told us that if card issuer management reports provided to OCC examiners showed that the college student segment was a significant portion of the credit card portfolio and was growing rapidly or experiencing performance weakness, OCC would devote more resources to a review of the college student segment. Bank regulators review banks’ compliance with laws relevant to credit cards, including regulations governing credit card disclosure and advertising. The Truth in Lending Act, among other things, requires card issuers to disclose key terms and costs in solicitations and applications to open credit and charge card accounts, when an account is opened, and in billing statements. Required disclosures include the periodic rate of interest that will be applied to account balances—expressed as an annual percentage rate—and an itemization of any other finance charges. Special requirements also apply to credit advertisements. The Federal Reserve’s Regulation Z implements the Truth in Lending Act. The information we reviewed revealed consistent views of the advantages and disadvantages associated with using credit cards. For those students who manage their credit responsibly, credit cards provide access to credit and payment conveniences. For those college students who do not manage credit responsibly and have trouble repaying debt, the disadvantages of credit cards can outweigh the advantages, and their credit card debt may be costly and difficult to repay. Card issuers have used lower credit limits and other techniques on a per card basis to constrain the amount of debt that college students can accumulate. 
The information we reviewed indicated that college students want their own credit cards, both for convenience and to establish a credit history. The conveniences that credit cards offer students include “cashless” transactions; an interest-free loan from the time of purchase until the payment is due; cash advances from automated teller machines; the ability to shop by telephone and on-line and to make hotel reservations; the chance to purchase items that students might not have the cash to buy; and an instant source of credit that is available without filling out forms or undergoing credit checks. Several individuals we interviewed noted that credit cards provide some financial security for students. Unlike cash, a lost or stolen credit card can be replaced, and there are liability limits for fraudulent or unauthorized charges. Credit cards also offer resources in case of emergencies, such as a large car repair bill or airfare home during a family crisis. Some parents approve of their college students having credit cards because they see them as a tool for learning financial responsibility. Some student group representatives and representatives of credit card issuers cited free gifts or bonuses associated with obtaining a card and continued credit card use as advantages to card ownership. Finally, some issuers pointed out that monthly statements can serve as a financial record for students and their families. Gifts or awards associated with credit cards marketed to college students include cash rebates, magazine subscriptions, coupons reducing the price of airplane tickets, discounts or free telephone calls, points toward consumer products, and rebates for a car. For some students, the disadvantages of having a credit card may outweigh the advantages. Some consumer group representatives, debt counselors, and university officials told us that students may not understand the consequences of incurring excessive debt and making payments late.
The convenience of credit cards may tempt students to live beyond their means. Consumer and credit counseling groups pointed out that excessive credit card debt and late payments can impair a cardholder’s credit rating and make it more difficult and costly to obtain credit in the future. Credit card issuers emphasize this same point in information they make available to students. Many of these sources also noted that students who pay only the minimum balance each month may not understand the cumulative effect of interest rates. For example, a college student with a credit card loan of $2,000 and an interest rate of 19 percent who pays back the loan at $40 per month will incur interest charges of $1,994 by the time the loan is paid in full. At this rate, it would take 100 months, or over 8 years, to pay back the loan (table 1). Bankruptcy reform legislation that is currently pending before Congress would require such an example to be included in credit billing statements, but at this time no such disclosure is required. There was also general agreement that students may find credit card debt and other debts harder to repay upon graduation than they had anticipated. Parents of college students may or may not have the financial resources to help these students reduce or eliminate credit card and other debt. Some parents may have the resources to help but choose not to provide financial assistance with debt because they want their college student to learn a difficult lesson about financial responsibility. According to the College Board, the average undergraduate with student loans graduated owing $19,400 in 1998-99. College officials and debt counselors also told us that students may overestimate their starting salaries and underestimate their living costs after graduation.
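The $2,000 minimum-payment example above can be verified with a short amortization loop. The sketch below is illustrative only: it assumes interest is compounded monthly at the stated 19 percent annual rate and that the $40 payment is applied at the end of each month, assumptions that are ours rather than the report's.

```python
# Illustrative check of the report's example: a $2,000 balance at 19 percent
# annual interest, repaid at $40 per month. Assumes monthly compounding and
# end-of-month payments (our assumptions, not stated in the report).

def months_to_payoff(balance, annual_rate, payment):
    """Simulate month-by-month repayment; return (months, total interest paid)."""
    monthly_rate = annual_rate / 12
    months = 0
    total_interest = 0.0
    while balance > 0:
        interest = balance * monthly_rate      # interest accrued this month
        total_interest += interest
        amount_due = balance + interest
        balance = amount_due - min(payment, amount_due)  # final payment may be smaller
        months += 1
    return months, total_interest

months, interest = months_to_payoff(2000.0, 0.19, 40.0)
print(f"{months} months, ${interest:,.2f} in interest")  # about 100 months, roughly $1,994
```

Raising the monthly payment sharply shortens the payoff period and cuts the total interest, which is the cumulative effect the report's table 1 illustrates.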
According to a 1998 study of college students and credit cards, the potential accumulation of high interest payments on large amounts of credit card debt increases when four or more credit cards are owned, average credit card balances are greater than $1,000, balances are carried over each month, and tuition and fees are charged. At the extreme, excessive credit card debt combined with other financial problems can lead to personal bankruptcy, according to one credit counseling organization. We were unable to determine the number of college students filing for bankruptcy. U.S. Department of Education officials told us that they did not track the number of college students filing for bankruptcy, nor did they know of any other organization or study that reported this information. Officials of the Administrative Office of the U.S. Courts and the Executive Office for U.S. Trustees, which have responsibilities regarding bankruptcies, told us that their officials do not collect data on occupational status, including whether someone is attending college. They also told us that although those filing for bankruptcy are asked to report their age, that information, along with much of the other information reported by bankruptcy applicants, is not systematically analyzed. American Bankruptcy Institute officials told us that they did not know of studies that tracked the college attendance or age of individuals filing for bankruptcy. We did identify some unpublished academic research that included data on age but not student status. The researchers collected demographic data, including age, from bankruptcy applicants in 1999 and during previous years. Based on their data collection effort, using a questionnaire completed by 1,974 individual debtors filing for bankruptcy during the first quarter of 1999 in eight federal judicial districts around the country, the proportions of debtors in bankruptcy for selected age groups during 1999 are displayed in figure 2.
Fewer Americans under age 25 filed for bankruptcy in 1999 than Americans between ages 25 and 34, but more filed than those age 65 and older. The growth rate of bankruptcy filings for people under 25 was greater than that for people between ages 25 and 34 but less than that for people age 35 and older (see fig. 3). These data do not indicate how many individuals under 25 were college students, nor do they indicate what, if any, contribution credit card debt made to these bankruptcy filings. Nonbusiness bankruptcy filings have declined somewhat in the last two years, from about 1.4 million in 1998 to about 1.3 million in 1999 and about 1.2 million in 2000, according to the American Bankruptcy Institute. We identified three studies that provided some data on how college students acquire and use credit cards and pay credit card debt. Two of the studies—a survey sponsored jointly by The Education Resources Institution and Institute for Higher Education Policy (TERI/IHEP), and a survey by the firm Student Monitor—used similar methodologies and generated similar findings. The third study was conducted by Nellie Mae, a Sallie Mae subsidiary that provides loans for higher education. This study covered only a small group of students applying for a particular type of loan, and its findings differed from those of the other reports, which covered a broader and more typical population of college students. The Nellie Mae study showed more students owning credit cards and a higher average level of credit card debt. All three studies had generally sound methodologies but with some limitations: the TERI/IHEP and Student Monitor studies relied on self-reporting and were subject to nonresponse from sampled students, and the Nellie Mae study covered only a small pool of students who were trying to get a particular type of loan.
The TERI/IHEP and Student Monitor surveys drew statistically valid samples that were representative of a broad college student population in the United States. The TERI/IHEP study, published in June 1998, was a telephone survey of a random sample of 750 college students drawn from a commercially available list. The Student Monitor study, conducted in spring 2000, was based on in-person interviews with 1,200 randomly selected college students from 100 universities around the country. The schools were selected to provide a representative sampling based on location, type of higher education institution (public or private), and enrollment. Figure 4 compares results of the two studies in key areas. These two studies had an important limitation: they were based on information reported by the students themselves and were not designed to verify that information. Some researchers maintain that respondents sometimes underreport the quantity or level of characteristics that could be considered unflattering. Despite this and other limitations (such as a reliance on memory and nonresponse of part of the sample), these two studies provide the best data currently available for a broad population of college students. Appendix I contains more information about the methodology and findings of the studies, and appendix III describes other studies we identified on college students and credit cards that are not discussed in the body of the report because of their more pronounced methodological limitations. The TERI/IHEP and Student Monitor studies found that nearly two-thirds of all college students had at least one credit card in their name (fig. 5). Between 6 and 13 percent of college students had four or more credit cards. According to the Student Monitor study, more than half of the students reported credit limits of $1,001 to $5,000, and the TERI/IHEP study reported that 24 percent of students had total combined credit limits of more than $5,000 (fig. 6).
Figure 6 depicts higher credit limits for the majority of students surveyed in the TERI/IHEP study (a combined total of 51 percent reporting credit limits of $2,001 or more compared with 30 percent of students surveyed by Student Monitor). The difference may be explained by the difference in the samples used in each study. The TERI/IHEP sample of students included 11 percent who were graduate or professional school students. Because these students are likely to be older, they may have the resources to qualify for higher credit limits. The TERI/IHEP study also included 29 percent who were at 2-year schools. More of those students may have been working full time and have had the resources to qualify for higher credit limits. Survey results indicated that college students got their credit cards from a variety of sources. According to the Student Monitor study, 36 percent of students obtained their cards by responding to mail offers, 15 percent by filling out an application from a display on campus, and 14 percent by applying at a bank. Smaller percentages came from tabling and off-campus displays (6 percent each); telephone solicitation (4 percent); and 800 telephone numbers, internet advertising, and applications placed in a college bookstore bag or college publication (8 percent combined). The TERI/IHEP study reported that 37 percent of college students got their first credit card through a mailing, 36 percent through an application at a business, 24 percent from an on-campus representative or advertisement, and 3 percent from other sources. This study also reported that 63 percent of the students obtained their first credit card by applying on their own. Another 18 percent reported that their first credit card was obtained from their parents; 14 percent said it was sent in the mail, and 4 percent received a first card by other methods. Many of the students responding to both the Student Monitor and the TERI/IHEP surveys had credit cards as freshmen. 
Almost half of those responding reported getting a bank credit card during their freshman year, but a sizable minority said they already had credit cards when they entered college. According to the Student Monitor study, 46 percent of college students obtained credit cards during their freshman year, 20 percent after high school but before college, and 14 percent in high school. Fourteen percent acquired a credit card during their sophomore year of college, and 5 percent after their sophomore year. Among the students surveyed by TERI/IHEP, 55 percent reported receiving credit cards in their first year of college. Another 25 percent said they got their first credit card in high school, while 10 percent received theirs as sophomores and 10 percent after the sophomore year. The two surveys showed that college students used their credit cards for a broad range of items. Students responding to the TERI/IHEP study said that the most common items for which they used credit cards were routine personal expenses such as food, clothing, and entertainment (77 percent); occasional and emergency expenses (67 percent); and books and school supplies (57 percent). Only 12 percent used credit cards to pay tuition and fees, and just 7 percent used them for room and board. Of the students who did charge their tuition and fees, over half (57 percent) paid the charges in full right away. Of survey respondents with credit cards, 44 percent said that credit cards were used for living expenses, 24 percent said they were used for large occasional purchases or health care, and 22 percent said they were used for education-related expenses such as tuition, fees, books, and supplies. Student Monitor asked students how they typically paid for certain goods and services. Of the students surveyed who purchased airline tickets, 61 percent reported paying for them with credit cards. 
Thirty-three percent said they used credit cards to pay for car repairs, and 21 percent said they paid tuition with credit cards. College students charged an average of $127 a month in 2000, according to Student Monitor. Four credit card issuers provided us with data that showed the items college students charge most frequently (fig. 7). Their data show that the top categories of spending for the most recent 12-month period available were gasoline and other service station goods and services; mail order, telephone, and Internet charges; and food, clothing, and other retail expenses. Two card issuers noted that the spending patterns of their college student customers were similar to those of nonstudents of a similar age or their general customers, but two other issuers reported that “education” was the fourth most frequent spending category. One card issuer noted that data on the types of charges came from the stores where the items were bought and that the charges were often not broken down into specific items. For example, department store charges could represent clothing, cosmetics, or household items, while university bookstore charges could include books, clothing, or athletic supplies. Most of the students who responded to the two surveys said that they paid their own monthly credit card bills and that they paid their balance in full each month. Eighty-six percent of the students interviewed for the TERI/IHEP study said they paid their own bills. Eighty-three percent of students with a card in their own name reported paying their own credit card bill, according to Student Monitor. Fifty-nine and 58 percent of the students surveyed in the two studies reported that they paid their monthly bill in full. Eighty-two percent of the respondents who carried a balance said they typically paid more than the minimum amount due, according to the TERI/IHEP study. 
According to the Student Monitor study, the reported average monthly balance of the 42 percent who carried debt was $577, and 16 percent of those carrying a balance from month to month were running a balance of more than $1,000 (fig. 8). The TERI/IHEP study did not report an average monthly balance but did report balances according to dollar ranges (fig. 9). The Nellie Mae study, published in December 2000, differs from the other two studies in its scope, methodology, and findings. The study covered only a subset of college students who applied for a particular loan product and was not projectable to a national college student population. Nellie Mae drew a random sample of 256 undergraduates from its nationwide group of 1,065 students who applied for private loans for educational expenses early in 2000. These students either did not qualify for federal student loans or had already received the maximum amount available to them. The methodology is unique among the three reports: the study relies on information from credit bureaus rather than on information provided by the students themselves. Credit bureaus receive information from creditors, including credit card issuers, banks, and other entities that extend credit. The Nellie Mae study reported that 78 percent of students in the sample had credit cards; the average number was three cards per student. The percentage of college students with four or more cards (32 percent) was higher than in the TERI/IHEP survey. In general, the Nellie Mae study reported higher levels of debt than the TERI/IHEP and Student Monitor studies. Nellie Mae reported an average credit card debt for those with a balance of $2,748. Thirteen percent of the students in its sample carried credit card balances of $3,000 to $7,000, and 9 percent had balances of more than $7,000. 
There are two possible reasons for the differences between the average levels of credit card debt reported in the TERI/IHEP and Student Monitor studies and in the Nellie Mae study. First, students in the first two studies could have underreported their credit card debt. Second, because the students in the Nellie Mae study were drawn from a small pool of loan applicants, they were not representative of the college student population as a whole. The universities we visited took different approaches to on-campus solicitation by credit card issuers. Some universities had campuswide policies that affected all organizational components, while others allowed nonacademic entities—student unions, bookstores, athletic departments, and alumni associations, for instance—to set their own policies. Only 1 of the 12 universities we visited prohibited credit card solicitation altogether, and just 2 others (both state universities) had relatively strict prohibitions, based in part on state laws. At these two universities, commercial vendors were either prohibited from soliciting on campus or allowed to distribute but not collect credit card applications. The remaining nine universities allowed each university entity to set its own policies. At most of the universities we visited, tabling at student unions and aggressive marketing by vendors hired by credit card issuers created the most controversy. Most of the bookstores we visited were run by national corporations or operated independently of the university and tended to adhere to their own policies. While only a few of the athletic departments were involved in credit card solicitation, alumni associations often established relationships with credit card issuers to raise funds. Partly in response to criticism of university involvement with credit card solicitations, most of the universities we visited offered nonacademic instruction in personal finance. 
Figure 10 shows credit card marketing efforts and other characteristics of the universities visited. In addition, some credit card companies made changes in how they provide disclosure information, and some adopted standards for campus solicitation. Complaints about the marketing practices of credit card vendors at student unions have influenced universities’ policies on solicitation. Student union administrators from some universities we visited cited marketing incentives (in the form of free gifts) as the most frequent source of complaints. These concerns led three universities to prohibit the use of such incentives with credit card applications. One student union administrator complained that the vendors created a “carnival atmosphere” with loud music and games, noting that “the incentives, along with the party atmosphere, masked the responsibilities of owning a credit card,” especially since there was no discussion of the consequences of misusing a credit card. Two officials from a state student association feared that using incentives could lead to abuses. One stated that credit card vendors pressured students to sign up for free gifts and that students would reveal personal information in exchange for gifts as small as a squeeze ball. Instances of aggressive solicitation and the presence of many credit card solicitors in student unions also generated controversy at some universities, leading to more restrictive solicitation policies. Credit card companies pay the vendors according to the number of completed applications secured from students. The vendors we contacted declined to provide us information about how much they are paid for completed applications. Officials at several universities said students had a variety of complaints. 
For instance, students complained that vendors created a “hawking atmosphere,” were “out of control,” and were often “in your face.” An official said that some college students complained that vendors followed them even after they had refused a credit card application. Some of the universities we visited had tailored their solicitation policies to address these concerns, and some had imposed stricter limits. At one university, students voted to ban credit card vending in the student union altogether. Other universities restricted tabling to specific days or increased the fees for vending. One university limited tabling to three times per week and required that the tables be staffed only by students, effectively ending credit card solicitation at the student union. One credit card vendor told us that they reimbursed student groups based on an hourly rate, a flat fee, or a fee based on the number of completed applications. A different credit card vendor told us that they paid student groups between $25 and $200 a day to table credit cards as well as $1 to $5 for each completed application. Complaints that credit card marketing efforts were not adequate or helpful in teaching responsible credit card use also affected solicitation policies at some universities. As noted previously, federal law requires written disclosure of key terms when credit is applied for and extended. Officials and students at several of the universities we visited complained that when soliciting credit cards on campus, credit card vendors did not discuss or bring to the attention of students key credit card terms, such as available interest rates or penalties, that are in written disclosure documents. They also said credit card vendors did not provide information on the consequences of nonpayment. For example, an official of a state student association said students are not told about possible consequences, such as the impacts of a bad credit record. 
In response to such complaints, two universities among those we visited began requiring credit card vendors to hand out additional credit education information along with credit card applications, and three began offering debt education presentations. These universities had both centralized and decentralized policies regarding solicitation. Some policies responded to the ideological views and financial needs of student groups. A student union official at one university told us that the student culture was against commercialism and critical of corporate sponsorships. The university’s three student unions had taken this viewpoint into account in banning commercial solicitation, including credit card solicitation. But other universities chose to consider the financial needs of student groups in formulating their solicitation policies. For example, student unions at five universities allowed student groups that relied on funding from credit cards to sponsor credit card vendors; these were the only vendors allowed to solicit. At one of the universities we visited, credit card vendors paid $4,359 to five Greek organizations and one other student organization over the course of 3 academic years, with one Greek organization receiving $2,370 in payments for credit card solicitation. Some credit card issuers noted that they had responded to concerns about aggressive marketing in two ways: by supplementing disclosure information and by creating a code of conduct for on-campus marketers. Issuers told us that they provide disclosure information to college students both when soliciting and when credit is extended. They include disclosure information on the application or in a separate handout applicants can keep for reference. Several issuers told us that they have the same disclosure guidelines for students and nonstudents. 
Several card-issuing financial institutions, as well as MasterCard and Visa, developed the code of conduct for on-campus credit card solicitation in 2000 (appendix IV). The code, which applies to tabling companies and their representatives (vendors), aims to promote both responsible marketing practices on college campuses and responsible credit card use by students. An official with MasterCard International told us that as of March 2001, six of the largest credit card issuers had adopted the code of conduct. Two tabling companies specializing in college marketing told us that they also adhere to the code of conduct. The tabling companies also had procedures in place for responding to complaints about their representatives, including referring complaints directly to the issuer and retraining or terminating vendors. One tabling official said that the majority of credit card issuers they work with used quality control checks that included inspecting booths and applications and surveying applicants by telephone. Nine of the university bookstores we visited either operated independently of the university or were managed by national corporations. Seven of these bookstores did not receive operating funds from the university, and eight had developed their own solicitation policies. Some bookstore managers told us they must find sources of revenue to help cover costs. Several bookstores allowed tabling and other forms of solicitation, including countertop brochures and applications in textbooks and shopping bags. The stores were paid for each credit card application they submitted to the issuer or received credit against advertising costs. Applications inserted in shopping bags often helped reduce the cost of the bags. One corporation developed its solicitation policies with input from store managers and school officials. 
Bookstore officials told us that tabling was a more limited activity than other forms of solicitation, including placing applications in shopping bags or displaying countertop brochures. Some bookstores had exclusive arrangements with one credit card company that was allowed to table on certain days. One bookstore (owned by the university but independently operated) required credit card vendors to provide consumer education information during tabling events. Another adhered to a university policy banning free gifts as incentives for applying for a credit card. Athletic directors at several of the universities we visited told us that athletic departments engaged in fund-raising activities to help support athletic scholarships and programs. But only two athletic departments had credit card relationships. Some athletic departments engaged in more extensive fund-raising activities than others, particularly some departments at universities classified as Division I in the National Collegiate Athletic Association. These departments in some cases had separate arrangements with corporate sponsors and credit card banks and allowed spot announcements, signage, and credit card tabling at sporting events. In contrast, one athletic department at a Division III university we visited was relatively small and did not rely on credit cards or private sources for funding. Some athletic departments had contracts with issuers that allowed tabling, and two Division I universities had affinity relationships with credit card issuers. One official at a Division I university that had an affinity relationship said that the revenue the cards generated was only a small part of the department’s overall budget and went directly into its general fund. Alumni associations at most of the universities we visited sought additional revenue sources through relationships with credit card issuers. 
Eleven of the 12 associations we spoke with had affinity relationships with credit card issuers that generated substantial income. Alumni association officials told us that credit card issuers offered a flat fee or a lump sum plus royalties for completed credit card applications from association members or a percentage of the total charges made on affinity credit cards. Officials further told us that the income the associations received from the credit card issuers provided significant support for both the associations’ programs and the universities. Income from the affinity agreements was used to cover such things as the associations’ budgets, university operating costs, scholarships and mentoring, and long-term projects, such as the construction of new buildings. According to alumni association officials, their contracts with the credit card issuers precluded disclosure of the terms and conditions of the agreements, including information on payments made to the alumni association. In general, the alumni associations determined how the companies could market the cards. Most of the associations permitted credit card issuers to solicit members through mailings and telephone calls. Still others allowed solicitation in the form of tabling at sporting events, alumni gatherings, and other special occasions, or at the student union. While tabling was permitted under some agreements, several alumni association officials mentioned that credit card issuers were no longer tabling. The associations also decided which members could be the focus of marketing. Generally this group included the alumni themselves and sometimes student members of the associations. For example, officials from four of the seven alumni associations with student members told us that their associations did not permit soliciting of students. The other two alumni associations permitted solicitation of student members, either by mail or by mail and telephone. 
Several of the alumni associations told us that they had the right to approve the language used in marketing materials. According to alumni officials, few students held affinity credit cards. Officials from five alumni associations told us that students were a small percentage of their alumni affinity cardholders (1 to 10 percent of the total credit card holders). Some universities had responded to the increase in student credit card use and on-campus solicitation by offering students financial education and counseling. Ten of the 12 universities we visited provided some form of financial education instruction, and some had credit counseling services or referred students to outside services (fig. 11). Financial education instruction was often part of freshman orientation programs. The head of the collections department at one university, for instance, told us that the orientation program included a discussion on budgeting and the responsible use of credit cards. She explained that the university saw its efforts to balance credit card tabling with debt education as a way to help keep students out of debt. Another university covered credit card use in its summer orientation for the same reason. Two universities, both with decentralized policies regarding solicitation of students, had bankruptcy attorneys available to offer advice or information to students on their credit rights, and one gave a presentation at the beginning of each academic year that provided students with information on credit card use and the potential for financial trouble. All of the universities we visited that provided financial education instruction had voluntary programs, but some officials felt that this instruction should be mandatory given that many parents had not taught their children how to manage money. 
We asked officials at the three universities that provided advice or information from bankruptcy attorneys or credit counseling services if they had any statistics on the extent to which students using these services had problems with credit card debt and how many had filed for bankruptcy. While none of the three universities had data on these issues, a bankruptcy attorney representing one of them did. At one university with an undergraduate enrollment of about 10,000, the student association retained an attorney to provide general and financial advice to students. The attorney, who specialized in bankruptcy issues, stated that credit card debt was a primary concern of students seeking his advice. According to the attorney, over the 3 years since April 1998, approximately 1,328 students had utilized the legal service; of this number, 255 had sought advice on credit card debt issues. The credit card debt of these students ranged from about $2,100 to nearly $39,000, with an average of approximately $11,200. The attorney told us that younger college students tended to have less debt than those older than age 23. He said that about half the college students he saw were over age 23 and that the individuals with 6 or 7 credit cards and the highest levels of credit card debt came from this subgroup. The attorney stated that in some cases, after paying tuition, students had used any excess financial aid to pay their credit card debt. Further, during the last 3 years, 83 students using the legal service had filed for bankruptcy. We asked officials at the 12 universities we visited whether they collected information on why students leave their universities prior to graduation, the extent to which this information identified whether credit card debt was an explanatory factor, and the opinions of university officials on whether credit card debt was a factor in college withdrawal. 
Officials at five of the 12 universities we visited collected information on student withdrawals, but they did not specifically ask the students to report whether credit card debt was a factor in their decision to leave. Officials from three of these universities told us that credit card debt was not generally cited by students as a reason for their decision to withdraw. Even so, officials from four universities, including two of the universities that did and two that did not collect student withdrawal information, told us that they thought students would not report credit card debt as a reason for deciding to withdraw unless the university specifically asked. Nevertheless, 7 of the 12 universities we contacted cited financial concerns, including credit card debt, as possible reasons why students decided to leave. Officials from 4 of the 12 universities stated that they did not sense that credit card debt was a major factor in a student’s decision to withdraw. Officials from 9 of the 12 universities offered a variety of other reasons why students decided to withdraw. Among these reasons were the need to work more hours, family medical or health problems, homesickness, cultural concerns, academic difficulties, career changes, marriage, divorce, and pregnancy. Our review of academic research related to students leaving college indicated that financial considerations are among the many reasons that college students and researchers cite for leaving college prior to graduation. We did not find evidence that the research examined the extent to which credit card debt was a contributory factor in students leaving college. One researcher said that financial considerations appear to be but one part of a complex decisionmaking process, one that depends in large measure upon the nature of the student’s social and intellectual experiences within the college—especially the daily interactions between students and faculty both inside and outside the classroom. 
We surveyed 10 credit card issuers, 6 of which responded, and talked with industry officials. In our survey and discussions with credit card issuers, we found that issuers had a variety of business practices directed toward college students. Some issuers wanted to market to college students because most college students have some income and lower living expenses than nonstudents. College graduates were also attractive because they had higher earning potential than nonstudents, and students tend to continue using their cards after college. Issuers told us that they had several methods of marketing to college students, including direct mail, the Internet, and on-campus displays. Most of the issuers that marketed to students said they customized their underwriting standards for college students. For example, one issuer told us that the college a student attended was more important than whether the student was employed. Interest rates on the credit cards offered to college students were tied to the prime rate, students’ credit ratings, or other factors. Half of the issuers we contacted said that they charged college students the same late fees as other customers. They said credit limits were smaller overall and were adjusted according to factors such as year in college and whether the student had a checking or savings account at the card issuer’s bank. Card issuers also said they tried to help students who were delinquent in their payments by providing counseling or referrals to credit counseling services. They also told us they developed credit information materials and supported financial literacy and debt counseling organizations. Appendix V lists the questions we asked the issuers. Card issuers market to college students because most have some income. According to the Student Monitor study, 55 percent of students said they worked part time and 9 percent said they worked full time in 2000. 
The students reported their mean annual earnings at around $4,550. For the approximately 58 percent of college students who said they received money from home each month to help meet their expenses, the average amount received from home was about $300. Students reported that they had an average of $195 available for discretionary purchases each month. Bachelor’s degree recipients earn 75 percent more on average than those with only high school diplomas, and over a lifetime the gap in earning potential between a high school diploma and a Bachelor’s degree or higher exceeds $1 million, according to the College Board. Two issuers told us that they marketed to students because of the long-term profitability of the college student market. One of these issuers noted that credit cards issued to college students were not as profitable as those issued to nonstudents, but once the students graduated, their cards became more profitable than nonstudents’ accounts. This issuer, which had affinity relationships with sports teams, professional groups, and cause-related groups, told us that college student accounts made up 15 percent of its affinity cardholders. Some credit card issuers’ marketing was directed at college students through affinity relationships. Universities or their components received funds from the card issuer—either a flat fee or an amount based on factors such as the number of cards issued or monthly charges to the cards. One card issuer that sought affinity card relationships told us that the company marketed to colleges and universities ranked highly on academic competitiveness measures and that alumni of the top-rated schools managed their credit responsibly. The issuer added that most of the company’s affinity relationships were with universities that allowed the company to use all marketing channels. 
Card issuers used a variety of methods to market to students, including direct mail, tabling, relationship banking, the Internet, and displays on college campuses known as “take-ones” (fig. 12). Direct mail was a method of marketing to college students for five of the six issuers who responded to our questions about marketing practices. One of these issuers told us that direct mail and telesales accounted for more than three-quarters of their college student accounts. On-campus tabling was the most visible marketing method, and three of the issuers used tabling on and off campus, including at athletic events. Two of the issuers used their branch banks as the primary method of marketing to students, offering credit card applications to students who opened checking accounts and received automated teller cards. All the issuers allowed college students to apply for credit cards through their Internet sites, and students could also apply for the credit cards of the financial institution members of Visa and MasterCard through the Visa and MasterCard Web sites. Only one of the issuers told us that they had an 800 telephone number that students could call to apply for a credit card. Four issuers told us that they customized their underwriting standards for college students, eliminating standard income and employment requirements. The first issuer told us that the company had a unique experiential scorecard for the college market with no income or employment requirement. Extensive experience with college students enabled the company to predict good credit performance based on selective marketing, a credit bureau evaluation, and careful management of the account once the card was issued. This issuer’s college student accounts compared favorably with traditional accounts. The second issuer had two sets of criteria for college students, one for students with credit histories and another for those with no credit records. 
College students with credit files were judged on the basis of ability, stability, and willingness to repay. Applications from students who did not have credit files were judged according to their source—that is, whether they came from a university that had an affinity relationship with the card issuer. This issuer told us that the company rejected most applications from college students. Reasons for denial included that the college students already had credit available, had histories of delinquency, or had too little income. The third issuer told us that the company had a specialized scorecard for college students that took into account limited employment history and other factors that set students apart from the nonstudent population. Employment history, salary, credit reports, credit need, and ability to pay were important elements in the credit decision; the company also considered year in college and grade point average. This card issuer said that underwriting standards for other customers with characteristics similar to those of college students (e.g., customers with little credit experience) varied only in terms of the importance placed on income and credit history. The fourth issuer told us that the company’s underwriting standards required that college students be enrolled in a 2- or 4-year college or a graduate institution; be 18 years of age; be a U.S. citizen; have a monthly discretionary income of at least $200 after rent, tuition, and food are paid for; pass scorecard approval criteria; and not have an existing credit card account at that bank. For this card issuer, employment was not a requirement, but an existing credit history and a demonstrated ability to pay debts were. Again, income and credit histories were more important factors for nonstudents. 
This issuer told us that its college student portfolio was typically a low-risk portfolio, because most applications came from the company’s banking centers rather than from on-campus marketing efforts. Of the two remaining credit card issuers that responded to our request for information about their underwriting practices, one declined to provide information, except to tell us that the risk-adjusted performance of its student portfolio was comparable to that of new credit customers. The remaining issuer told us that its underwriting process was no different for college students than it was for any other customer. This company said that it used all available relevant information to create the most accurate risk assessment possible and accepted only applicants that were judged to be good risks. The terms and conditions some credit card issuers applied to college student credit cards differed from the terms and conditions the companies offered their other customers (fig. 13). Two card issuers, however, treated college students the same as other “new-to-credit” customers. Most card issuers told us that they charged college students a variety of interest rates, depending on the prime rate, credit experience, and other factors. One issuer charged college students interest rates based on the prime rate plus a margin of between 6.9 and 10.9 percentage points. Another issuer charged students interest rates ranging from 13.9 percent to 19.8 percent, depending on credit experience. One issuer charged students different interest rates depending on the source of the application (for instance, mail, Internet, or campus tabling), whether or not the student had a credit history, and the type of card issued. Another issuer charged a flat rate of 15.99 percent. Most card issuers told us that the interest rates they charged varied across customers, including college students. 
One issuer told us that the range of interest rates for student credit cards was wider than the rates charged for customers with established credit histories and pristine payment records. Another issuer told us that the margin it added to the prime rate for college students was between 6.9 and 10.9 percentage points and that this range for nonstudents varied, depending on the type of credit card the customer held. For example, platinum, gold, and classic cards had a range of 2.9 percent to 12.9 percent over the prime rate, while a “reward card” had a range of 8.99 percent to 12.99 percent over the prime rate. Two issuers told us that their student rate was consistent with those offered to nonstudents with similar risk profiles—typically customers with little credit experience. Five issuers told us that they set special low credit limits (between $200 and $2,000) for college students and adjusted these limits upward over time if the student’s credit performance was satisfactory. One issuer told us that the factors considered in raising credit limits included the length of time the account had been open, how it had been used, and the payment history, regardless of account type, while another issuer set credit limits according to the students’ year in school ($700 for freshmen and sophomores, $800 for juniors, and $900 for seniors). This second issuer said that credit-limit increases were granted only to select customers who had demonstrated financial responsibility and were at low risk of default in the future. Students who maintained a banking relationship with this issuer were given higher credit limits. This issuer said that in general it gave nonstudents higher credit limits and “more aggressive” increases than students. Two other issuers set credit limits for college students at anywhere from $200 to $2,000, depending on factors such as credit experience, past performance, class year, and creditworthiness. 
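To make the margin-over-prime pricing described above concrete, the short sketch below computes the resulting student APR range and the approximate monthly finance charge on a carried balance. The prime rate and balance used here are illustrative assumptions for the example only, not figures reported by the issuers.

```python
# Illustration of margin-over-prime credit card pricing.
# The prime rate and balance below are assumptions for this example only.
PRIME_RATE = 8.0  # assumed prime rate, in percent

def student_apr(margin_points):
    """APR equals the prime rate plus the issuer's margin (in percentage points)."""
    return PRIME_RATE + margin_points

low_apr = student_apr(6.9)    # low end of the 6.9-10.9 point margin range
high_apr = student_apr(10.9)  # high end of the range

# Approximate monthly finance charge on a carried $1,000 balance
BALANCE = 1000.0
for apr in (low_apr, high_apr):
    monthly_charge = BALANCE * (apr / 100) / 12
    print(f"At {apr:.1f}% APR: about ${monthly_charge:.2f} per month")
```

Under these assumptions, the student APR would range from 14.9 to 18.9 percent, and carrying a $1,000 balance would cost roughly $12 to $16 a month in interest alone, which illustrates why the low credit limits and gradual increases described above matter for inexperienced cardholders.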
One of these issuers said that credit line increases were based on factors such as payment history, account use, and external revolving debt. Still another issuer set even stricter credit limits for college students (from $500 to $1,000) and did not offer increases until a year after the card had been issued (the increases were generally $500 or less). Most of the card issuers in our study told us that they either provided credit counseling for college students who had trouble making payments or referred these students to credit counseling services. One issuer told us that it was willing to help students by lowering interest rates and adjusting payment schedules. The issuer said that when an account was delinquent, it worked with the student to determine the cause of the problem and took appropriate action. Another issuer told us that although students had primary responsibility for managing their accounts, the company was committed to assisting those who faced debt problems. Students could call the customer service number to discuss concerns about their debt, and a collections specialist would review the account and possibly reduce the interest rate or establish a minimum payment schedule. Another issuer told us that it also tried to assist customers experiencing financial difficulty by reducing interest rates and payments. Two issuers had a partnership with Consumer Credit Counseling Services, to which both students and nonstudents had access and which would attempt to work out a no-interest payment schedule. One issuer said that the company also connected customers with financial counseling organizations such as Myvesta (formerly Debt Counselors of America) that could help work out a budget for the student and negotiate a payment schedule. All six card issuers told us that they provided financial education information in various formats, including television commercials, magazine articles and advertising, brochures, and Web sites. 
Some of these credit education efforts were conducted in conjunction with Visa or MasterCard, and the information was directed at both college students and others with little credit experience. A credit card industry official explained these educational efforts by pointing out that the industry’s interests were not served by having its products misused. Although we were unable to determine the effectiveness of these credit education efforts and the extent to which they led to responsible credit behavior in college students, the information appeared to be widely accessible. Literature was disseminated in several ways. One issuer published a series of credit education brochures on topics such as money management, the cost of credit, and developing a credit history. College students received this information with their monthly billing statements every 3 months. Another issuer included a brochure on responsible credit use in “welcome packages” that were mailed to college students who received credit cards. An industry association official also told us that the association had worked with university officials to disseminate money-management literature at freshman orientation. Other educational efforts relied on computers and presentations. Several of the issuers sponsored Web sites that had credit education components directed at college students, and one sponsored a Web site of the credit education program of the National Consumers League. Another issuer had an interactive CD-ROM that the company had developed with Visa to help consumers learn about personal finance, budgeting, money management, and decisionmaking. A third issuer had a full-time employee who traveled around the country conducting free financial literacy and responsibility seminars at universities. A fourth issuer had developed credit education seminars for educational institutions. 
Finally, a fifth issuer and a credit card association provided financial support for the Jumpstart Coalition, an organization that teaches young adults about personal finance. Studies we reviewed have shown that most college students have at least one credit card. In two nationwide studies, most students reported being able to manage their credit card debt—that is, they said they paid off their balances in full each month or carried a balance of between $1 and $1,000. However, one of the studies we reviewed showed that around 20 percent of students reported carrying a monthly balance of more than $1,000. A third, smaller study of students seeking a particular type of loan reported an average balance of more than $2,700. Credit card debt, combined with the expenses associated with leaving college and finding a job, including making payments on student loans, could create debt repayment problems for students after they leave college. Credit cards were not a new phenomenon for most college students. More than one-third of students had credit cards before they entered college, and another 46 percent acquired them during the first year. Except for charges for tuition and fees, their spending patterns resembled those of nonstudents. University officials and credit counseling organizations worried that as inexperienced users, students would not understand the dangers of accumulating debt. In addition, one study suggested that students who had four or more credit cards, carried relatively high levels of debt, and charged their tuition and fees could have trouble managing their credit card debts. We did not find a uniform response to the controversial issue of on-campus credit card marketing among the universities we visited. In response to complaints about aggressive marketing techniques, a few universities had adopted policies restricting credit card solicitation on campus. 
Several state legislatures had considered legislation limiting on-campus credit card marketing, and one legislature had passed such a bill. But many universities we visited allowed nonacademic entities, such as student unions and bookstores, to set their own policies. In many cases, alumni associations received significant income from credit card solicitation. The universities offered varying levels of educational information on managing finances and support for college students with credit card debt problems. Financial factors are one of many possible reasons that students leave college prior to graduation. The credit card issuers that responded to our inquiries participated actively in the student market, but again they did not have a uniform set of policies or practices. In general, college students were seen as a profitable market over the long term, with some issuers marketing to high-end schools. Some card issuers treated college students as a special category, while others did not. Many issuers adjusted their underwriting standards for students, enabling college students with little or no employment income to obtain credit cards. The card issuers that responded to us were also willing to work with students who had trouble managing their credit card debt, offering options that ranged from credit counseling to reduced interest rates and extended payment plans. Credit cards offer clear advantages to college students because they provide an interest-free loan until payment is due and a convenient noncash payment option for both routine transactions and emergencies. If used responsibly, credit cards allow students to build up credit histories that will facilitate increased access to credit in the future. However, if college students have not learned financial management skills in their secondary education or from their parents and misuse their credit cards or mismanage their credit card debt, the disadvantages can outweigh the advantages. 
Many college students are responsible for making important financial decisions for the first time in their lives and are naïve about managing a budget. As is true with any credit card user, using credit cards to make impetuous purchases can lead to extended repayment periods and high interest charges. Because of inexperience with credit and finance, some college students may not be financially literate and may be at greater risk of substantial debt burdens than more experienced consumers. Consistent misuse of credit cards by college students—particularly combined with student loan debt—could lead to substantial debt burdens. We obtained comments on a draft of this report from representatives of the credit card issuers and Visa and MasterCard officials; Student Monitor, TERI/IHEP, and Nellie Mae officials; the Board of Governors of the Federal Reserve System’s Division of Consumer and Community Affairs, the Federal Reserve Bank of Philadelphia, and the Office of the Comptroller of the Currency; and the 12 universities we visited. The credit card issuers and their association officials raised three points, which we summarize below. First, the credit card industry officials said that the report conclusions, based on the opinions of university officials, students, and credit counseling representatives, were not necessarily an accurate reflection of all students’ experiences on a given campus or the broader experience nationwide. Our research at universities was designed to obtain information about how 12 selected universities were dealing with credit card issues, as we stated in the draft, and was not intended to be a sample projectable to universities as a whole. The sample included a variety of 4-year universities around the country, selected on the basis of various criteria. 
We stratified universities according to whether they were public or private, geographic region, admissions policies, the size and composition of their student body, cost of attendance, and the existence of any affinity relationship with a credit card issuer. University officials spoke to us in their official capacities. Many of the university officials we spoke with had experience at other universities prior to assuming their current positions. Our fieldwork showed university officials struggling to find an appropriate balance between a university as a marketplace of ideas and a marketplace for commerce. We spoke with the presidents of five state student associations, two of whom were from some of the largest states in the country with many universities and college students. We also spoke with representatives of three consumer groups in three geographically different sections of the United States. Second, the card issuers and their association representatives questioned our focus on the Nellie Mae study of credit card usage because it was not based on a random sample representative of the U.S. student population. Our draft report noted that the Nellie Mae study was limited to a subset of students who applied for a certain type of student loan. We expanded the language about the study’s limitations in the Results in Brief section of this report. Third, the card issuers and their association representatives objected to references in the draft to increased bankruptcies among 18- to 25-year-olds, on the grounds that there is no reliable information indicating that the decision to file for bankruptcy resulted from credit card debt incurred while these individuals were college students. We agree with the representatives; the draft did not state that the increased bankruptcies among these young adults are the result of credit card debt. 
Our report does state that none of the potential sources of bankruptcy data that we contacted were able to provide or direct us to data indicating the number of college student bankruptcy filings. We understand that many bankruptcies are associated with significant life events such as a job loss or medical issue. However, it is reasonable to assume that in some, if not many, cases credit card debts are a portion of the debts on which bankruptcy filings are made. Whether or not credit card debt is a cause of bankruptcy filings has been the subject of academic research. The card issuers and their representatives also gave us a variety of technical suggestions, which we incorporated as appropriate. Officials from the 12 universities we visited reviewed the university section of the report. All agreed with our presentation of the information they provided and agreed that we accurately reported the views they shared based on their experiences. We have incorporated their suggestions and technical comments as appropriate. Student Monitor, IHEP, and Nellie Mae officials reviewed the portions of this report that describe the methodology and results of their studies. They made technical suggestions concerning our reporting of their results, which we incorporated. We also obtained comments from officials of the Division of Consumer and Community Affairs of the Board of Governors of the Federal Reserve System, as well as from the Federal Reserve Bank of Philadelphia and the Office of the Comptroller of the Currency. All of these officials gave us technical comments on selected pages concerning how federal bank regulators view college student credit card portfolios in the context of a risk-based bank examination, the advantages and disadvantages of credit cards for college students, and current law and legislation. We are not making recommendations in this report. 
As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issuance date. At that time we will send copies to congressional committees, and copies will be made available to others upon request. Major contributors to this report are listed in appendix VI. If you or your staff have any questions about this report, please contact me or Katie Harris, Assistant Director, at (202) 512-8678. We were asked to respond to several concerns surrounding college students’ use of credit cards. To meet this request, we examined (1) the advantages and disadvantages credit card use presents to college students and available bankruptcy data, (2) the results of key studies showing how college students acquire and use credit cards and how much credit card debt they carry, (3) universities’ policies and practices related to on-campus credit card marketing, and (4) the business strategies and educational efforts credit card issuers direct at college students. We could not address some specific questions posed by the requesters because we were not able to obtain access to the account data of major credit card issuers or specific information on their underwriting policies and practices. As noted below, we are continuing negotiations with a group of credit card issuers in an effort to develop a mutually agreeable arrangement regarding access to appropriate data. To describe the advantages and disadvantages of credit card use for college students, we interviewed officials from universities and credit card issuers, as well as representatives of student groups. We also collected and analyzed information from the credit card industry, universities, student groups, and consumer groups, including Myvesta, the Public Interest Research Group, Auriton Solutions (affiliated with the Association of Independent Consumer Credit Counseling Agencies and other organizations), and the National Consumer Law Center. 
To identify data on college student bankruptcy filings, we contacted officials at the U.S. Department of Education, the Administrative Office of the U.S. Courts, and the Executive Office for U.S. Trustees, as well as an academic who has conducted empirical research on consumer bankruptcy issues. To learn how college students acquire and use credit cards and how they manage credit card debt, we searched for studies on college students’ experiences with credit cards—how students acquired and used cards and paid their credit card bills. We selected and analyzed three studies to highlight in this report. Two of them (one by the nonprofit, nonpartisan groups The Education Resources Institute and the Institute for Higher Education Policy and one by a marketing research firm, Student Monitor) were selected because their surveys were based on random, statistically valid samples of larger and broadly defined populations of college students in the United States. These two studies were limited by the fact that their surveys relied on self-reporting from the students, and research suggests that respondents tend to underreport information that could reflect badly on them—for example, indebtedness. We selected a third survey, done by Nellie Mae—a national provider of higher education loans for students and parents—because its research was based on credit reports and not self-reported data. This study was limited by sample selection bias, as the sample was drawn from only those students who applied for a certain type of private loan. These students either did not qualify for federal student loans or had already received the maximum amount available to them. It is not clear how those who apply for such private loans from Nellie Mae are similar to or different from other college students in the United States. We discussed the methodology and results of these three studies with officials of the sponsoring organizations. 
We also discussed some or all of these studies with an academic expert, officials of the American Council on Education, and the USA Group, a guarantor and administrator of student loans. The three studies share limitations common to this kind of research (fig. 14 provides details of the studies’ methodologies). Two of the studies relied on self-reports of personal financial information and may suffer to some degree from memory errors, poor estimates, and underreporting of credit card balances owing to the social stigma of being in credit card debt. The practical difficulties of conducting such surveys—such as obtaining a sample that covers the entire population under consideration and gaining the cooperation of enough of the sample to make it representative—may also limit the usefulness of these results. The response rates of the two surveys were not reported, and a low response rate may jeopardize the representativeness of a sample survey. The Nellie Mae study, which relied on credit bureau reports, avoided the problems that are common to surveys that rely on reports from individuals. But it was restricted to a special subpopulation of loan applicants, and the results are probably not representative of the typical college student population as a whole. We identified other studies on credit cards and college students, but these reports used methodologies that did not include random sampling techniques (app. III). We could not draw inferences from these reports about the student population as a whole or even about a specific subset of students. For this reason, these studies were not included in the main part of this report. To describe universities’ responses to credit card marketing, we judgmentally selected and visited 12 colleges and universities and conducted about 100 structured interviews. We also collected documentation at universities, including university policies, credit education materials, and credit card applications. 
We observed tabling and other marketing directed at college students on these campuses. We compiled a list of colleges and universities chosen for their status as public or private institutions, their geographic region, their admissions policies, the size and composition of their student body, the cost of attendance, and the existence of an affinity relationship with a credit card issuer. We attempted to visit a varied sample of 4-year colleges and universities. Nine of the 12 universities we selected were public and three were private. Five had more selective or most selective admissions standards, according to college entrance test scores, and seven were less selective. Six of the universities had small- to medium-sized undergraduate student bodies—10,000 or fewer students—and six were large universities—more than 10,000 students. Three of the universities had substantial minority student populations—Hispanic, Asian, and others—and one of the colleges was a historically African American school. We interviewed about 100 university officials from a number of university administrative offices (the dean of students and heads of student affairs, bursar, comptroller, and financial aid), as well as officials from student unions, alumni associations, athletic departments, bookstores, student governments, and others, including some credit union officials. On campuses, we collected credit card applications from various locations, including student unions, alumni association offices, credit unions and other private financial service providers, and bookstores. We obtained and analyzed university documents relating to policies on credit card solicitation on campus, financial education at freshman orientation, and other issues. We did not verify the accuracy of the testimonial and documentary information university officials provided. 
To describe the business strategies and practices of credit card issuers—marketing, underwriting, and educational efforts—we selected 12 of the 20 largest credit card issuers in the United States. With one exception, all the issuers marketed credit cards to college students. We included two credit card companies that issued affinity cards through university alumni associations and athletic departments and a regional financial services company that did not market nationally. In October 2000, we sent the issuers a letter requesting data and an opportunity to discuss issues related to college students and credit cards. This letter included a draft pledge of confidentiality that we were prepared to sign, a signed pledge of confidentiality from our requesters, and a request for aggregate account data from college students and other consumer group accounts. Card issuer participation in our study was strictly voluntary; we have no legal right of access to their account data or other business information. Several card issuers chose not to meet with us, and after 4 months of attempting to arrange meetings, we had met with only five. The issuers that did agree to meet with us would generally discuss their marketing and educational efforts but were not inclined to discuss their underwriting practices, citing the proprietary nature of the information and issues of “business competition.” One card issuer declined to meet with us or answer our questions. Due to confidentiality concerns, all of the issuers declined to allow us access to data on their college student accounts in a manner that would allow us to verify authenticity. In January 2001, we asked 10 of the 12 card issuers to provide written answers to questions about their business strategies and educational efforts directed at college students; six responded. (App. V lists the questions we asked these issuers.) 
To address the educational efforts of the credit card industry, we also met with Visa and MasterCard officials and reviewed documentation they provided. We did not verify the accuracy of the testimonial and documentary information that credit card issuers provided, and some of the information issuers provided did not precisely address our questions. In declining to provide us direct access to data about college student credit cards, the issuers cited their concerns about the proprietary and confidential nature of their data. However, after we addressed these concerns, in January 2001, eight credit card issuers expressed willingness to participate in a study of account data that would compare college students with other groups. Coordinating through the Consumer Bankers Association, these issuers offered to have a third party of their choosing do a study based on their data. We accepted the idea of a third-party contractor assembling a database drawn from the issuers’ account data. To meet our auditing standards, it will be necessary for us to retain and supervise a contractor independent of the credit card industry. Government auditing standards require that “in matters relating to the audit work, the audit organization and the individual auditors, whether government or public, should be free from personal and external impairments to independence, should be organizationally independent, and should maintain an independent attitude and appearance.” As of May 2001, we are continuing to explore the feasibility of using an independent contractor to create a database, with verified data provided by the eight credit card issuers, that we can analyze. We also met with federal bank regulatory officials from the Board of Governors of the Federal Reserve System, the Federal Reserve Bank of Philadelphia, and OCC. We met with OCC and the Federal Reserve Board because they oversee most of the large credit card issuers. 
We discussed with the Federal Reserve and OCC the credit card industry in general and the issue of credit cards and college students in particular, as well as applicable laws, disclosure requirements, and examination practices. We also spoke with officials from the Federal Trade Commission, Associated Credit Bureaus, the American Bankruptcy Institute, Visa, and MasterCard. The Federal Trade Commission has enforcement responsibility for lenders that are not under the supervision of another federal agency. We obtained comments on a draft of this report from representatives of the credit card issuers who participated in this study and the Consumer Bankers Association, and from officials of the Board of Governors of the Federal Reserve System and the universities we visited. We incorporated technical comments as appropriate. We conducted our review at credit card issuers and universities in various cities and states around the United States. To maintain the confidentiality of the issuers and universities, we are not disclosing the names of the states and cities where we did our fieldwork. We conducted our work between July 2000 and April 2001 in accordance with generally accepted government auditing standards. This appendix presents information about state legislation related to credit card solicitation on college and university campuses. We obtained basic information about legislative activity from the National Conference of State Legislatures, secured additional information about individual bills from sponsors or other knowledgeable sources in the individual state legislatures, reviewed current information published by state legislatures, and reviewed information on state legislation found in Lexis databases. Proposed legislation and resolutions were introduced in at least 24 states from 1999 through mid-May 2001. Legislative provisions range from requests to study the effects of credit cards on college students to proposals limiting solicitation on campuses. 
Three bills, one in Arkansas and two in Louisiana, were enacted. Legislators we spoke with told us that the impetus for their proposed legislation included complaints from parents of college students and from student groups, as well as negative media reports about credit card solicitation on college campuses. Legislatures in five states proposed studies of credit cards on college and university campuses. The proposed legislation in several states would regulate credit card solicitation in a variety of ways, including

1. a ban on the use of incentives to entice students to apply for credit;
2. a requirement that a student’s parent or legal guardian give written consent to the student’s credit card application;
3. a provision to protect parents of college students from the debt collection actions of credit card issuers;
4. a requirement that credit card issuers register with the college or university before soliciting on campus;
5. a requirement that credit card issuers, universities, or organizations provide debt education materials or a program for students;
6. a provision that colleges, universities, or education departments set policies and procedures for controlling credit card solicitation on campus; and
7. a prohibition against the dissemination of information on students to credit card issuers or extenders of credit for compensation.

To better understand how students acquire and use credit cards, we conducted a literature search. In addition to the studies we used in our report, we found three studies that reported survey results on how students acquire credit cards, how they use them, and how much debt they incur. We did not include the results in our report because the survey methodology of these studies did not employ random sampling techniques that would allow us to draw inferences about the student population as a whole or even about a specific subset of students. 
For example, the sample size may have been too small, or the observations may have come from populations that were not random, such as students at a particular college or members of a particular organization. We briefly describe these studies below. In 1998, the U.S. Public Interest Research Group (PIRG) published a survey of 1,260 undergraduate students. PIRG student volunteers on campus asked students with credit cards to fill out surveys in student centers. In addition, over the summer a survey was randomly distributed to students working in PIRG offices around the country. Among the findings PIRG reported were the following:

- Students responsible for their own credit cards who obtained cards at campus tables had more cards (2.6) and higher unpaid balances ($1,039) than those who did not (2.1 and $854).
- Among students responsible for their own credit cards, more of those who obtained cards at campus tables reported carrying unpaid balances (42 percent) than those who did not (35 percent).
- Most students surveyed (69 percent) obtained credit cards in their names, and the others (31 percent) said their parents paid their primary bills or were cosigners on at least one of their cards.
- Of those students who obtained cards in their names, only 15 percent reported holding a full-time job when they applied.

In 1999, Robert D. Manning of Georgetown University reported the results of a study of students from four universities in the Washington, D.C. area. The report covered more than 350 interviews and 400 surveys; some of the surveys were from students walking past one campus building, and others were given to students taking an Introduction to Sociology class. The study, which reported results by school, found that 91 to 96 percent of the students had credit cards and that 53 percent had revolving credit card debt. 
In the spring of 1998, the Boynton Health Service of the University of Minnesota, Twin Cities Campus, conducted a mail survey of 1,000 undergraduate and graduate University of Minnesota students on a variety of subjects, including credit cards. About 57 percent of the recipients responded to the survey. Most respondents had at least one credit card. Nearly 25 percent of respondents had credit card debt in excess of $1,000. The researchers found that students who used alcohol and tobacco and who worked more than 40 hours a week had more credit card debt than those who did not. In addition to those named above, Patrick Dynes, Janet Fong, Elizabeth Olivarez, and Robert Pollard made key contributions to this report.

Credit cards offer clear advantages to college students because they provide an interest-free loan until the payment is due and a convenient noncash payment option for both routine transactions and emergencies. If used responsibly, credit cards allow students to build up credit histories that will increase their access to credit in the future. However, if college students have not learned sound financial management skills in high school or from their parents, the disadvantages of credit cards can outweigh the advantages. GAO found that more than one-third of students had credit cards before they entered college, and another 46 percent acquired them during the first year. Except for charges for tuition and fees, their spending patterns resembled those of nonstudents. GAO did not find a uniform response to the controversial issue of on-campus credit card marketing among the universities GAO visited. In response to complaints about aggressive marketing techniques, a few universities have restricted credit card solicitation on campus. The credit card issuers that responded to GAO’s inquiries participated actively in the student market, but they did not have a uniform set of policies or practices. 
Since 1998, the results from state surveys of nursing homes have been the principal source of public information on nursing home quality, which is posted and routinely updated on CMS’s Nursing Home Compare Web site. Under contract with CMS, states are required to conduct periodic surveys that focus on determining whether care and services meet the assessed needs of the residents and whether homes are in compliance with federal quality requirements, such as preventing avoidable pressure sores, weight loss, or accidents. During a survey, a team that includes registered nurses spends several days at a home reviewing the quality of care provided to a sample of residents. States are also required to investigate complaints lodged against nursing homes by residents, families, and others. In contrast to surveys, complaint investigations generally target a single area in response to a complaint filed against a home. Any deficiencies identified during routine surveys or complaint investigations are classified according to the number of residents potentially or actually affected (isolated, pattern, or widespread) and their severity (potential for minimal harm, potential for more than minimal harm, actual harm, and immediate jeopardy). To improve the rigor of the survey process, HCFA contracted for the development of quality indicators and required their use by state surveyors beginning in 1999. Quality indicators are derived from data collected during nursing homes’ assessments of residents, known as the minimum data set (MDS). The MDS contains individual assessment items covering 17 areas, such as mood and behavior, physical functioning, and skin conditions. MDS assessments of each resident are conducted in the first 14 days after admission and periodically thereafter and are used to develop a resident’s plan of care. 
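The two-axis deficiency classification described above yields a fixed grid of possibilities. Laying it out explicitly can make the scheme concrete; the tuple representation below is our illustration, not CMS’s published format.

```python
from itertools import product

# Scope: how many residents a deficiency potentially or actually affects.
SCOPE = ("isolated", "pattern", "widespread")

# Severity: how much harm the deficiency causes or could cause.
SEVERITY = (
    "potential for minimal harm",
    "potential for more than minimal harm",
    "actual harm",
    "immediate jeopardy",
)

# Every deficiency cited on a survey falls into one of 12 combinations.
GRID = list(product(SCOPE, SEVERITY))
```

A deficiency such as an avoidable pressure sore affecting several residents might, for example, be classified as ("pattern", "actual harm").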
Facility-reported MDS data are used by state surveyors to help identify quality problems at nursing homes and by CMS to determine the level of nursing home payments for Medicare; some states also use MDS data to calculate Medicaid nursing home payments. Because it also envisioned using indicators to communicate nursing home quality to consumers, HCFA recognized that any publicly reported indicators must pass a very rigorous standard for validity and reliability. Valid quality indicators that distinguish between good and poor care provided by nursing homes would be a useful adjunct to existing quality data. Such indicators must also be reliable—that is, they must consistently distinguish between good and bad care. HCFA contracted with Abt to review existing quality indicators and determine if they were suitable for public reporting. Abt catalogued and evaluated 143 existing quality indicators, including those used by state surveyors. It also identified the need for additional indicators both for individuals with chronic conditions who are long-term residents of a facility and for individuals who enter a nursing home for a short period, such as after a hospitalization (a postacute stay). According to Abt, a main concern about publicly reporting quality indicators was that the quality indicator scores might be influenced by other factors, such as residents’ health status. Abt concluded that the specification of appropriate risk adjustment models was a key requirement for the validity of any quality indicators. Risk adjustment is important because it provides consumers with an “apples-to-apples” comparison of nursing homes by taking into consideration the characteristics of individual residents and adjusting quality indicator scores accordingly. 
For example, a home with a disproportionate number of residents who are bedfast or who present a challenge for maintaining an adequate level of nutrition—factors that contribute to the development of pressure sores—may have a higher pressure sore score. Adjusting a home’s quality indicator score to fairly reflect the extent to which the home does—or does not—admit such residents is important for consumers who may wish to compare one home to another. After several years of work, Abt recommended 39 risk-adjusted quality indicators to CMS in October 2001. Twenty-two were based on existing indicators and the remaining 17 were newly developed by Abt, including 9 indicators for nursing home residents with chronic conditions and 8 indicators for individuals who enter a nursing home for a short period. In September 2001, CMS contracted with the NQF to review Abt’s work with the objective of (1) recommending a set of quality indicators for use in its planned six-state pilot and (2) developing a core set of indicators for national implementation of the initiative scheduled for late 2002. NQF established a steering committee to accomplish these two tasks. The steering committee met in November 2001 and identified 11 indicators for use in the pilot, 9 of which were selected by CMS. The committee made its selection from among Abt’s list of 39 indicators but it did not recommend use of Abt’s risk-adjustment approach. Moreover, the steering committee indicated that it would not be limited to the same Abt list in developing its recommended core set of indicators for national implementation. In April 2002, NQF released a draft consensus report identifying the indicators it had distributed to its members and the public for comment on their potential inclusion in the national implementation. Under its contract, NQF was scheduled to make a final recommendation to CMS prior to the national reporting of quality indicators. 
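The “apples-to-apples” adjustment described above can be sketched with indirect standardization, one common form of risk adjustment. The report does not disclose Abt’s actual model, so the function, its inputs, and the example risk values here are purely illustrative.

```python
def risk_adjusted_rate(outcomes, expected_risks, reference_rate):
    """Indirectly standardized quality-indicator score (a sketch).

    outcomes       -- 1/0 per resident (e.g., pressure sore present)
    expected_risks -- modeled probability of the outcome for each
                      resident given characteristics such as being
                      bedfast (hypothetical values here)
    reference_rate -- the outcome rate across all facilities
    """
    observed = sum(outcomes) / len(outcomes)
    expected = sum(expected_risks) / len(expected_risks)
    # A home admitting high-risk residents (large expected rate) has its
    # observed rate scaled down before comparison, and vice versa.
    return reference_rate * observed / expected
```

For instance, a home whose observed pressure sore rate is 25 percent but whose residents’ modeled expected rate is 50 percent would, against a 10 percent reference rate, receive an adjusted score of 5 percent, reflecting that it performs well given a high-risk population.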
CMS’s initiative to augment existing public data on nursing home quality has considerable merit but more time is needed to assure that the indicators proposed by CMS for public reporting are appropriate in terms of their validity and reliability. Based on work by Abt to validate the indicators it developed for CMS, the agency selected quality indicators for national reporting. The full Abt validation report—which is important for a thorough analysis of the appropriateness of the quality indicators—was still not available to us as of October 28, 2002. Our review of available portions of the Abt report, however, raised serious questions about whether testing and validation of the selected indicators has been sufficient to move forward with national reporting at this time. Moreover, CMS plans to initiate national reporting before it receives recommendations from NQF, its contractor, on appropriate quality indicators. On August 9, 2002, CMS announced the 10 indicators selected for its nationwide reporting of quality indicators, which it plans to launch in mid-November 2002. CMS selected these indicators from those that Abt had validated in its August 2, 2002, validation report. Abt classified the indicators it studied as to the degree of validity—top, middle, or not valid. The indicators that CMS selected were in the top category with one exception—residents in physical restraints—which was in the middle category. The objective of Abt’s validation study was to confirm that the indicators reflect the actual quality of care that individual nursing facilities provide, after taking into account resident and facility-level characteristics. For example, a validation analysis could confirm that a low percentage of pressure sores among residents was linked to a facility’s use of procedures to prevent their development. Successful validation reduces the chance that publicly reported data could misrepresent a high-quality facility as a low-quality facility—or vice versa. 
CMS’s decision to implement national reporting in November 2002 is troubling, given the issues raised by our review of the available portions of Abt’s validation report. Although we asked CMS for a copy of Abt’s 11 technical appendixes, as of October 28, 2002, they were still undergoing review and were not available to us. The technical appendixes are essential to adequately understand and evaluate Abt’s validation approach. Our review of the available portions of the Abt report raised serious questions about whether the effort to date has been sufficient to validate the indicators. The validation study is based on a sample that is drawn from six states; it is not representative of nursing homes nationwide and may not be representative of facilities in these six states. Selected facilities were allowed to decline participation and about 50 percent did so. For those facilities in the validation study, Abt deemed most of the indicators as valid—that is, better care processes were associated with higher quality indicator scores, taking into account resident and facility-level characteristics. However, we could not evaluate these findings because Abt provided little information on the specific care processes against which the indicators were validated. Unresolved questions also exist about the risk adjustment of the quality indicators. Risk adjustment is a particularly important element in determining certain quality indicators because it may change the ranking of individual facilities—a facility that is among the highest on a particular quality indicator without risk adjustment may fall to the middle or below after risk adjustment—and vice versa. Data released by CMS in March 2002 demonstrated that Abt’s risk adjustment approaches could either lower or raise facility scores by 40 percent or more. 
Although such changes in ranking may be appropriate, Abt did not provide detailed information on how its risk adjustment approaches changed facility rankings or a basis for assessing the appropriateness of the changes. In addition to the questions raised by our review of the Abt validation report, CMS is not planning to wait for the expert advice it sought on quality indicators through its contract with the NQF. Under this contract, the NQF steering committee issued a consensus draft in April 2002 with a set of potential indicators for public reporting. The steering committee had planned to complete its review of these indicators using its consensus process by August 2002. In late June, however, CMS asked NQF to delay finalizing its recommendations until early 2003 to allow (1) consideration of Abt’s August 2002 report on the validity of its indicators and risk-adjustment methods—including the technical appendices, when they become available—and (2) a review of the pilot evaluation results expected in October 2002. An NQF official told us that the organization agreed to the delay because the proposed rapid implementation timeline had been a concern since the initiative’s inception. CMS’s list of quality indicators for the November 2002 national rollout did not include six indicators under consideration by NQF—depression, incontinence, catheterization, bedfast residents, weight loss, and rehospitalization (see app. I). Instead, CMS intends to consider NQF’s recommendations and revise the indicators used in the mid-November national rollout sometime next year. CMS is also moving forward without a consensus on risk adjustment of quality indicators. CMS is planning to report one indicator with facility-level adjustment based on a profile of residents’ status at admission, and two indicators both with and without this Abt-developed risk adjuster. 
However, both Abt and NQF have concluded that adjusting for the type of residents admitted to the nursing home required further research to determine its validity. We believe that reporting the same indicator with and without facility-level risk adjustment could serve to confuse rather than help consumers. Two of the three consultants hired by NQF specifically recommended against the use of facility-level adjustments in public reporting at this time. We also found that, as of October 1, 2002, CMS had not reached internal consensus on how to describe the risk-adjustment methods used in each of the 10 indicators it plans to begin reporting nationally in November 2002. Several agency officials agreed with our assessment that the descriptions on its Web site were inconsistent with Abt’s own descriptions of the risk adjustment associated with each indicator. Two different Abt studies have presented CMS with conflicting messages about the accuracy of MDS data. Abt’s August 2002 quality indicator validation report suggested that the underlying data used to calculate most indicators were, in the aggregate, very reliable. However, our analysis of more detailed facility-level data in a February 2001 Abt report raised questions about the reliability of some of the same MDS data. Because MDS data are used by CMS and some states to determine the level of nursing home payments for Medicare and Medicaid and to calculate quality indicators, ensuring their accuracy at the facility level is critical both for determining appropriate payments and for public reporting of the quality indicators. Recognizing the importance of accurate MDS data, CMS is in the process of implementing a national MDS accuracy review program expected to become fully operational in 2003, after the nationwide reporting of quality indicators begins in November 2002. 
We recently reported that CMS’s review program is too limited in scope to provide adequate confidence in the accuracy of MDS assessments in the vast bulk of nursing homes nationwide. Abt’s August 2, 2002, validation report concluded that the reliability of the underlying MDS data used to calculate 39 quality indicators ranged from acceptable to superior, with the data for only 1 indicator proving unacceptable. Abt’s findings were based on a comparison of assessments conducted by its own nurses to assessments performed by the nursing home staff in 209 sample facilities. For each quality indicator, Abt reported the overall reliability for all of the facilities in its sample. However, because quality indicators will be reported for each nursing home, overall reliability is not a sufficient assurance that the underlying MDS data are reliable for each nursing home. Although Abt did not provide information on MDS reliability for individual facilities, it noted that reliability varied considerably within and across states. Earlier work by Abt and others calls into question the reliability of MDS data. Abt’s February 2001 report on MDS data accuracy identified significant variation in the rate of MDS errors across the 30 facilities sampled. Differences between assessments conducted by Abt’s nurses and the nursing home staff were classified as errors by Abt. Error rates for all MDS items averaged 11.7 percent but varied across facilities by a factor of almost two—from 7.8 percent to 14.5 percent. As shown in figure 1, the majority of error rates were higher than 10.5 percent. Furthermore, error rates for some of the individual MDS items used to calculate the quality indicators were much higher than the average error rate. According to Abt, the least accurate sections of the MDS included physical functioning and skin conditions. Abt also noted that there was a tendency for facilities to underreport residents with pain. 
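The error-rate figures above follow from a simple definition: Abt treated any MDS item on which its nurses’ independent assessment differed from the facility’s coding as an error. A minimal sketch of that calculation follows; the list-of-items data shape is our assumption, not Abt’s actual format.

```python
def mds_error_rate(facility_items, reviewer_items):
    """Share of MDS items on which the facility's coding differed from
    an independent nurse reviewer's coding of the same resident."""
    assert len(facility_items) == len(reviewer_items)
    mismatches = sum(f != r for f, r in zip(facility_items, reviewer_items))
    return mismatches / len(facility_items)
```

By this measure, the 11.7 percent average error rate Abt reported means that the facility and the reviewer disagreed on roughly one in every eight or nine MDS items.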
MDS items from these portions of the assessment are used to calculate several quality indicators that CMS plans to report nationally in November 2002—activities of daily living, pressure sores, and pain management. Table 1 shows that the error rate across the residents sampled ranged from 18 percent for pressure sores to 42 percent for pain intensity. Abt’s February 2001 findings were consistent with areas that states have identified as having a high potential for error, including activities of daily living and skin conditions. Moreover, a study by the HHS Office of Inspector General (OIG), which identified differences between the MDS assessment and the medical record, found that activities of daily living was among the areas that provided the greatest source of differences. In addition, the OIG report noted that 40 percent of the nursing home MDS coordinators it surveyed identified the physical functioning section, used to calculate the quality indicator on activities of daily living, as the most difficult to complete. Some coordinators explained that facility staff view a resident’s capabilities differently and thus the assessments tend to be subjective. As part of CMS’s efforts to improve MDS accuracy, its contractor is still field-testing the on-site aspect of its approach, which is not expected to be implemented until 2003. Although Abt’s February 2001 report found widespread MDS errors, CMS intends to review roughly 1 percent of the MDS assessments prepared over the course of a year, which numbered 14.7 million in 2001. Moreover, only 10 percent of the reviews will be conducted on-site at nursing homes. In contrast, our prior work on MDS found that 9 of the 10 states with MDS-based Medicaid payment systems that examine MDS data’s accuracy conduct periodic on-site reviews in all or a significant portion of their nursing homes, generally examining from 10 to 40 percent of assessments. 
On-site reviews heighten facility staff awareness of the importance of MDS data and can lead to the correction of practices that contribute to MDS errors. We reported earlier that CMS’s approach may yield some broad sense of the accuracy of MDS assessments on an aggregate level but is insufficient to provide confidence about the accuracy of MDS assessments in the vast bulk of nursing homes nationwide. While CMS is strongly committed to making more information available to the public on nursing home quality and such an initiative has considerable merit, the agency has not demonstrated a readiness to assist the public in understanding and using those data. We found that CMS’s reporting of quality indicators in the six pilot states was neither consumer friendly nor presented in a format consistent with the data’s limitations, implying a greater degree of precision than is currently warranted. Our analysis of the data currently available in the six pilot states demonstrated the potential for public confusion over both the quality indicators themselves and inconsistencies with other available data on deficiencies identified during nursing home surveys—which, to date, are the primary source of public data on nursing home quality. Moreover, our phone calls to the Medicare and QIO toll-free numbers revealed that CMS was not adequately prepared to address consumers’ questions raised by discrepancies between conflicting sources of quality data. Our review of the quality indicators on the CMS Web site found that the presentation of the data was not consumer friendly and that the reporting format implies a greater confidence in the data’s precision than may be warranted at this time. Quality indicators are reported as the percentage of residents in a facility having the particular characteristics measured by each indicator. The Web site explains that having a low percentage of residents with pressure sores or pain is better than having a high percentage. 
In the six-state pilot, the public can compare a nursing home’s score to the statewide and overall average for each quality indicator. We believe that equating a high score with poor performance is counterintuitive and could prove confusing to consumers. Despite the Web site’s explanation of how to interpret the scores, the public might well assume that a high score is a positive sign. In addition, reporting actual quality indicator scores rather than the range of scores a home falls into for an indicator—a low, medium, or high score—can be confusing and implies a confidence in the precision of the results that is currently a goal rather than a reality. Consumers will find it difficult to assess a home with a score that is 5 to 10 percentage points from the state average. Such a home could be an outlier—one of the best or the worst on that indicator; alternatively, the home could effectively be close to the state average if the true outliers differed from it by much larger margins. Concerns about the validity of the indicators and the potential reliability of the data make comparisons of homes with similar scores questionable. Consumers may be misled if a difference of several percentage points between two homes is perceived as demonstrating that one is better or worse than the other. To partially address these types of concerns, Maryland has reported quality indicator data on its own Web site since August 2001 in ranges rather than individual values. Thus, it indicates if a facility falls into the bottom 10 percent, the middle 70 percent, or the top 20 percent of facilities in the state. Consumers may also be confused about how to interpret missing information. Although the CMS Web site explains that quality indicator scores are not reported for nursing homes with too few residents, it does not acknowledge the extent of such missing data. 
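Maryland’s banded presentation described above amounts to simple percentile bucketing. The cutoffs (bottom 10 percent, middle 70 percent, top 20 percent) come from the report; the function itself and its names are our illustration.

```python
def maryland_band(score, statewide_scores):
    """Place a facility's quality-indicator score into one of three
    statewide bands rather than reporting the raw value (a sketch)."""
    ranked = sorted(statewide_scores)
    n = len(ranked)
    p10 = ranked[int(0.10 * n)]  # 10th-percentile cutoff
    p80 = ranked[int(0.80 * n)]  # 80th-percentile cutoff
    if score < p10:
        return "bottom 10 percent"
    if score < p80:
        return "middle 70 percent"
    return "top 20 percent"
```

Reporting a band rather than a raw score avoids implying that a few percentage points of difference between two homes is meaningful.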
We found that 6 percent of all nursing homes in the six pilot states have no score for any of the nine quality indicators and that, for individual indicators, from 9 percent to 40 percent of facilities have missing scores (see table 2). When data for homes of potential interest to consumers are not reported, consumers may need some assistance in how to incorporate such instances into their decisionmaking. Consumer confusion may also occur when quality indicator scores send conflicting messages about the overall quality of care at a home. We found that the Web site data for a significant number of facilities contained such inconsistencies. Seventeen percent of nursing homes in the six pilot states had an equal number of highly positive and highly negative quality indicator scores. We defined highly positive scores as those indicating that a facility was among the 25 percent of homes with the lowest percentage of residents exhibiting poor outcomes, such as a decline in their ability to walk or use the toilet. In contrast, facilities with a highly negative score were among the top 25 percent of homes with poor outcomes. We also found that 37 percent of nursing homes with four or more highly positive quality indicator scores had two or more highly negative scores. In addition, our comparison of survey deficiency data available on the Web site with quality indicator scores also revealed inconsistencies. For example, 17 percent of nursing homes with four or more highly positive quality indicator scores and no highly negative scores—seemingly “good” nursing homes—had at least one serious quality-of-care deficiency on a recent state survey. We have found that serious deficiencies cited by state nursing home surveyors were generally warranted and indeed reflected instances of documented actual harm to nursing home residents. 
Moreover, 73 percent of nursing homes with four or more highly negative quality indicator scores—seemingly “bad” facilities—had no serious quality-of-care deficiencies on a recent survey (see table 3). The latter situation is consistent with our past work that surveyors often miss serious quality-of-care problems. Nevertheless, consumers will generally lack such insights on the reliability of state surveys that would permit them to better assess the available data on quality of care. With the apparent need for assistance to consumers in interpreting and using this information, the important role of the Medicare and QIO toll-free numbers is evident. We requested and reviewed copies of the Medicare hotline and QIO scripts and found that they did not address the issue of responding to questions about conflicting or confusing quality data. Furthermore, our calls to the Medicare hotline and to QIO toll-free numbers in the six pilot states demonstrated that the staff were not adequately prepared to handle basic questions about the quality data available under the pilot. CMS officials had told us that Medicare hotline callers with complicated questions would be seamlessly transferred to a QIO without having to hang up and call another number. Although we asked the Medicare hotline staff if another organization might be better able to respond to our questions, no one offered to refer us to QIOs, even when we specifically asked about them. In fact, one hotline staff member told us that a QIO would not be an appropriate referral. Consequently, we independently attempted to call the QIOs in the six pilot states. We found that it was difficult to reach a QIO staff member qualified to answer questions. Each QIO had a toll-free number but neither the automated recordings at four QIOs nor operators at the remaining two indicated that the caller had reached a QIO. 
In addition, the automated recordings did not contain a menu choice for questions about nursing home quality indicators. We were unable to contact one QIO because the hotline had neither an operator nor a voice mail capability. On other calls, after reaching a QIO staff person, it frequently took several referrals to identify an appropriate contact point. One QIO took 5 working days for a staff member to call us back. Four of the five QIOs we contacted explained that their primary role was to work with nursing homes to improve quality of care. In general, QIO staff were not prepared to respond to consumer questions. Staff at the Medicare hotline and the QIOs varied greatly in their basic understanding of quality indicators and survey deficiencies. While two of the nine staff we contacted were generally knowledgeable about different types of quality data, others were unable to answer simple questions and the majority provided erroneous or misleading data. One QIO staff member told us that MDS data were not representative of all residents of a nursing home but only presented a “little picture” based on a few residents. However, assessments of all residents are taken into consideration in calculating quality indicators. When we expressed concern about a home identified on the Web site with a “level-3” deficiency, a Medicare hotline staff member incorrectly told us that it was not a serious deficiency because level 3 indicated potential harm. CMS designates actual harm deficiencies as “level-3” deficiencies. A QIO staff member incorrectly told us that actual harm pressure sore deficiencies had nothing to do with patient care and might be related to paperwork. Our review of survey reports has shown that actual harm deficiencies generally involved serious quality-of-care problems resulting in resident harm. Generally, hotline staff did not express a preference for using either nursing home surveys or quality indicators in choosing a nursing home. 
Two QIO staff, however, stated that the nursing home survey information gave a better picture of nursing home care than the quality indicators, which they judged to be imprecise and subject to variability. CMS’s evaluation of the pilot is limited and will not be completed prior to national reporting of quality indicators because of the short period of time between the launch of the pilot and the planned November 2002 national implementation. According to CMS officials, the pilot evaluation was never intended to help decide whether the initiative should be implemented nationally or to measure the impact on nursing home quality. While CMS is interested in whether nursing home quality actually improves as a result of the initiative, it will be some time before such a determination can be made. Thus, CMS focused the pilot evaluation on identifying improvements that could be incorporated into the initiative’s design prior to the scheduled national implementation in November 2002. A CMS official told us that initial pilot evaluation results were expected by early October 2002, allowing just over a month to incorporate any lessons learned. In commenting on a draft of this report, CMS stated that it was using preliminary findings to steer national implementation. The final results of the pilot evaluation will not be completed until sometime in 2003. CMS’s evaluation of the pilot is focused on identifying how to communicate more effectively with consumers about the initiative and how to improve QIO interaction with nursing homes. 
Specifically, CMS will assess whether (1) the target audiences were reached; (2) the initiative increased consumer use of nursing home quality information; (3) consumers used the new information to choose a nursing home; (4) QIO activities influenced nursing home quality improvement activities; (5) nursing homes found the assistance provided by QIOs useful; and (6) the initiative influenced those who might assist consumers in selecting a nursing home, such as hospital discharge planners and physicians. Information is being collected by conducting consumer focus groups, tracking Web site “hits” and toll-free telephone inquiries, administering a Web site satisfaction survey, and surveying nursing homes, hospital discharge planners, and physicians. As of late August 2002, CMS teams were also in the process of completing site visits to stakeholders in the six pilot states, including QIOs, nursing homes, ombudsmen, survey agencies, nursing home industry representatives, and consumer advocacy groups. The teams’ objective is to obtain a first-hand perspective of how the initiative is working with the goal of implementing necessary changes and better supporting the program in the future. Although CMS’s initiative to publicly report nursing home quality indicators is a commendable and worthwhile goal, we believe that it is important for CMS to wait for and consider input from NQF and make necessary adjustments to the initiative based on its input. We believe several factors demonstrate that CMS’s planned national reporting of quality indicators in November 2002 is premature. Our review of the available portions of Abt’s validation report raised serious questions about whether the effort to date has been sufficient to validate the quality indicators. NQF was asked to delay recommending a set of indicators for national reporting until 2003, in part, to provide sufficient time for it to review Abt’s report. 
CMS’s planned MDS accuracy review program, although limited in scope, will not begin on-site accuracy reviews of the data underlying quality indicators until 2003. CMS’s own evaluation of the pilot, designed to help refine the initiative, was limited to fit CMS’s timetable for the initiative, and the preliminary findings were not available until October 2002, leaving little time to incorporate the results into the planned national rollout. Other aspects of the evaluation will not be available until early 2003. We also have serious concerns about the potential for public confusion over quality data, highlighting the need for clear descriptions of the data’s limitations and easy access to informed experts at both the Medicare and QIO hotlines. CMS has not yet demonstrated its readiness to meet these consumer needs either directly or through the QIOs. To ensure that publicly reported quality indicator data accurately reflect the status of quality in nursing homes and fairly compare homes to one another, we recommend that the Administrator of CMS delay the implementation of nationwide reporting of quality indicators until (1) there is greater assurance that the quality indicators are appropriate for public reporting—including the validity of the indicators selected and the use of an appropriate risk-adjustment methodology—based on input from the NQF and other experts and, if necessary, additional analysis and testing; and (2) a more thorough evaluation of the pilot is completed to help improve the initiative’s effectiveness, including an assessment of the presentation of information on the Web site and the resources needed to assist consumers’ use of the information. CMS and the NQF reviewed and provided comments on a draft of this report. (See app. II and app. III, respectively.) CMS reiterated its commitment to continually improve the quality indicators and to work to resolve the issues discussed in our report. 
Although CMS stated it would use our report to help improve the initiative over time, it intends to move forward with national implementation in November 2002 as planned. It stated that “waiting for more reliability, more validity, more accuracy, and more usefulness will delay needed public accountability, and deprive consumers, clinicians, and providers of important information they can use now.” The NQF commented that it unequivocally supports CMS’s plans to publicly report quality indicators but indicated that the initiative would benefit from a short-term postponement of 3 to 4 months to achieve a consensus on a set of indicators and to provide additional time to prepare the public to use and interpret the data. We continue to support the concept of reporting quality indicators, but remain concerned that a flawed implementation could seriously undercut support for and the potential effectiveness of this very worthwhile initiative. CMS’s comments and our evaluation focused largely on two issues: (1) the selection and validity of quality indicators, and (2) lessons learned from CMS’s evaluation of the pilot initiative. CMS asserts that the quality indicators it plans to report nationally are reliable, valid, accurate, and useful and that it has received input from a number of sources in selecting the indicators for this initiative. However, CMS provided no new evidence addressing our findings regarding the appropriateness of the quality indicators selected for public reporting and the accuracy of the underlying data. We continue to believe that, prior to nationwide implementation, CMS should resolve these open issues. CMS intends to move forward with nationwide implementation without a requested NQF assessment of the full Abt validation report and without NQF’s final recommendations on quality indicators. CMS would not share the technical appendices to Abt’s validation report with us because they were undergoing review and revision. 
The technical appendices are critical to assessing Abt’s validation approach. CMS’s comments did not address our specific findings on the available portions of Abt’s validation report, including (1) the limitations in the selection of the sample of nursing homes participating in the validation study, which make the validation results unrepresentative of nursing homes nationwide, and (2) the limited information Abt provided on the specific care processes against which the indicators were validated and on how its risk adjustment approaches changed facility rankings and whether those changes were appropriate. Although both Abt and the NQF concluded that Abt’s facility-level risk adjustment approach required further research to determine its validity, CMS plans to report two indicators with and without facility-level adjustments. CMS’s comments indicated that it has chosen to report these measures both ways in order to evaluate their usefulness and to provide facilities and consumers with the additional information. We continue to believe that reporting data of uncertain validity is inappropriate and, as such, will likely not be useful to either facilities or consumers. For quality indicators to be reliable, the underlying MDS data used to calculate the indicators must be accurate. CMS’s comments did not specifically address the conflicting findings on MDS accuracy from Abt’s August 2002 validation report and its February 2001 report to CMS. Abt’s August 2002 validation report concluded that, in aggregate, the underlying MDS data were very reliable but that the reliability varied considerably within and across states. Aggregate reliability, however, is insufficient because quality indicators are reported separately for each facility. In its February 2001 report to CMS, Abt identified widespread errors in the accuracy of facility-specific assessments used to calculate some of the quality indicators that CMS has selected for reporting in November. 
CMS indicated that its efforts since 1999 have improved MDS accuracy. But because CMS does not plan to begin limited on-site MDS accuracy reviews until 2003, there is little evidence to support this assertion. CMS commented that findings from a number of activities evaluating the six-state pilot were not available prior to the time we asked for comments on our draft report. While final reports are not yet available for some of these studies, CMS stated that the pilot allowed it to work through important issues and incorporate lessons learned before a national launch. We pointed out that the pilot evaluation was limited and incomplete—an additional reason to delay the initiative. CMS also did not evaluate a key implementation issue—the adequacy of assistance available to consumers through its toll-free telephone hotlines. Moreover, the lack of formal evaluation reports to help guide the development of a consensus about key issues, such as how quality indicators should be reported, is troubling. In its comments, CMS stated that it was committed to working aggressively to help the public understand nursing home quality information using lessons learned from the pilot. However, CMS learned about the flaws in its hotline operations not from its pilot evaluation but from our attempts to use the Medicare and QIO toll-free phone numbers to obtain information on quality data. Acknowledging the weaknesses we identified, the agency laid out a series of actions intended to strengthen the hotlines’ ability to respond to public inquiries, such as providing additional training to customer service representatives prior to the national launch of the initiative. CMS outlined other steps it plans to take such as providing its customer service representatives with new scripts and questions and answers to the most frequently asked questions. 
At the outset of the pilot in April 2002, CMS described seamless transfers from the Medicare to the QIO hotlines for complicated consumer questions but now acknowledges that limitations in QIO telephone technology prevent such transfers. Instead of automatic transfers, CMS stated that, when referrals to QIOs are necessary, callers will be provided with a direct toll-free phone number. CMS also commented that consumers should be encouraged to consider multiple types of information on nursing home quality. While we agree, we believe it is critical that customer service representatives have a clear understanding of the strengths and limitations of the different types of data so that they can properly inform consumers who inquire. CMS commented that we offered no explanation of the analysis that led us to conclude that (1) consumers could be confused because scores on quality indicators can conflict with each other and with the results of routine nursing home surveys, and (2) the public may confuse a high quality indicator score with a positive result. Our draft clearly states that our findings were based on our analysis of the quality indicator data and survey results available in the six pilot states—a database that CMS provided at our request. In its comments, CMS provided limited data to support its assertion that consumers are not confused by the quality indicators and are very satisfied with the current presentation on its Web site. According to CMS, over two-thirds of respondents to its August 2002 online satisfaction survey of randomly chosen users of Nursing Home Compare information said they were highly satisfied with the information; for example, they found it clearly displayed, easy to understand, and valuable. It is not clear, however, that these responses were representative of all nursing home consumers accessing the Web site, as CMS implied. 
For example, CMS informed us that this survey was part of a larger survey of all Medicare Web site users, which had a low overall response rate of 29 percent. Moreover, of the 654 respondents to the Nursing Home Compare component of the survey, fewer than half (40 percent) were identified as Medicare beneficiaries, family members, or friends. NQF feedback to CMS on its Web site presentation was consistent with our findings. In commenting on our draft report, NQF noted that it had offered informal guidance to CMS, such as using positive or neutral wording to describe indicators, exploring alternative ways of presenting information about differences among facilities, and ensuring that the presentation of the data reflects meaningful differences in topics important to consumers. While justifying its current presentation of quality indicator data, CMS commented that it is seriously considering not reporting individual nursing home scores but rather grouping homes into ranges such as the bottom 10 percent, middle 70 percent, and top 20 percent of facilities in a state. Such a change, however, would not come before the national rollout. We agree with CMS that, when grouping homes into ranges, homes on the margin—close to the bottom 10 percent or top 20 percent—may not be significantly different from one another. However, the same is true of reporting individual facility scores. Moreover, reporting ranges more clearly identifies homes that are outliers for consumers. CMS also commented on our characterization of the scope of the nursing home quality initiative. CMS stated that we had narrowly framed the initiative as one designed solely for consumers, ignoring the QIO’s quality improvement activities with individual nursing homes requesting assistance. Our report acknowledged and briefly outlined the quality improvement role of the QIOs. 
However, based on our requestors’ concerns about the relatively short pilot timeframe prior to national implementation of public reporting of quality indicators, we focused our work on that key aspect of the initiative. CMS cited its Interim Report on Evaluation Activities for the Nursing Home Quality Initiative to support its conclusion that the initiative was successful in promoting quality improvement activities among nursing homes. The improvements cited in the Interim Report were self-reported by facilities, and CMS offered no insights on the nature of the quality improvement changes. The Interim Report was not available when we sent our draft report to CMS for comment. CMS provided several technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-7118 or Walter Ochinko at (202) 512-7157. GAO staff acknowledgments are listed in appendix IV. The following staff made important contributions to this report: Laura Sutton Elsberg, Patricia A. Jones, Dean Mohs, Dae Park, Jonathan Ratner, Peter Schmidt, Paul M. Thomas, and Phyllis Thorburn.

GAO was asked to review the Centers for Medicare & Medicaid Services (CMS) initiative to publicly report additional information on its “Nursing Home Compare” Web site intended to help consumers choose a nursing home. GAO examined CMS’s development of the new nursing home quality indicators and efforts to verify the underlying data used to calculate them. GAO also reviewed the assistance CMS offered the public in interpreting and comparing indicators available in its six-state pilot program, launched in April 2002, and its own evaluation of the pilot. The new indicators are scheduled to be used nationally beginning in November 2002. 
CMS's initiative to augment existing public data on nursing home quality has considerable merit, but its planned November 2002 implementation does not allow sufficient time to ensure the indicators are appropriate and useful to consumers. CMS's plan urges consumers to consider nursing homes with positive quality indicator scores, in effect, attempting to use market forces to encourage nursing homes to improve the quality of care. However, CMS is moving forward without adequately resolving important open issues on the appropriateness of the indicators chosen for national reporting or the accuracy of the underlying data. To develop and help select the quality indicators, CMS hired two organizations with expertise in health care data--Abt Associates, Inc. and the National Quality Forum (NQF). Abt identified a list of potential quality indicators and tested them to verify that they represented the actual quality of care individual nursing homes provide. GAO's review of the available portions of the report raised serious questions about the basis for moving forward with national reporting at this time. NQF, which was created to develop and implement a national strategy for measuring health care quality, was hired to review Abt's work and identify core indicators for national reporting. To allow sufficient time to review Abt's validation report, NQF agreed to delay its recommendations for national reporting until 2003. CMS limited its own evaluation of its six-state pilot program for the initiative so that the November 2002 implementation date could be met. Early results were expected in October 2002, leaving little time to incorporate them into the national rollout. Despite the lack of a final report from NQF and an incomplete pilot evaluation, CMS has announced a set of indicators it will begin reporting nationally in November 2002. 
GAO has serious concerns about the potential for the public to be confused by the published quality information, especially if there are significant changes to the quality indicators due to the NQF's review. CMS's proposed reporting format implies a precision in the data that is lacking at this time. While acknowledging this problem, CMS said it prefers to wait until after the national rollout to modify the presentation of the data. GAO's analysis of data currently available from the pilot states demonstrated there was ample opportunity for the public to be confused, highlighting the need for clear descriptions of the data's limitations and easy access to impartial experts hired by CMS to operate consumer hotlines. CMS has not yet demonstrated its readiness to meet these consumer needs either directly or through the hotlines fielding public questions about confusing or conflicting quality data. CMS acknowledged that further work is needed to refine its initiative, but believes that its indicators are sufficiently valid, reliable, and accurate to move forward with national implementation in November 2002 as planned.
While the overall goal of Title II under both HEA and NCLBA is to improve student achievement by improving the teacher workforce, some of the specific approaches differ. For example, a major focus of HEA provisions is on the training of prospective teachers (preservice training), while NCLBA provisions focus more on improving teacher quality in the classroom (in-service training) and on hiring highly qualified teachers. Also, both laws use reporting mechanisms to increase accountability. However, HEA focuses more on institutions of higher education, while NCLBA focuses on schools and school districts. Additionally, HEA focuses on expanding the teacher workforce by supporting recruitment from other professions. In addition, HEA and NCLBA Title II funds are distributed differently. HEA teacher quality funds are disbursed through three distinct types of grants: state, partnership, and recruitment grants. State grants are available for states to implement activities to improve teacher quality by enhancing teacher training efforts, while partnership grants support the collaborative efforts of teacher training programs and other eligible partners. Recruitment grants are available to states or partnerships for teacher recruitment activities. All three types of grants require a match from non-federal sources. For example, states receiving state grants must provide a matching amount in cash or in-kind support from non-federal sources equal to 50 percent of the amount of the federal grant. All three grants are one-time competitive grants; however, state and recruitment grants run for 3 years, while partnership grants run for 5 years. HEA amendments in 1998 required that 45 percent of funds be distributed to state grants, 45 percent to partnership grants, and 10 percent to recruitment grants. As of April 2007, 52 of the 59 eligible entities (the states, the District of Columbia, and 8 territories) had received state grants. 
Because the authorizing legislation specified that entities could receive a state grant only once, only seven would be eligible to receive future state grants. In our 2002 report, we suggested that if Congress decides to continue funding teacher quality grants in the upcoming reauthorization of HEA, it might want to clarify whether all 59 entities would be eligible for state grant funding under the reauthorization, or whether eligibility would be limited to only those states that have not previously received a state grant. We also suggested that if Congress decides to limit eligibility to entities that have not previously received a state grant, it may want to consider changing the 45 percent funding allocation for state grants. In a 2005 appropriations act, Congress waived the allocation requirement. In 2006, about 9 percent of funds were awarded for state grants, 59 percent for partnership grants, and 33 percent for recruitment grants. When Congress reauthorizes HEA, it may want to further clarify eligibility and allocation requirements for this program. NCLBA, funded at a much higher level than HEA, provides funds to states through annual formula grants. In 2006, Congress appropriated $2.89 billion through NCLBA and $59.9 million through HEA for teacher quality efforts. While federal funding for teacher initiatives was provided through two other programs prior to NCLBA, the act increased the level of funding to help states and districts implement the teacher qualification requirements. States and districts generally receive NCLBA Title II funds based on the amount they received in 2001, the percentage of children residing in the state or district, and the number of those children in low- income families. 
After reserving up to 1 percent of the funds for administrative purposes, states pass 95 percent of the remaining funds to the districts and retain the rest to support state-level teacher initiatives and to support NCLBA partnerships between higher education institutions and high-need districts that work to provide professional development to teachers. While there is no formula in NCLBA for how districts are to allocate funds to specific schools, the act requires states to ensure that districts target funds to those schools with the highest number of teachers who are not highly qualified, schools with the largest class sizes, or schools that have not met academic performance requirements for 2 or more consecutive years. In addition, districts applying for Title II funds from their states are required to conduct a districtwide needs assessment to identify their teacher quality needs. NCLBA also allows districts to transfer these funds to most other major NCLBA programs, such as those under Title I, to meet their educational priorities. HEA provides grantees and NCLBA provides states and districts with the flexibility to use funds for a broad range of activities to improve teacher quality, including many activities that are similar under both acts. HEA funds can be used, among other activities, to reform teacher certification requirements, professional development activities, and recruitment efforts. In addition, HEA partnership grantees must use their funds to implement reforms to hold teacher preparation programs accountable for the quality of teachers leaving the program. Similarly, acceptable uses of NCLBA funds include teacher certification activities, professional development in a variety of core academic subjects, recruitment, and retention initiatives. In addition, activities carried out under NCLBA partnership grants are required to coordinate with any activities funded by HEA. Table 1 compares activities under HEA and NCLBA. 
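The fund distribution mechanics described above reduce to straightforward percentage arithmetic. The sketch below is a hypothetical illustration only; the function names and the dollar figure are ours, not drawn from either statute, and actual allocations involve formula factors (2001 base amounts, child counts, poverty counts) not modeled here.

```python
def hea_state_match(federal_grant):
    """Non-federal match required for an HEA state grant: cash or
    in-kind support equal to 50 percent of the federal grant amount."""
    return 0.50 * federal_grant

def nclba_state_split(total_funds, admin_rate=0.01):
    """NCLBA Title II state-level allocation: up to 1 percent may be
    reserved for administration; 95 percent of the remainder passes to
    districts; the state retains the rest for state-level teacher
    initiatives and partnership support."""
    admin = admin_rate * total_funds
    remainder = total_funds - admin
    to_districts = 0.95 * remainder
    state_retained = remainder - to_districts
    return admin, to_districts, state_retained

# Illustrative only: a hypothetical $100 million state allocation yields
# roughly $1.0 million for administration, $94.05 million for districts,
# and $4.95 million retained by the state.
admin, districts, retained = nclba_state_split(100_000_000)
```

Note that "up to 1 percent" is a ceiling; a state reserving less for administration would pass correspondingly more through the 95 percent district share.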
With the broad range of activities allowed under HEA and NCLBA, we found both similarities and differences in the activities undertaken. For example, districts chose to spend about one-half of their NCLBA Title II funds ($1.2 billion) in 2004-2005 on class-size reduction, an activity not specified by HEA. We found that some districts focused their class-size reduction efforts on specific grades, depending on their needs. One district we visited focused its NCLBA-funded class-size reduction efforts on the eighth grade because the state already provided funding for reducing class size in other grades. However, while class-size reduction may contribute to teacher retention, it also increases the number of classrooms that need to be staffed, and we found that some districts had shifted funds away from class-size reduction to initiatives to improve teachers’ subject matter knowledge and instructional skills. Similarly, Education’s data showed that the percentage of NCLBA district funds spent on class-size reduction had decreased since 2002-2003, when 57 percent of funds were used for this purpose. HEA and NCLBA both funded professional development and recruitment efforts, although the specific activities varied somewhat. For example, mentoring was the most common professional development activity among the HEA grantees we visited. Of the 33 HEA grant sites we visited, 23 were providing mentoring activities for teachers. In addition, some grantees used their funds to establish a mentor training program to ensure that mentors had consistent guidance. One state used the grant to develop mentoring standards and to build the capacity of trainers to train teacher mentors within each district. Some districts used NCLBA Title II funds for mentoring activities as well. We also found that states and districts used NCLBA Title II funds to support other types of professional development activities. 
For example, two districts we visited spent their funds on math coaches who performed tasks such as working with teachers to develop lessons that reflected state academic standards and assisting them in using students’ test data to identify and address students’ academic needs. Additionally, states used a portion of the NCLBA Title II funds they retained to support professional development for teachers in core academic subjects. In two states that we visited, officials reported that state initiatives specifically targeted teachers who had not met the subject matter competency requirements of NCLBA. These initiatives either offered teachers professional development in core academic subjects or reimbursed them for taking college courses in the subjects taught. Both HEA and NCLBA funds supported efforts to recruit teachers. Many HEA grantees we interviewed used their funds to fill teacher shortages in urban schools or to recruit new teachers from nontraditional sources: mid-career professionals, community college students, and middle- and high-school students. For example, one university recruited teacher candidates with undergraduate degrees to teach in a local school district with a critical need for teachers while they earned their master’s in education. The program offered tuition assistance, and in some cases, the district paid a full teacher salary, with the stipulation that teachers continue teaching in the local school district for 3 years after completing the program. HEA initiatives also included efforts to recruit mid-career professionals by offering an accelerated teacher training program for prospective teachers already in the workforce. Some grantees also used their funds to recruit teacher candidates at community colleges. For example, one of the largest teacher training institutions in one state partnered with six community colleges around the state to offer training that was not previously available. 
Other grantees targeted middle and high school students. For example, one district used its grant to recruit interns from 14 high-school career academies that focused on training their students for careers as teachers. Districts we visited used NCLBA Title II funds to provide bonuses to attract successful administrators, advertise open teaching positions, and attend recruitment events to identify qualified candidates. In addition, one district used funds to expand alternative certification programs, which allowed qualified candidates to teach while they worked to meet requirements for certification. Finally, some states used HEA funds to reform certification requirements for teachers. Reforming certification or licensing requirements was included as an allowable activity under both HEA and NCLBA to ensure that teachers have the necessary teaching skills and academic content knowledge in the subject areas. HEA grantees also reported using their funds to allow teacher training programs and colleges to collaborate with local school districts to reform the requirements for teacher candidates. For example, one grantee partnered with institutions of higher education and a local school district to expose teacher candidates to urban schools by providing teacher preparation courses in public schools. Under both HEA and NCLBA, Education has provided assistance and guidance to recipients of these funds and is responsible for holding recipients accountable for the quality of their activities. In 1998, Education created a new office to administer HEA grants and provide assistance to grantees. While grantees told us that the technical assistance the office provided on application procedures was helpful, our previous work noted several areas in which Education could improve its assistance to HEA grantees, in part through better guidance. 
For example, we recommended that in order to effectively manage the grant program, Education further develop and maintain its system for regularly communicating program information, such as information on successful and unsuccessful practices. We noted that without knowledge of successful ways of enhancing the quality of teaching in the classroom, grantees might be wasting valuable resources by duplicating unsuccessful efforts. Since 2002, Education has made changes to improve communication with grantees and potential applicants. For example, the department presented workshops to potential applicants and updated and expanded its program Web site with information about program activities, grant abstracts, and other teacher quality resources. In addition, in its 2005 annual report on teacher quality, Education provided examples of projects undertaken to improve teacher quality and of how some of these efforts indicate improved teacher quality. Education also has provided assistance to states, districts, and schools using NCLBA Title II funds. The department offers professional development workshops and related materials that teachers can access online through Education’s Web site. In addition, Education assisted states and districts by providing updated guidance. In our 2005 report, we noted that officials from most states and districts we visited who used Education’s Web site to access information on teacher programs or requirements were unaware of some of Education’s teacher resources or had difficulty accessing those resources. We recommended that Education explore ways to make the Web-based information on teacher qualification requirements more accessible to users of its Web site. Education immediately took steps in response to the recommendation and reorganized information on its Web site related to the teacher qualification requirements. 
In addition to providing assistance and guidance, Education is responsible for evaluating the efforts of HEA and NCLBA recipients and for overseeing program implementation. Under HEA, Education is required to report annually on the quality of teacher training programs and the qualifications of current teachers. In 2002, we found that the information collected for this requirement did not allow Education to accurately report on the quality of HEA’s teacher training programs and the qualifications of current teachers in each state. To improve the data that states collect from institutions that receive HEA teacher quality grants, as well as from all institutions that enroll students receiving federal student financial assistance and train teachers, we recommended that Education more clearly define key data terms so that states provide uniform information. Further, in 2004, the Office of Management and Budget (OMB) completed a Program Assessment Rating Tool (PART) assessment of this program and gave it a rating of “results not demonstrated,” due to a lack of performance information and program management deficiencies. Education officials told us that they had aligned HEA’s data collection system with NCLBA definitions of terms such as “highly qualified teacher.” However, based on the PART assessment, the Administration proposed eliminating funding for HEA teacher quality grants in its proposed budgets for fiscal years 2006-2008 and redirecting the funds to other programs. Congress has continued to fund this program in fiscal years 2006 and 2007. Education has responded to our recommendations and to issues raised in the PART assessment related to evaluating grantee activities and providing more guidance to grantees on the types of information needed to determine effectiveness. When Congress amended HEA in 1998 to provide grants to states and partnerships, it required that Education evaluate the activities funded by the grants. 
In 2005, Education established performance measures for two of the teacher quality enhancement programs—state grants and partnership grants—and required grantees to provide these data in their annual performance plans submitted to Education. The performance measure for state grants is the percentage of prospective teachers who pass subject matter tests, while the measure for partnership grants is the percentage of participants who complete the program and meet the definition of being “highly qualified.” In addition, in 2006, Education included information in letters to grantees on the types of information that it requires to assess the effectiveness of its teacher quality programs. For example, in its letters to state grantees, Education noted that when reporting on quantitative performance measures, grantees must show how their actual performance compared to the targets (e.g., benchmarks or goals) that were established in the approved grant application for each budget period. In addition, in May 2006, Education issued its final report on HEA’s partnership grants, focusing on the 25 grantees of the 1999 cohort. The goal of the study was to learn about the collaborative activities taking place in partnerships. It was designed to examine approaches for preparing new and veteran teachers and to assess the sustainability of project activities after the grant ends. Among its findings, Education reported that partnerships encouraged and supported collaboration between institutions of higher education and schools to address teacher preparation needs. Under NCLBA, Education holds districts and schools accountable for improvements in student academic achievement, and holds states accountable for reporting on the qualifications of teachers. NCLBA set the end of the 2005-2006 school year as the deadline for teachers of core academic subjects, such as math and science, to be highly qualified. 
Teachers meeting these requirements must (1) have at least a bachelor’s degree, (2) be certified to teach by their state, and (3) demonstrate subject matter competency in each core academic subject they teach. Education collects state data on the percent of classes taught by highly qualified teachers and conducts site visits in part to determine whether states appropriately implemented highly qualified teacher provisions. In state reviews conducted as part of its oversight of NCLBA, Education identified several areas of concern related to states’ implementation of teacher qualification requirements and provided states feedback. For example, some states did not include the percentage of core academic classes taught by teachers who are not highly qualified in their annual state report cards, as required. In addition, because some states inappropriately defined teachers as highly qualified, the data that these states reported to Education were inaccurate, according to a department official. In many states, the requirements for teachers were not sufficient to demonstrate subject matter competency. Since subject matter competency is a key part of the definition of a highly qualified teacher, such states’ data on the extent to which teachers have met these requirements could be misleading. Education also found that a number of states were incorrectly defining districts as high-need in order to make more districts eligible for partnerships with higher education institutions. According to Education, each of these states corrected its data, and the department will continue to monitor states to ensure they use the appropriate data. 
In addition to Education’s oversight efforts, OMB completed a PART assessment of NCLBA Title II in 2005 and rated the program as “moderately effective.” While OMB noted that the program is well-managed, it also noted that the program has not demonstrated cost-effectiveness and that an independent evaluation has not been completed to assess program effectiveness. In response to OMB’s assessment, Education took steps to more efficiently monitor states and conducted two program studies related to teacher quality. An Education official told us that the program studies had been conducted but the department has not yet released the findings. In conclusion, the nation’s public school teachers play a key role in educating 48 million students, the majority of our future workforce. Recognizing the importance of teachers in improving student performance, the federal government, through HEA and NCLBA, has committed significant resources and put in place a series of reforms aimed at improving the quality of teachers in the nation’s classrooms. With both acts up for reauthorization, an opportunity exists for the Congress to explore potential interrelationships in the goals and initiatives under each act. While HEA and NCLBA share the goal of improving teacher quality, the extent to which they complement each other is not clear. Our separate studies of teacher quality programs under each of the laws have found common areas for improvement, such as data quality and assistance from Education. We have also found that states, districts, schools, and grantees under both laws engage in similar activities. However, not much is known about how well, if at all, these two laws are aligned. Thus, there may be opportunities to better understand how the two laws are working together at the federal, state, and local level. 
For example, exploring links between efforts aimed at improving teacher preparation at institutions of higher education and efforts to improve teacher quality at the school or district level could identify approaches to teacher preparation that help schools the most. Mr. Chairman, this concludes my prepared statement. I welcome any questions you or other Members of this Subcommittee may have at this time. For further information regarding this testimony, please contact me at 202-512-7215. Individuals making key contributions to this testimony include Harriet Ganson, Bryon Gordon, Elizabeth Morrison, Cara Jackson, Rachel Valliere, Christopher Morehouse, and Jessica Botsford. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Teachers are the single largest resource in our nation's elementary and secondary education system. However, according to recent research, many teachers lack competency in the subjects they teach. In addition, research shows that most teacher training programs leave new teachers feeling unprepared for the classroom. While the hiring and training of teachers is primarily the responsibility of state and local governments and institutions of higher education, the federal investment in enhancing teacher quality is substantial and growing. In 1998, the Congress amended the Higher Education Act (HEA) to enhance the quality of teaching in the classroom and in 2001 the Congress passed the No Child Left Behind Act (NCLBA), which established federal requirements that all teachers of core academic subjects be highly qualified. 
This testimony focuses on (1) approaches used in teacher quality programs under HEA and NCLBA, (2) the allowable activities under these acts and how recipients are using the funds, and (3) how Education supports and evaluates these activities. This testimony is based on prior GAO reports. We updated information where appropriate. While the overall goal of Title II in both HEA and NCLBA is to improve teacher quality, some of their specific approaches differ. For example, a major focus of HEA provisions is on the training of prospective teachers while NCLBA provisions focus more on improving teacher quality in the classroom and hiring highly qualified teachers. Both laws use reporting mechanisms to increase accountability; however, HEA focuses more on institutions of higher education while NCLBA focuses on schools and districts. In addition, HEA and NCLBA grants are funded differently, with HEA funds distributed through one-time competitive grants, while Title II under NCLBA provides funds annually to all states through a formula. Both acts provide states, districts, or grantees with the flexibility to use funds for a broad range of activities to improve teacher quality, including many activities that are similar, such as professional development and recruitment. A difference is that NCLBA's Title II specifies that teachers can be hired to reduce class size while HEA does not specifically mention class-size reduction. Districts chose to spend about one-half of their NCLBA Title II funds on class-size reduction in 2004-2005. On the other hand, professional development and recruitment efforts were the two broad areas where recipients used funds for similar activities, although the specific activities varied somewhat. Many HEA grantees we visited used their funds to fill teacher shortages in urban schools or recruit teachers from nontraditional sources, such as mid-career professionals. 
Districts we visited used NCLBA funds to provide bonuses, advertise open teaching positions, and attend recruitment events, among other activities. Under both HEA and NCLBA, Education has provided assistance and guidance to recipients of these funds and is responsible for holding recipients accountable for the quality of their activities. GAO's previous work identified areas where Education could improve its assistance on teacher quality efforts and more effectively measure the results of these activities. Education has made progress in addressing GAO's concerns by disseminating more information to recipients, particularly on teacher quality requirements, and improving how the department measures the results of teacher quality activities by establishing definitions and performance targets under HEA. While HEA and NCLBA share the goal of improving teacher quality, the extent to which they complement each other is not clear. States, districts, schools, and grantees under both laws engage in similar activities. However, not much is known about how well, if at all, these two laws are aligned. Thus, there may be opportunities to better understand how the two laws are working together at the federal, state, and local level.
The National Defense Authorization Act for fiscal year 2002 requires DOD to develop and maintain an inventory of defense sites known or suspected to contain unexploded ordnance, discarded military munitions, or munitions constituents and to annually update the inventory and list prioritizing these sites for cleanup. Figure 1 shows an example of unexploded ordnance found at a munitions response site on Beale Air Force Base in 2008. As of fiscal year 2008, DOD had identified 3,674 munitions response sites in the United States and its territories and outlying areas. Figure 2 shows the number of sites in each state and in United States territories and outlying areas. The majority of munitions response sites are located on active installations (46 percent) and FUDS (45 percent), with the remainder located on BRAC installations (9 percent). The Corps is responsible for cleanup at 45 percent (1,661) of the munitions response sites, the Army for 29 percent (1,080), the Air Force for 18 percent (644), and the Navy for 8 percent (289), as shown in figure 3. Each of the military services and the Corps has established its own organizational structure to implement the MMRP. These structures, which are similar to the structures of their respective IRPs, have various levels of management; for ease of discussion, we have identified three broad levels. At the operational level, key responsibilities rest with project managers who directly oversee MMRP activities at Army, Air Force, and Navy active and BRAC installations and at FUDS. The project managers’ responsibilities may include planning munitions response actions, developing cleanup cost estimates, coordinating with stakeholders, and ensuring oversight of program activities, such as monitoring technical work conducted by the contractors who are responsible for various aspects of the cleanup process. 
Next, at the middle-management level, managers provide direct oversight of MMRP activities conducted at the operational level and also serve as liaisons between the operational level and the top leadership level of the organization. Managers at the middle-management level may be responsible for monitoring MMRP activities, such as reviewing cleanup plans developed at the operational level, determining operational level funding, and ensuring that their munitions response programs are in compliance with applicable laws and policies. Finally, managers at the leadership level of the organization may conduct program reviews to ensure MMRP activities implemented by the operational and middle-management levels are in compliance with applicable laws, regulations, and DOD policy and to approve funding requests for munitions response actions that have been recommended by the levels below them.

DOD’s Munitions Response Site Prioritization Protocol is established by regulation. 70 Fed. Reg. 58,016 (Oct. 5, 2005) (codified at 32 C.F.R. Pt. 179 (2010)). When a military service or the Corps is able to complete at least one of the protocol’s modules, it assigns the site a relative priority score of one through eight, with one representing the highest priority or greatest risk and eight the lowest priority or lowest risk. The military services and the Corps may not assign a relative priority score to some sites and instead assign one of the following alternative designations:

Evaluation pending. Indicates that there are known or suspected hazards present but that sufficient information is not available to populate the data elements for at least one of the modules and the site requires further evaluation.

No longer required. Indicates that the site no longer requires a priority score because DOD has conducted a response action and determined that no further action is required.

No known or suspected hazard. Indicates that the site does not require an evaluation to determine a relative priority score because review of the site concluded that no hazards are present. 
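The scoring and designation rules just described can be sketched as a small function. This is an illustrative sketch only: the function name and the flag-based data shape are assumptions for illustration, not DOD's actual protocol, which evaluates detailed data modules.

```python
# Illustrative sketch of the relative priority rules described above.
# Names and data shapes are hypothetical; the protocol's data modules
# are simplified here to a precomputed score and a pair of flags.

def assign_priority(site):
    """Return a relative priority score (1 = highest risk, 8 = lowest)
    or one of the alternative designations described in the report."""
    if site.get("no_hazard"):
        return "No known or suspected hazard"
    if site.get("response_complete"):
        return "No longer required"
    score = site.get("module_score")  # from at least one completed module
    if score is None:
        return "Evaluation pending"   # insufficient data to score
    if not 1 <= score <= 8:
        raise ValueError("protocol scores range from 1 (highest) to 8 (lowest)")
    return score

print(assign_priority({"module_score": 2}))  # a scored site
print(assign_priority({}))                   # a site awaiting evaluation
```

A site record carrying no flags and no module score falls through to "Evaluation pending," mirroring how DOD treats sites that lack the information needed for scoring.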
According to DOD’s policy, the military services and the Corps will clean up munitions response sites with a higher relative priority score before a site with a lower score. However, the military services and the Corps also can consider other factors, such as military mission needs, land reuse plans, and stakeholder concerns, in determining which sites to clean up first. DOD refers to the process of deciding which sites to clean up first based on relative priority scores in combination with other factors as “sequencing” sites for cleanup. DOD officials told us that, in deciding what actions, if any, are needed to clean up a site identified as potentially contaminated with military munitions, the military services and the Corps follow the process established under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980. Pub. L. No. 96-510, as amended. A key law amending CERCLA was the Superfund Amendments and Reauthorization Act of 1986, Pub. L. No. 99-499 (1986), which provided that federal agencies “shall be subject to, and comply with, this Act in the same manner and to the same extent, both procedurally and substantively,” as any private party. Id., § 120. See 42 U.S.C. § 9620 (2010). For complete descriptions of the phases of this process, see 40 C.F.R. §§ 300.420, 300.430, 300.435 (2010). After a site reaches response complete, the military services and the Corps may conduct long-term management at the site. For example, they may monitor environmental conditions, enforce land use controls, and maintain any remedies to ensure continued protection as designed. Long-term management occurs until no further environmental restoration response actions are appropriate or anticipated. According to a senior DOD official, DOD does not require the military services or the Corps to track the time they spend working on MMRP activities separately from the time they spend working on other environmental restoration program activities. 
As a result, we were unable to determine the staffing levels dedicated to the MMRP. According to officials from the Army, Air Force, Navy, and the Corps, their staff support both the IRP and the MMRP. However, these officials told us they do not track the time that staff spend working on each of the two programs because Congress does not appropriate funding for the programs separately and such tracking would add no value to accomplishing cleanup of these sites. Moreover, a senior Army official told us that the extent to which staff work on the IRP and MMRP varies greatly among employees from day to day, making it extremely difficult to quantify the time devoted to each program. DOD provides the military services and the Corps combined annual funding for all of their environmental restoration programs. It is the responsibility of the military services and the Corps to decide how to prioritize that funding among their environmental programs, such as the IRP and MMRP. Between 2002 and 2008, the military services and the Corps directed most of their IRP and MMRP environmental restoration funds to their respective IRPs—a total of about $9.7 billion compared with the approximately $1.2 billion they directed to their respective MMRPs (see fig. 5). The term “obligations” refers to the amount of money the military services and the Corps legally committed for payment. DOD reported to Congress that it had achieved response complete at more than one-third of its munitions response sites by the end of fiscal year 2008. According to DOD, most of these sites did not require cleanup under the MMRP. For the small number of sites where the military services and the Corps have conducted cleanup activities under the MMRP, a variety of factors influenced the selection of these sites, including immediate danger to public safety and pressing military mission needs. 
However, for the majority of sites in the MMRP inventory, the military services and the Corps are still in the process of gathering information necessary to assess the sites’ relative risk levels in order to set cleanup priorities. In some cases, they have also begun to develop approaches to sequencing their respective sites for cleanup. (DOD, Defense Environmental Programs Annual Report to Congress (Fiscal Year 2008), pp. 16-17.) DOD closed some sites administratively, without actual cleanup, for one of several reasons: (1) they were determined to be currently in use by the military; (2) they were merged with other sites and therefore ceased to exist as independent sites; (3) they never actually existed and were added to the inventory in error; or (4) the hazard was not of DOD origin and therefore not DOD’s responsibility to clean up. Since 2001, we have been concerned about the lack of clarity in DOD’s approach for reporting on response complete sites. That year we recommended that DOD exclude projects from its “completed” list that did not require actual cleanup and were closed solely as the result of an administrative action. However, the department disagreed with our recommendation, and its environmental programs annual reports to Congress since 2001 have continued to report administratively closed sites as response complete with very limited explanation. Specifically, in its fiscal year 2008 annual report, DOD mentioned in a note to a figure that the response complete category included both sites it cleaned up and sites that did not require actual cleanup, which we have defined as administratively closed. A senior DOD official told us that DOD reports administratively closed sites and sites that were actually cleaned up as response complete because in both cases it has completed its response under the CERCLA process. 
Nonetheless, because DOD does not clearly and prominently explain in its reports that many of these sites were not actually cleaned up under the MMRP, we continue to believe that the information being provided to Congress and the public is misleading and overstates the level of progress made cleaning up sites under the MMRP. (GAO, Environmental Contamination: Cleanup Actions at Formerly Used Defense Sites, GAO-01-557 (Washington, D.C.: July 31, 2001).) Our analysis indicates that the military services and the Corps have conducted cleanup activities under the MMRP at 84 of the 1,318 sites DOD reported as response complete as of fiscal year 2008, as shown in figure 7. According to military service and Corps officials, these sites were selected for cleanup based on an assessment of relative risk and other factors. These other factors included imminent danger to public safety, pressing military mission needs, land reuse plans, and stakeholder concerns. For example:

Imminent danger. According to a senior Army official, the Corps cleaned up the Dolly Sods North FUDS, located in the Monongahela National Forest in West Virginia, for imminent danger reasons. This site had been assigned a medium risk assessment code score. Hikers visiting the site—a wilderness area currently owned by the U.S. Forest Service and visited by approximately 60,000 people annually—reported finding military munitions on the ground. For example, in 1996, a piece of live ordnance was found about 300 feet from a visitor parking lot. As a result, the Corps took cleanup actions that involved removing ordnance from trail areas and campsites because it determined that these items presented an imminent danger to the public. The Corps completed the cleanup in 2000 and also implemented an explosives safety education program for visitors to the site, which is ongoing.

Mission needs. 
According to Air Force officials, the Air Force selected the sole munitions response site at Little Rock Air Force Base in Arkansas for cleanup in fiscal year 2009 to meet mission needs, even though it received a low prioritization protocol score. The factors that drove the decision to clean up this site were that (1) the site is a possible location for a future Security Forces Regional Training Center, and (2) cleaning up the only MMRP site on the base would release the entire base from the program and thus reduce related administrative costs. The Air Force estimates that site cleanup will be complete in fiscal year 2010.

Land reuse plans. According to a senior Army official, the Army funded cleanup work done by a local redevelopment authority on a munitions response site at Fort Ord, a BRAC installation near Monterey, California, to meet land reuse plans, even though the Army assigned the site a medium risk assessment code score and has not scored it under the munitions response site prioritization protocol. The Army initiated cleanup at this site largely in response to the community’s request to implement a land reuse plan to construct a veterans’ cemetery. The central California coast region currently lacks burial space for the approximately 50,000 veterans residing in the area, some of whom served in World War II and now wish to be buried at Fort Ord. According to a senior Army official, as of January 2010, the redevelopment authority had completed cleanup, and the veterans’ cemetery can be developed as soon as funding is available.

Stakeholder concerns. According to a senior Army official, the Corps decided to clean up the Torpedo and Bombing Range FUDS at Pyramid Lake northeast of Reno, Nevada, because of stakeholder concerns, even though the Corps assigned the site a low risk assessment code score. 
Consequently, during fiscal year 2007—the year the military services and the Corps began reporting prioritization protocol scores—and fiscal year 2008, the military services were only able to report relative priority scores to DOD for 432 sites, or 19 percent of the 2,333 munitions response sites that needed scoring. Specifically, the Air Force reported scores for 53 sites, or 13 percent of its 417 sites; the Army reported scores for 175 sites, or 29 percent of its 603 sites; and the Navy reported scores for 204 sites, or 89 percent of its 230 sites. The military services and the Corps assigned the remaining 1,901 sites the alternative rating “evaluation pending” as of the end of fiscal year 2008, indicating that they needed more information before they could calculate relative priority scores. (The Pyramid Lake site discussed above would not have been scored using the Munitions Response Site Prioritization Protocol because its cleanup was completed in 2006.) According to a senior Corps official, the Corps has calculated scores for many of its sites, but those scores have not yet been finalized pending an internal review. The same official said that the Corps will report scores for about 600 sites to DOD by the end of fiscal year 2010 and will report scores for the remaining sites by fiscal year 2014. The percentage of sites with reported scores by military service and the Corps is shown in figure 8. After they have assigned prioritization protocol scores to all of their sites, each of the military services and the Corps is to determine which sites to sequence and allocate funding to first for the next phase of the cleanup process. DOD’s regulation establishing the Munitions Response Site Prioritization Protocol provides for subsequent sequencing to consider other factors and provides a nonexclusive list of example factors, such as mission needs and stakeholder input. 
The military services and the Corps are to use their installation-specific management action plans—plans that describe an integrated, coordinated approach for conducting all required environmental restoration activities, including schedules and cost estimates—as a vehicle for sequencing. The regulation does not, however, establish a methodology for how such other factors are to be considered in sequencing decisions. In the absence of guidance from DOD that establishes a consistent set of requirements, we found that the Air Force, Army, and the Corps have begun to independently develop their own approaches for sequencing, and the Navy has not yet determined whether it needs to develop such an approach. Specifically, we found the following:

The Air Force has developed detailed, written guidance for incorporating factors other than risk into its site sequencing decisions. The guidance requires the use of a numerical scoring process that incorporates prioritization protocol scores, as well as legal, scheduling, and mission factors, to sequence its sites for cleanup. According to Air Force officials, the Air Force is applying this approach to a single pool of both IRP and MMRP sites, which they believe allows them to fund cleanups of the highest-priority sites first across both programs. In addition, a senior Air Force official told us that using the standardized process ensures fairness and transparency in site sequencing.

According to a senior Army official, the Army is currently developing a sequencing policy that it hopes to release by May 2010, which will apply to sites managed by both the Army and the Corps. The policy will likely require program managers to document the reasons for their sequencing decisions to facilitate transparency and allow for more effective Army oversight. 
However, the official said that the Army does not plan to require a particular approach to sequencing and believes a quantitative approach similar to the Air Force’s could be too restrictive and not allow adequate flexibility for decision making.

According to a senior Navy official, it is too early to determine whether the Navy needs to issue additional guidance beyond the framework that establishes the prioritization protocol and sequencing considerations currently provided in the DOD regulations. According to the official, although the Navy has initially prioritized many sites based on preliminary assessment data, it does not expect to begin fully sequencing sites until 2011, when it completes site inspections and applies the data gathered to generate relative priority scores. The Navy will wait to see if it encounters any difficulties before deciding whether to develop additional guidance.

According to a senior DOD official, the department plans to give the military services and the Corps the flexibility to make sequencing decisions as they see fit. This official said that the military services and the Corps have experience making sequencing decisions for the IRP, and DOD has not encountered any problems with these decisions. As a result, the official said DOD sees no need to provide guidance on how factors other than risk should be considered when making decisions about which sites to sequence first for cleanup. However, in the absence of such guidance, the military services and the Corps may not consistently (1) consider the same range of factors in making their decisions or (2) give the same relative significance to risk and other factors in making their cleanup sequencing decisions. We believe this could affect the consistency and transparency of sequencing decisions. 
DOD has not yet implemented the statutory requirement contained in the fiscal year 2007 National Defense Authorization Act to establish a key performance goal for reaching remedy in place or response complete at munitions response sites on FUDS, although DOD has established the required performance goals for active and BRAC 2005 sites. After a final remedial action has been constructed and is operating as planned, DOD describes the site status as remedy in place. While operation of the remedy is ongoing but cleanup objectives have not yet been met, the site cannot be considered response complete. DOD categorizes sites as response complete at any point in the process when it determines no further response is appropriate, including sites without a remedy in place. According to DOD, such determinations are made in conjunction with regulators and stakeholders. The act also directed DOD to determine whether it is feasible to establish interim remedy in place or response complete performance goals and, if feasible, to report them. However, DOD has not yet determined whether such goals are feasible—the necessary initial step before reporting interim goals. A senior DOD official said that DOD will determine whether interim goals are feasible after the military services and the Corps have completed the site inspection phase for all munitions response sites, which they expect to do by the end of fiscal year 2010. The DOD official said that it is not practical for DOD to establish interim goals without first understanding the nature and extent of cleanup requirements at munitions response sites. However, DOD was able to establish its performance goals for reaching remedy in place or response complete for munitions response sites at active and BRAC 2005 installations, and we believe DOD should therefore be able to determine the feasibility of related interim goals. Furthermore, until DOD determines whether interim goals are feasible and, if so, reports them to Congress, DOD will not have addressed this requirement. 
Moreover, since DOD’s MMRP remedy in place or response complete performance goals are long-term—2017 for sites at BRAC 2005 installations, 2020 for sites at active installations, and possibly 2060 or later for FUDS—without this determination and reporting of interim goals, Congress may have limited information with which to measure progress of the MMRP over the next decade. DOD collects data on two of the many factors that can influence project duration at munitions response sites. We measured project duration—calculated using both month and year information—as the length of time between the earliest phase start date and the latest phase end date. For the purposes of our analysis, if the most recent phase was still in process, we used September 2008 as the end date because that was the latest date for which we had Knowledge-Based Corporate Reporting System data. The military services and the Corps report funds obligated for cleanup activities at munitions response sites in a fiscal year to DOD. Officials from the military services and the Corps told us that a number of other factors can influence project duration, but DOD’s database does not include information on these factors, which include the following:

The need to achieve consensus with stakeholders, such as regulators or community members, can increase project duration. For example, failure to reach consensus with regulators increased project duration at the Jackson Park Naval Housing Complex, according to Navy officials. One area of disagreement between Navy officials and federal regulators was over the number of detected metal pieces that needed to be excavated during the remedial investigation phase. Federal regulators wanted the Navy to excavate a higher percentage of detected metal pieces than the Navy initially intended to excavate. After a lengthy process, the Navy and federal regulators were able to reach consensus on the percentage of metal pieces to excavate. 
Obtaining entry rights from current owners of FUDS properties takes time and can increase project duration. For example, a senior official from the Corps told us that a landowner at the Campbell Island, North Carolina, FUDS refused to grant the Corps access to the site because of dissatisfaction with the government. The site inspection phase was scheduled to start sometime after December 2008; however, as of February 2010, the Corps had not yet initiated the site inspection because the agency had not been able to obtain entry rights from the current landowner. Corps officials plan to contact the landowner sometime in 2010 in an effort to resolve the issue. Site-specific factors can also extend project duration in some cases. For example, Air Force officials told us that strict requirements from the New Hampshire State Historic Preservation Office delayed cleanup at New Boston Air Force Base. It took the Air Force longer to complete the investigative phases of the cleanup process because the Historic Preservation Office required that all objects discovered on the site that were not unexploded ordnance or munitions constituents be left in place to allow an archeologist to photograph and log each item for the historical record. We found that DOD lacks complete site-level data on obligated funds for the three phases of the cleanup process we examined—preliminary assessment, site inspection, and remedial investigation/feasibility study— for fiscal years 2001 through 2008. These are funds that DOD has legally committed to pay for activities conducted during a particular phase of the cleanup process. Assessing the extent to which DOD’s estimates of costs for MMRP cleanup phases are accurate requires both data on the estimated costs and funds obligated so they can be compared to determine how closely the estimates match the obligations. 
Our analysis of the 2,611 munitions response sites where work was conducted during the preliminary assessment phase in fiscal years 2001 through 2008 found that the database did not contain obligated funds data for 2,272 (or 87 percent) of the sites. According to a senior DOD official, the military services and the Corps often are unable to report funds obligated for preliminary assessments for individual sites because they sometimes conduct preliminary assessments for all sites on an installation at the same time. In these instances, obligated funds are reported for the entire installation as opposed to on a site-by-site basis. Moreover, according to this official, the preliminary assessment and site inspection phases are often conducted concurrently and obligated funds for these two phases are consolidated in the site inspection phase. However, our analysis of the 2,322 munitions response sites where work was conducted during the site inspection phase in fiscal years 2001 through 2008—including those sites that had a combined preliminary assessment and site inspection phase—found that the database did not have obligated funds data for 488 (or 21 percent) of these sites. Finally, our analysis of the 283 sites where work was conducted during the remedial investigation/feasibility study phase in fiscal years 2001 through 2008 found the database did not have obligated funds data for 116 (or 41 percent) of these sites. Figure 9 summarizes our analysis of the percentage of sites in these three phases of the cleanup process that did not have obligated funds data. A senior DOD official told us that in fiscal year 2009, DOD implemented additional, more rigorous quality assurance and control processes designed to detect errors and inconsistencies in its MMRP cost estimates. 
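The missing-data rates reported above follow directly from the site counts; a minimal sketch of the arithmetic, using the figures stated in the text:

```python
# Site counts from the fiscal year 2001-2008 analysis: for each phase,
# (sites with reported work, sites lacking obligated funds data).
phase_counts = {
    "preliminary assessment": (2_611, 2_272),
    "site inspection": (2_322, 488),
    "remedial investigation/feasibility study": (283, 116),
}

# Share of sites in each phase with no obligated funds data, rounded
# to whole percentages as reported (87, 21, and 41 percent).
missing_pct = {
    phase: round(100 * missing / total)
    for phase, (total, missing) in phase_counts.items()
}
```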
For example, the official said that one of the new data checks DOD began performing in 2009 was to examine sites scheduled to begin a cleanup phase in the future to ensure that the database also includes an estimate of the cost to complete that phase. However, the official said DOD is not currently evaluating whether the military services and the Corps are reporting obligated funds data for project phases that have been completed. DOD requires the military services and the Corps to gather obligated funds data and, according to the DOD official, they should be reporting these data to DOD for inclusion in the Knowledge-Based Corporate Reporting System. In the absence of complete site-level information on obligated funds, DOD or Congress may not be able to determine the accuracy of the military services’ and the Corps’ reported cost estimates for completing the various phases of the cleanup process. Furthermore, DOD or Congress ultimately may not have sufficient information to assess whether DOD’s estimates of its future cleanup liabilities under the MMRP are reliable. Thousands of munitions response sites that potentially pose risks to human health and the environment may need to be cleaned up before they can be reused, often for nonmilitary purposes. While we recognize that managing the MMRP is a large and complex task for DOD, the military services, and the Corps, we believe that in several areas there are opportunities for program management improvements. First, there is a need for guidance on how to conduct site sequencing in a manner that is consistent and transparent. While Congress mandated a consistent and transparent approach to assessing relative risks to assign cleanup priorities at sites, it did not provide for a process for assessing other factors, such as stakeholder concerns and military mission needs, when making site sequencing decisions; and DOD has not provided guidance to the military services and the Corps on how to conduct such assessments. 
Without DOD guidance on how to determine which sites to sequence first for cleanup, we are concerned that the military services and the Corps could use inconsistent processes for making these decisions. Second, we remain concerned about the transparency of DOD’s response complete information provided to Congress. DOD has categorized 1,234 sites as response complete, but these sites did not require actual cleanup under the MMRP, and we believe that this fact is not adequately explained in DOD’s annual report to Congress. As a result, Congress and the public may be misled about the extent to which actual cleanups have taken place under the MMRP to date. Third, despite a legal requirement to do so, DOD has not yet established the remedy in place or response complete goal for FUDS nor determined and reported any interim goals it finds feasible for the MMRP. Implementing these requirements would provide DOD, Congress, and the public better information to track progress toward cleaning up munitions response sites. Finally, the database that DOD uses to help manage its MMRP does not contain complete site-level data on obligated funds for the cleanup phases we examined. As a result, it is not possible to assess the accuracy of the cost estimates for activities conducted during these phases. As the MMRP matures and more sites begin actual cleanups, program costs will continue to increase and it will be critical for DOD to be able to determine whether its cost estimates for phases of the cleanup process are accurate, so that Congress and the public can have reasonable assurance that DOD’s estimates of its future cleanup liabilities under the MMRP are likely to be reliable. 
To improve the transparency of DOD’s reporting on progress in cleaning up MMRP sites, Congress may wish to consider requiring that DOD report, in a separate category from its accounting of “response complete” sites in the Defense Environmental Programs Annual Report to Congress, any sites that DOD determined did not require actual cleanup under the MMRP and were administratively closed. To improve consistency, transparency, and management of the MMRP, we recommend that the Secretary of Defense take the following three actions: develop guidance for the military services and the Corps that establishes a consistent approach for how factors other than relative risk should be considered in munitions response site sequencing decisions; establish and report to Congress (1) a goal for achieving remedy in place or response complete for FUDS, as required by law, and (2) such interim goals as DOD determines feasible for the remedy in place or response complete goals at munitions response sites on active and BRAC 2005 installations and FUDS; and establish a process to ensure the completeness of site-level obligated funds data in DOD’s Knowledge-Based Corporate Reporting System database. We provided a copy of a draft of this report to the Department of Defense for its review and comment. DOD partially agreed with two of our recommendations and disagreed with one recommendation and the matter for congressional consideration. DOD said that it partially agreed with our first recommendation that the Secretary of Defense develop guidance for the military services and the Corps that establishes a consistent approach for how factors other than relative risk should be considered in munitions response site sequencing decisions. DOD said that it will collect and evaluate information and lessons learned from the military services regarding their processes for sequencing munitions response sites. 
If DOD determines that additional guidance is necessary, DOD said it will develop specific sequencing protocols and issue further guidance to ensure consistency across the military services. However, DOD did not specify what additional information it needs to collect from the military services and the Corps to determine that they currently are taking different approaches to sequencing their sites for cleanup. Nor did DOD explain in its comments the need for providing the military services and the Corps the flexibility to develop different approaches to sequencing munitions response sites. Given that this flexibility could result in inconsistent processes for making sequencing decisions, we continue to believe that DOD needs to provide guidance to the military services and the Corps that establishes a consistent approach to sequencing. This guidance will ensure that the military services and the Corps not only use the Munitions Response Site Prioritization Protocol to assign site priorities in a consistent and transparent fashion, but also ensure that they consider the same range of other factors, in addition to relative risk, in making their decisions and assess the significance of those factors in a consistent way. DOD also partially concurred with our second recommendation, that DOD establish a goal of remedy in place or response complete for FUDS, as required by law, and interim goals at munitions response sites on active and BRAC 2005 installations and FUDS. DOD said that it did not concur with what it understood to be a separate part of the recommendation—to set a date for “completing cleanup” of FUDS. However, we did not intend to convey a further requirement beyond the remedy in place or response complete goal for FUDS, and we clarified the recommendation accordingly. 
DOD said that it will establish a remedy in place or response complete goal for munitions response sites at FUDS and will establish additional short-term interim goals for active and BRAC 2005 installations and FUDS once it has a better understanding of the nature and extent of cleanup requirements at these sites. However, DOD has not committed to a date by which it will establish these goals. We believe it is important for DOD to set these goals as soon as possible because, until it does so, Congress and the public will have less information with which to monitor the progress of cleanups at munitions response sites. DOD did not agree with our third recommendation to establish a process to ensure the completeness of site-level obligated funds data in its Knowledge-Based Corporate Reporting System database. DOD stated that it has procedures in place to plan, program, budget, and execute funds for cleanup actions at munitions response sites. DOD also said that it has information on obligated funds but that it is not typically available at the individual site level and is tracked outside of the Knowledge-Based Corporate Reporting System database. Although we recognize that DOD has these phase-level data in another database, we continue to believe that without site-level obligations data, DOD does not have the ability to compare the corresponding cost estimates to determine if they are accurate. In the absence of such a comparison, DOD or Congress may not be able to determine the accuracy of the military services’ and the Corps’ estimates of the costs to complete various phases of the cleanup process. Finally, DOD did not agree with our matter for congressional consideration that would require DOD to report in a separate category from its “response complete” sites in the Defense Environmental Programs Annual Report to Congress any sites that DOD determined did not require actual cleanup under the MMRP and were administratively closed. 
DOD said that it believes that all sites that complete the CERCLA process should be considered equal accomplishments whether they require a removal or remedial action or not. DOD also said that it believes it is misleading to characterize a site that achieves closure without an actual cleanup differently from one that has been cleaned up, and that this undermines the significant work and progress DOD has made. We recognize that DOD must conduct assessments and investigations to determine that no physical cleanup actions will be needed and that this process can require significant time and effort to complete. Nonetheless, we believe it is misleading to group administratively closed and actually cleaned up sites together because the actions DOD took to close those two types of sites are significantly different. Also, we do not believe that listing these sites in separate categories undermines the progress DOD has made. Rather, doing so will improve transparency and more clearly indicate the nature of the actions that DOD has taken to reach response complete for its munitions response sites. Consequently, we continue to believe that Congress may wish to consider requiring DOD to report sites that were administratively closed in a separate category from those sites requiring actual, physical cleanup. DOD also provided technical comments in an enclosure to its letter, which we have incorporated in this report as appropriate. DOD’s letter is included in appendix II. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. The National Defense Authorization Act for fiscal year 2009 mandates that we assess the (1) Military Munitions Response Program’s (MMRP) staffing and funding levels; (2) progress the Department of Defense (DOD) has made cleaning up munitions response sites; (3) extent to which DOD has established performance goals for the MMRP; and (4) extent to which DOD collects data on factors influencing project duration, as well as the accuracy of its cleanup cost estimates. In addressing these four objectives, we analyzed MMRP data for fiscal years 2001 through 2008 in DOD’s environmental programs management database—the Knowledge-Based Corporate Reporting System—and the Defense Environmental Programs Annual Reports to Congress for fiscal years 2002 through 2008. We assessed the reliability of the data for relevant variables in the Knowledge-Based Corporate Reporting System by electronically testing for obvious errors in accuracy and completeness. We also reviewed information about data verification, reporting, and security, and the systems that produced the data, and interviewed officials knowledgeable about the data. When we found inconsistencies in the data, we worked with the officials responsible for the data to clarify these inconsistencies before conducting our analyses. We determined that the data were sufficiently reliable for the purposes of providing descriptive information about the MMRP and for analyzing the duration of phases of the MMRP cleanup process. However, we found MMRP obligated funds data to be incomplete and therefore not suitable for analysis. We discuss this data reliability issue in more detail later in this appendix. In addition, we reviewed key laws, regulations, policies, and guidance from DOD, the military services (Army, Air Force, and Navy), and the U.S. Army Corps of Engineers (Corps). 
We visited one base realignment and closure (BRAC) installation (Fort Ord), one active installation (Beale Air Force Base), and one formerly used defense site (FUDS) (Camp Beale) to ensure we had the opportunity to review MMRP operations at active and BRAC installations and FUDS. We also interviewed headquarters and regional officials from the Environmental Protection Agency to discuss the MMRP. To assess the military services’ and the Corps’ MMRP staffing and funding levels, we spoke with senior officials from the Office of the Deputy Under Secretary of Defense (Installations and Environment), the military services, and the Corps who are knowledgeable about how MMRP staffing and funding levels are determined. In addition, we reviewed the Defense Environmental Programs Annual Reports to Congress for fiscal years 2002 through 2008 to determine funding obligated for the MMRP. To assess the progress DOD has made in cleaning up munitions response sites, we identified, as of the end of fiscal year 2008, how many sites DOD had administratively closed and how many had been actually cleaned up. We defined a site as administratively closed if, after investigating, DOD determined that it could safely close the site without taking remedial action. Specifically, we analyzed data in the Knowledge-Based Corporate Reporting System to identify sites that fit two criteria: (1) the “response complete” date matched the end date for the three investigative phases during which no remediation actions are taken (preliminary assessment, site inspection, and remedial investigation) and (2) no costs were reported in the remedial action construction or the remedial action operations phase. Senior officials from DOD, the military services, and the Corps agreed that these criteria would identify sites that had been closed without actual cleanup, which we have defined as being administratively closed. These criteria allowed us to identify 712 of the 1,318 sites DOD reported as having achieved response complete. 
However, we were unable to determine if any of the remaining 606 sites had been administratively closed because sites may have been administratively closed without the response complete date matching the end date of one of the investigative phases. Therefore, we asked the military services and the Corps to identify which sites they had administratively closed. The Air Force and the Navy were able to provide the information for their relatively small number of sites, but senior Army and Corps officials said they did not keep such information in a centralized database and it would take them too much time to gather it for their many sites. Instead, they provided us with the number of sites they had actually cleaned up and indicated that we could assume the remaining sites had been administratively closed. In addition, we assessed the progress the military services and the Corps have made in applying the Munitions Response Site Prioritization Protocol to generate relative priority scores for their sites by reviewing prioritization protocol data in the Knowledge-Based Corporate Reporting System. We considered a site to be scored if it was listed in the Knowledge-Based Corporate Reporting System as having a numerical relative priority score of one through eight or if it had been given the alternative designation of “no known or suspected hazard” as of the end of fiscal year 2008. We considered sites to not be scored if they had a designation of “evaluation pending” because this designation indicates that the military services or the Corps need more information to assign the site a relative priority score. We excluded from our analysis the 1,341 sites for which the military services and the Corps indicated that scoring was no longer required because DOD reported that most of these sites had already reached response complete. 
To assess the extent to which DOD has established performance goals for the MMRP, we reviewed the fiscal year 2007 National Defense Authorization Act, the Military Munitions Response Program Comprehensive Plan, and the fiscal year 2008 Defense Environmental Programs Annual Report to Congress. We also spoke with a senior official responsible for the MMRP from the Office of the Deputy Under Secretary of Defense (Installations and Environment) to determine the progress DOD has made in establishing performance goals. To assess the extent to which DOD collects data on factors influencing project duration, we reviewed and analyzed data from the Knowledge-Based Corporate Reporting System to determine the average length of time munitions response sites have been in the cleanup process. To determine project duration, we attempted to identify start and end dates for phases of the cleanup process for all 3,674 sites in the Knowledge-Based Corporate Reporting System. We measured project duration as the length of time between the earliest phase start date and the latest phase end date, calculated using both month and year information. Using this method, we were able to calculate project duration for 3,112 sites. We were unable to calculate project duration for 47 sites because they had no phase dates in the Knowledge-Based Corporate Reporting System. We did not calculate project duration for the remaining 515 sites because they had phase start and end dates prior to fiscal year 2001 (when the MMRP was established) and were therefore outside the scope of this review. Next, we analyzed site size and type to assess their relationship to project duration. To analyze site size, we divided the list of sites into three similarly sized categories: (1) small (less than 23 acres); (2) medium (between 23 and 649 acres); (3) large (650 acres or larger). We also created a fourth category for sites reported as zero acres or those with missing size data. 
Once we had assigned sites to a category, we were able to combine this analysis with our analysis on project duration to calculate the mean and median project duration for small, medium, and large sites. We reported the mean project duration in the report, and there was no substantive difference between the mean and median. We used the site-type data in the Knowledge-Based Corporate Reporting System to determine the relationship between project duration and type of hazard. We limited our analysis of site types to categories that included at least 5 percent of the total number of sites and then combined the remaining categories into an “other” category. This allowed us to analyze project duration for six site-type categories: (1) unexploded munitions and ordnance areas, (2) small arms ranges, (3) firing ranges, (4) explosive ordnance disposal areas, (5) other, and (6) unknown (i.e., information on site type was not available). Once we had determined these categories, we combined this analysis with our project duration analysis to calculate the mean and median project duration for each site type. We reported the mean project duration in the report, and there was no substantive difference between the mean and median. We also interviewed senior officials from the military services and the Corps to obtain their views on factors influencing project duration. To assess the accuracy of DOD’s cleanup cost estimates, we assessed the reliability of data on obligated funds in the Knowledge-Based Corporate Reporting System for fiscal years 2001 through 2008. We analyzed the data to determine the extent to which sites with reported activities in three phases of the cleanup process also included data on funds obligated for those activities. We restricted our analysis to the first three phases of the cleanup process—preliminary assessment, site inspection, and remedial investigation/feasibility study—because most munitions response sites are in one of these phases. 
To determine if we had a sufficient number of sites to conduct our analysis, we calculated the number of sites in each of the three phases that had obligated funds data. We found that over 10 percent of sites for all three phases were missing obligated funds data. Therefore, we concluded that the data were not sufficiently reliable to allow us to compare obligated funds to cost estimates for the sites in all three phases to determine the accuracy of the estimates. We conducted this performance audit from January 2009 to April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Stephen D. Secrist, Assistant Director; Leo G. Acosta; Elizabeth Beardsley; Mark Braza; Nancy Crothers; Pamela Davidson; Janida Grima; Amanda Leissoo; Laina Poon; and Kim Raheb made significant contributions to this report.

The Department of Defense (DOD) established the military munitions response program (MMRP) in 2001 to clean up sites known to be or suspected of being contaminated with military munitions and related hazardous substances. Cleanup of sites on active and base realignment and closure installations is the responsibility of the military service--Air Force, Army, Navy, or Marine Corps--that currently controls the land, and the Army has delegated execution of cleanup of formerly used defense sites (FUDS) to the U.S. Army Corps of Engineers (Corps). 
GAO was mandated to assess the (1) MMRP staffing and funding levels; (2) progress DOD has made in cleaning up munitions response sites; (3) extent to which DOD has established MMRP performance goals; and (4) extent to which DOD collects data on factors influencing project duration, as well as the accuracy of its cleanup cost estimates. GAO analyzed MMRP data and DOD documents and interviewed officials from DOD, the military services, and the Corps. The military services and the Corps do not track the time that staff work on MMRP activities separately from the time they spend on another environmental restoration program--the Installation Restoration Program (IRP). Consequently, it is not possible to determine the staffing levels for the MMRP. In addition, obligated funds for the MMRP increased from $95 million in fiscal year 2002 to approximately $284 million in fiscal year 2008, and the military services and the Corps directed 11 percent of their total MMRP and IRP environmental restoration funds to the MMRP during the period--a total of about $1.2 billion to the MMRP compared with $9.7 billion to the IRP. DOD reported to Congress that it had completed its cleanup response for 1,318 of its 3,674 sites by the end of fiscal year 2008; however, for 1,234 of these sites, DOD's response was an investigation that determined cleanup was not necessary. The remaining 84 sites were cleaned up because of such factors as imminent danger to public safety and pressing military mission and land reuse needs. In addition, the military services and the Corps are still in the process of gathering information necessary to prioritize most sites in the MMRP inventory for cleanup. When this process is complete, the military services and the Corps will consider this information along with other factors, such as land reuse plans, to determine which sites to clean up first. 
However, DOD has not issued guidance on how factors other than risk should be considered when making decisions about which sites to sequence first for cleanup, and the Air Force, the Army, and the Corps have begun to independently develop their own approaches. Using varying approaches could lead to inconsistent sequencing decisions. DOD has not yet established a performance goal for implementing the cleanup remedy (referred to as "remedy in place") or achieving the cleanup objective (referred to as "response complete") at munitions response sites located on FUDS, as required by the fiscal year 2007 National Defense Authorization Act. The act also directs DOD to report on interim goals it determines feasible for achieving the performance goals, but DOD has not yet done so. Performance goals are important because they are used to track progress toward cleaning up munitions response sites. By establishing goals, DOD would have better information with which to measure MMRP progress. DOD gathers data on two of the factors--site size and type of hazard--that can influence project duration at military munitions response sites. As would be expected, these data indicate that the larger the munitions response site and the more complex the type of hazard, the longer it takes to clean up the site. In addition, because data on funds obligated to complete specific phases of the cleanup process are not included in DOD's database for many munitions response sites, it is not possible to assess the accuracy of the military services' and the Corps' cost estimates for the MMRP. |
The Marine Plastic Pollution Research and Control Act of 1987 incorporates the provisions of MARPOL V that make it illegal for U.S. or foreign ships to discharge any plastics, including synthetic ropes, fishing nets, and plastic bags, into the ocean and other navigable waters. In 1989, the Coast Guard first promulgated regulations to enforce the MARPOL V provisions. These regulations specify that other forms of garbage, such as food waste and packing materials, may not be discharged within prescribed limits of U.S. shorelines. For U.S.-licensed boats above a certain size, the regulations require operators to post garbage discharge warning signs, maintain an approved waste management plan, and keep records of garbage disposal and discharges. For all vessels, U.S. and foreign, the regulations also set out the MARPOL V provisions against the illegal discharge of plastic and garbage, as well as the Coast Guard’s inspection procedures and possible penalties for infractions. The Coast Guard enforces compliance with MARPOL V mainly through its regional network of 47 marine safety offices. Enforcement personnel regularly inspect foreign and U.S.-licensed vessels for compliance with various safety and pollution regulations, including MARPOL V. If enforcement personnel find a violation of MARPOL V during their inspections, they document their findings and open an enforcement case on the Coast Guard’s computer system, the marine safety information system. The case is then forwarded to the district office, which reviews it for completeness and the sufficiency of evidence. If the district office finds the evidence to be sufficient, it forwards the case to one of three Coast Guard hearing offices for a civil penalty determination. In addition to the Coast Guard’s inspections, inspectors from the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) also inspect commercial ships. 
APHIS inspects a majority of ships arriving from foreign ports, typically within 24 hours of their arrival, for compliance with U.S. plant and animal health laws. While APHIS is not required by law to enforce MARPOL V, it agreed in 1990 to help the Coast Guard do so. If APHIS suspects that a violation of MARPOL V has occurred, APHIS inspectors are supposed to notify the local Coast Guard marine safety office. The Coast Guard intensified its enforcement of MARPOL V following congressional criticism in 1990 and 1992 of the Coast Guard’s lack of progress in implementing MARPOL V. More aggressive enforcement, according to Coast Guard officials, has resulted in a steady increase in the number of MARPOL V violations found by enforcement personnel. Even so, the Coast Guard’s enforcement efforts have been hampered by factors that affect the ability of its enforcement personnel to identify violations and adequately support their findings so that violators are penalized.

Number of MARPOL V Enforcement Cases Has Increased

The Coast Guard has identified an increasing number of violations of the MARPOL V regulations in recent years. In the first few years of the program, unless a violation was egregious, Coast Guard officials in the field said they often allowed violators to correct any problems and did not take enforcement actions. Since 1992, however, following increased congressional attention and aided by additional resources, the Coast Guard has strengthened the MARPOL V regulations and emphasized the need for personnel to be more aggressive in their enforcement. Accordingly, the number of enforcement cases involving violations of MARPOL V has increased from 16 in 1989, the first year of implementation, to 311 during 1994 (see fig. 1 and app. I for additional analysis of these violations). An increasing number of marine safety offices have initiated MARPOL V enforcement cases, indicating that the Coast Guard’s enforcement efforts are becoming more widespread. 
During 1991, two marine safety offices (New York and Corpus Christi) accounted for more than half of all enforcement cases, and only 20 of the 47 marine safety offices had initiated any cases. By 1994, the distribution of cases had become less concentrated—the five marine safety offices with the most cases accounted for more than half of all enforcement cases. Also, 33 marine safety offices had initiated at least one MARPOL V enforcement case. Of the 725 MARPOL V cases reported as of February 15, 1995, 69, or just under 10 percent, have resulted in the assessment of a penalty against the responsible party. The penalties ranged from a few hundred dollars to $50,000 and averaged almost $6,200 per case. However, 303, or 42 percent, of the cases submitted are still in process, including two-thirds of all the cases initiated in 1994. Because some of the 303 cases still in process will presumably also result in a penalty assessment, the percentage of cases resulting in a penalty for this period is likely to rise. However, whether the Coast Guard’s success in obtaining penalties is improving over time is not yet known because so many 1993 and 1994 cases are still being processed. The remaining enforcement cases—those not in process or not having resulted in a civil penalty—have been administratively settled by the Coast Guard. These actions include case closure or dismissal, the issuance of a warning letter, or a referral to the country where the ship is registered—a procedure known as flag state referral—for consideration of possible fines or other actions by another country. Several factors impede the Coast Guard’s efforts to identify violations and, once they are identified, efficiently and effectively process enforcement cases for a civil penalty determination.
Among these factors are (1) the inherent difficulty of enforcing MARPOL V, (2) the absence of a standardized MARPOL V inspection checklist, (3) diminished cooperation between the Coast Guard and APHIS, (4) inadequate feedback on case development, and (5) a burdensome and ineffective management information system. To cite a vessel for illegally discharging garbage or plastics, someone must see the event and report it, or the Coast Guard must develop strong evidence that such a discharge occurred. It is rare that Coast Guard personnel or others actually witness a vessel illegally disposing of plastics or other garbage. A Coast Guard hearing officer told us that unlike oil or hazardous waste discharges, discharges of plastics and other garbage usually do not leave a trail of evidence that can be traced to the offending party. According to Coast Guard officials in the field, should vessel operators knowingly choose to violate the MARPOL V discharge regulations, it is unlikely that they will be caught. Complicating enforcement efforts is the fact that one key component of the U.S. MARPOL V regulations—a requirement that vessels maintain garbage discharge records—does not apply to foreign-licensed ships. This difference is significant because a garbage discharge record is one of the key items that the Coast Guard uses as evidence in enforcement cases to prove that an illegal discharge has occurred. Equally significant, the Coast Guard can enforce MARPOL V for foreign vessels only within the U.S. Exclusive Economic Zone. Demonstrating that a vessel discharged garbage at sea is difficult; proving that it occurred within U.S. jurisdiction is even more difficult. Because eyewitness accounts of illegal discharges are infrequent, the Coast Guard often must develop circumstantial evidence that would lead to a prima facie determination that a discharge violation had occurred.
Proving that a violation has occurred on the basis of circumstantial evidence is not easy and requires Coast Guard personnel to conduct a thorough and methodical investigation while on board a vessel. This includes gathering statements, checking the ship’s food storage and garbage disposal areas, taking photographs, and examining logbooks and other records. To help personnel enforce a wide variety of safety and pollution regulations, the Coast Guard relies extensively on standardized inspection checklists. The Coast Guard recognizes the importance of these checklists in helping to identify violations and develop sufficient evidence to support an enforcement action. However, according to the Chief of the Marine Environmental Protection Division, a standardized checklist covering the MARPOL V portion of vessel inspections has not been developed because of competing priorities. In our visits to marine inspection offices, we found a variety of boarding checklists. For the inspections of foreign vessels, each office had devised its own checklist, which ranged from lists with a single reference to MARPOL V to lists containing several pages of questions and guidance. For inspections of U.S. vessels and foreign passenger ships, the Coast Guard provides standardized inspection booklets to its inspectors. However, because these booklets predate MARPOL V, they do not include any reference to MARPOL V. In some offices, these booklets have been updated to remind inspectors to check on compliance with MARPOL V, but updates have not been standardized among marine safety offices. The absence of a standard inspection checklist for MARPOL V can hinder the ability of enforcement personnel to identify violations and then develop sufficient evidence to support an enforcement action. 
During one inspection we witnessed, for example, the port safety officer used a checklist with only a single reference to MARPOL V and did not identify violations that his superior later acknowledged should have been cited. In other marine safety offices, where more extensive checklists were used, we saw more thorough examinations, involving extensive inspections of food storage, food preparation, and garbage disposal areas and a detailed questioning of the crew on garbage disposal practices. Coast Guard officials responsible for the MARPOL V program agreed that the inspection checklist should be standardized and, on the basis of our findings, told us that they will initiate steps to develop one. The extent of cooperation between the Coast Guard and APHIS has varied in some locations. During the first years of MARPOL V enforcement, APHIS was an important source for identifying MARPOL V violations. Now, however, cutbacks in APHIS’ funding and uncertainties about the extent of APHIS’ role in MARPOL V inspections have diminished cooperation between the agencies in some locations. On three separate occasions beginning in 1990, APHIS headquarters has directed its field units to cooperate with the Coast Guard in identifying MARPOL V violations. APHIS headquarters provided criteria for its field units to use as a basis for forwarding copies of their inspection reports to the appropriate Coast Guard marine safety office for possible action. During our visits to marine safety offices, we found instances of substantial cooperation between the two agencies that had resulted in numerous enforcement cases in recent years. For example, in one marine safety office, the two agencies had conducted joint training: APHIS had instructed the Coast Guard on how to identify Asian Gypsy Moths, and the Coast Guard had provided MARPOL V training to APHIS. In other instances, we found that the two agencies had little or limited contact. 
For example, in one West Coast marine safety office, officials said that they tried for several years to develop a relationship—for example, offering MARPOL V training—with their local APHIS counterparts but were unsuccessful. A senior APHIS official said that some Coast Guard units have asked APHIS personnel to participate in joint boardings and safety inspections, which are beyond what APHIS has agreed to do. In another location on the East Coast, we found that a local APHIS office was mailing its inspection forms with suspected MARPOL V violations to the Coast Guard weeks after the ships had left port. At our suggestion, APHIS began faxing copies of inspection reports to the local Coast Guard units on the same day; this practice will allow the Coast Guard to inspect vessels suspected of violating MARPOL V before the vessels leave port. Coast Guard enforcement personnel told us that they believed that the “personalities” of the local officials involved, coupled with the fact that APHIS has no formal or regulatory responsibility to enforce MARPOL V, are key factors that have contributed to the poor cooperation in some locations. Efforts to formalize the nature and extent of cooperation between the two agencies have thus far been unsuccessful. For example, beginning in 1993 the Coast Guard sought to develop a memorandum of understanding with APHIS on this issue; however, agreement between the two agencies has still not been achieved. A senior APHIS official told us that he believes that a formal agreement is too “bureaucratic” and is not necessary in this instance. Even without an agreement, the Chief of the Coast Guard’s Marine Environmental Protection Division said that the Coast Guard will seek to identify practices found in locations where a productive relationship exists and apply them to those locations where cooperation has been more limited. 
Clear feedback from Coast Guard hearing officers can provide important information for enforcement personnel in the field on how to develop sound civil penalty cases that are technically correct and include complete evidence. If their cases are well prepared, enforcement personnel can better ensure that cases are not dismissed for technicalities, that proper civil penalties are assessed, and that enforcement time is not wasted. To improve the general quality of cases forwarded for civil penalty proceedings, the Coast Guard’s guidance requires hearing officers to notify district managers about the final action taken in each case. This notification should include the rationale the hearing officer used in reaching the decision, according to the hearing officers’ program manager. However, district offices are not required to forward the hearing officers’ feedback to local units. Enforcement personnel with whom we talked expressed frustration and confusion about why many MARPOL V cases are dismissed or why civil penalties are reduced. Twenty-two percent of all enforcement cases for the period from October 1, 1991, to December 31, 1994, were closed or dismissed by the Coast Guard without any enforcement action. Also, hearing offices’ data indicate that for fiscal years 1992-94, the final penalty assessed by the hearing office averaged less than half the average amount recommended to the hearing office. Enforcement personnel commented that often they receive untimely and/or insufficient feedback or rationale from the hearing officers or district program managers; therefore, the enforcement personnel learn little from the cases that can be applied to improve future submissions. For example, enforcement personnel in several marine safety offices said that it often takes months for their district to pass on to local units information from hearing officers, reducing the ability of unit personnel to learn from the feedback. 
At another marine safety office, we were told that the unit did not use case file information as a source of feedback because it was so old by the time it was returned. Also, hearing officers do not always provide a rationale for their decisions, according to hearing officers in two different offices. In cases in which good feedback has been provided, better case preparation has occurred. For example, a hearing officer told us about one marine safety office that had greatly improved the quality of its cases and the corresponding success rate for adjudicating MARPOL V violations. Enforcement personnel at this office told us that the key to the improvement in the quality of its cases stemmed from following the feedback that the office had received from its earlier cases and from the subsequent training that its enforcement personnel received. The hearing officers’ program manager acknowledged that sometimes cases are dismissed or civil penalties reduced because of incomplete case development and technicalities. She indicated that there is a need for good feedback and better case preparation guidance in general; however, because of other higher priorities, no such guidance has been prepared. Hearing officers with whom we spoke are reluctant to provide feedback on specific cases to the districts or units because of the importance of maintaining their neutrality as an adjudicator and avoiding the appearance of assisting in the prosecution of a particular case. They were amenable, however, to providing more general guidance or training on good case preparation techniques. In fact, some hearing officers said they occasionally visit districts to educate enforcement personnel on this subject, although the frequency of such visits varies. The Coast Guard’s marine safety information system, the system used to collect and analyze the MARPOL V enforcement data, was frequently cited by Coast Guard officials as burdensome and ineffectual. 
We previously reported on problems with this system, such as hardware and software failures, untimely and inaccurate information, and user “unfriendliness.” The effect of these problems on enforcing MARPOL is threefold. First, time spent trying to input data (as much as 10 hours for each violation) takes time away from inspecting ships. Second, program managers do not have access to data when they need them in order to monitor or evaluate the performance of marine safety offices. Collecting needed data by other means can be a laborious process, resulting in the ineffective use of staff at the unit level. For example, from March 1992 until October 1994, the Coast Guard—in an effort to collect accurate data and provide feedback to the field—tasked each marine safety office to manually collect enforcement data separately from the system. These data were reported monthly to program managers in Coast Guard headquarters, who otherwise would have had to wait 4 to 6 months for the system to report the same information. Third, the system does not include complete data on enforcement cases generated by Coast Guard personnel outside of marine safety offices, such as those cases recorded by small boat station personnel. In our view, this situation makes coordination among various Coast Guard units more difficult to achieve in enforcing MARPOL V. The Coast Guard is now completing some improvements to the system that it hopes will overcome the obstacles discussed above. According to the Chief of the Coast Guard’s Marine Environmental Protection Division, improving the MARPOL component of the data system has been a high priority in the Coast Guard and is expected to reduce input time and speed data collection for MARPOL violations. In addition, a contract was recently let to begin the development of the Coast Guard’s next management information system, according to Coast Guard headquarters officials. However, this new system will not be in operation for at least 2 years.
For fiscal year 1991, the Senate Committee on Appropriations provided for 100 positions for pollution prevention activities. The Coast Guard designated 85 of these positions as “MARPOL investigator” or “coastal pollution enforcement” positions and allocated the remaining 15 positions as a support and training allowance. Coast Guard documents indicate that all but one of these positions were filled during 1991 and 1992. According to Coast Guard officials in headquarters and the field, MARPOL enforcement efforts are better spread among a number of personnel in each marine safety office rather than limited to the 85 designated enforcement personnel. As a result, the designated MARPOL personnel do not spend their time exclusively on MARPOL activities. We found that some spend less than half their time on MARPOL activities, while other personnel also perform MARPOL-related duties. However, we were unable to determine how much time, in the aggregate, the Coast Guard spends on MARPOL-related activities because its personnel do not regularly record their MARPOL-related activities. For example, MARPOL-related time charges actually reported by the marine safety information system for the 1-year period from July 1, 1993, to June 30, 1994, totaled just less than 12,500 hours (or a little over the work time of seven full-time equivalent positions). However, according to Coast Guard officials familiar with the system, few personnel strictly account for MARPOL time charges because such accounting has not been required and is viewed by personnel as burdensome. The Coast Guard believes a better estimate of the time devoted to MARPOL each year, including all enforcement and education activities, is about that of 61 to 66 full-time equivalent staff. However, the reliability of this estimate is suspect because it is based on an extrapolation of the time charges of just one marine safety office. 
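The hours-to-FTE conversion cited above can be sketched as follows. The 1,776 productive hours per FTE-year is our illustrative assumption (the report does not state the divisor used), chosen simply because it reproduces the "a little over seven" figure:

```python
# Hypothetical hours-to-FTE conversion; hours_per_fte is an assumed value,
# not taken from the report.
reported_hours = 12_500      # MARPOL-related time charges, July 1993-June 1994
hours_per_fte = 1_776        # assumed productive work hours per FTE-year

fte = reported_hours / hours_per_fte
print(f"{fte:.1f}")          # about 7 FTEs ("a little over seven")
```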
There is no assurance that the situation at this office is representative of MARPOL enforcement activities at marine safety offices as a whole, particularly since we noted considerable differences in the level of MARPOL V enforcement activities among the offices we visited. The Coast Guard has determined that enforcement alone will not achieve compliance with MARPOL V, and enforcement for some sectors of the marine community is not viable. Therefore, the Coast Guard has embarked on an education and outreach effort to improve compliance. Since the early 1990s, the Coast Guard has conducted MARPOL education. Initially, education was focused on informing the maritime industry about the new MARPOL regulations as an extension of the Coast Guard’s enforcement activities—for example, handing out pamphlets and stickers to commercial shippers and port facility managers. The emphasis has now expanded to educate recreational boaters and the commercial fishing industry about maritime pollution. Because of the high numbers and dispersion of recreational boats and fishing vessels, enforcement through inspections, patrols, or similar means is nearly impossible. Education appears to be a reasonable strategy for this group and one supported by the Center for Marine Conservation. In March 1994, the Coast Guard’s existing education and outreach efforts were increased through a $1.28 million grant from the Department of Defense’s Civil Military Cooperation Action Program to begin a pilot program. The pilot, known as the SeaPartners Campaign, received another grant of $1.7 million for fiscal year 1995. The Coast Guard has applied for grant money again for fiscal year 1996, the last year that the pilot is eligible under this grant program. After 1996, the Coast Guard plans to fund the campaign internally. The Coast Guard initiated the SeaPartners Campaign with a wide range of activities. 
To design the campaign, the Coast Guard worked with the Center for Marine Conservation as well as many federal, state, and local agencies. In June 1994, Coast Guard headquarters sponsored training for active-duty personnel and reservists to help them undertake public outreach and education at their home units. Following the training, the participants returned to their home units and began educating a wide range of audiences, from other Coast Guard personnel to grade school children, on marine pollution. The Coast Guard estimates that by September 1994, the campaign had reached about 175,000 people through 1,180 separate activities in various parts of the country. During our visits to marine safety offices, we noted considerable support and enthusiasm for the SeaPartners program among Coast Guard personnel. The offices were supporting a wide variety of educational activities, and personnel said that they were receiving positive feedback from the public. Just how the program is contributing to MARPOL V enforcement is unknown, however, because no good measure of this has been developed. In 1994, the Coast Guard commissioned an outside evaluation of SeaPartners. The evaluation determined that while the campaign generated considerable activity, its mission needed to be clarified and its activities needed to be better targeted. While concluding that the pilot had great potential to make substantial contributions to protecting the marine environment, the report noted that SeaPartners was so broadly defined that there were “misperceptions, confusion, and a lack of common understanding about the program’s goals, objectives, and mission and appropriate ways to achieve the program’s intended outcome.” The report made 31 recommendations on ways to strengthen public outreach. The Coast Guard agreed with the report’s recommendations and has revised its strategy for SeaPartners in fiscal year 1995. 
The new strategy clarifies SeaPartners’ mission, targets activities toward more traditional port community audiences, and develops ways to measure the campaign’s effects. The Coast Guard has made progress in its enforcement of MARPOL V through heightened awareness at the unit level and the development of a broader-based education and outreach program. It still faces a number of formidable obstacles to further enhance enforcement in this area, however. We believe that improving the ability of its personnel to identify violations and better substantiate them in their enforcement actions is critical to achieving this end. Doing so would involve improving procedures for vessel inspections, establishing a better working relationship with APHIS, and providing useful and timely feedback to units. It is also important that the Coast Guard continue its efforts to improve its education and outreach program for MARPOL V and its management information system used to monitor the performance of field units in achieving MARPOL V enforcement.

We recommend that the Secretary of Transportation direct the Commandant of the U.S. Coast Guard to do the following:

Develop and put into force a standardized MARPOL V inspection checklist for use by its enforcement personnel. Doing so will improve the Coast Guard’s ability to identify and properly document violations.

Develop procedures to ensure that case feedback from hearing officers, including the rationale for decisions made, is provided to districts and forwarded to local units in a timely manner.

Explore with the Administrator of APHIS areas of mutual interest and ways to improve cooperation between the U.S. Coast Guard and APHIS on enforcing MARPOL V.

We provided copies of a draft of this report to the Coast Guard and the Animal and Plant Health Inspection Service, U.S. Department of Agriculture, for their comments.
We discussed the information in the draft report with Coast Guard officials, including the Chief of the Marine Environmental Protection Division. We also discussed the draft report with the Assistant to the Deputy Administrator, Plant Protection and Quarantine Service, Animal and Plant Health Inspection Service. These officials agreed with the facts as presented, but the Coast Guard contended that the content and tone of the draft did not give the Coast Guard adequate credit for the positive results that it has achieved and the efforts that it has made to improve the program. We have modified the final report where appropriate to recognize improvements to the program. The Coast Guard agreed with our recommendations for a standardized checklist and procedures to ensure case feedback from hearing officers. It disagreed with our proposed recommendation that the Secretaries of Transportation and Agriculture should intercede, if necessary, to ensure cooperation between the Coast Guard and APHIS. We agree that cooperation could be sought at a lower level and have revised the recommendation to encourage the Commandant of the Coast Guard and the Administrator of APHIS to explore ways to improve their cooperation. We conducted our work between June 1994 and April 1995 in accordance with generally accepted government auditing standards. During that time, we contacted Coast Guard field and headquarters offices, met with interested outside parties, and analyzed the Coast Guard’s violation data. Details of our scope and methodology are provided in appendix II. As agreed, unless you publicly announce its contents earlier, we plan no further distribution of the report until 7 days from the date of this letter. At that time, we will send copies to the Secretary of Transportation; the Commandant, U.S. Coast Guard; the Secretary of Agriculture; the Administrator of the Animal and Plant Health Inspection Service, U.S. 
Department of Agriculture; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. Please call me at (202) 512-2834 if you have questions. Major contributors to this report are listed in appendix III.

Our analysis of MARPOL V enforcement data is drawn from two data sets. The first set consists of enforcement data from cases initiated by marine safety offices (MSOs). The second set consists of data for only those cases processed by hearing offices for civil penalty determination, but from all sources, including law enforcement and boating safety personnel. MSOs’ MARPOL V enforcement data span the period from October 1, 1991, to December 31, 1994 (fiscal years 1992, 1993, and 1994 and the first quarter of 1995). A total of 725 enforcement cases were reported during this period. The sections below discuss the status of these cases as of February 15, 1995. The number of enforcement cases initiated by MSOs has generally followed an upward trend (see fig. I.1). The main exception was in the third and fourth quarters of 1993, when the number of cases initiated fell before starting back up in 1994. Final action has been taken on 422 of the 725 enforcement cases (58 percent), while 303 remain in process. The greatest portion of completed cases (157 cases) were administratively closed by the MSO or district offices, or dismissed by the hearing office for insufficient evidence (see fig. I.2). Another 129 cases were referred to the responsible party’s flag state for action because U.S. jurisdiction could not be proven or was not applied. In only 9 out of the 129 referrals did the flag state ultimately fine the responsible party. A State Department official told us that flag state referrals are typically marginal cases that are short on evidence. Another 67 enforcement cases were closed with a warning letter to the responsible party, while 69 cases resulted in a penalty. 
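The case-disposition figures reported here and in the body of this report are internally consistent; a quick arithmetic check (the counts are from this report, the rounding is ours):

```python
# Disposition of the 725 MSO enforcement cases as of February 15, 1995
# (counts from this report).
closed_or_dismissed = 157
flag_state_referrals = 129
warning_letters = 67
penalties = 69
in_process = 303

final_actions = closed_or_dismissed + flag_state_referrals + warning_letters + penalties
print(final_actions)                 # 422 cases with final action
print(final_actions + in_process)    # 725 cases in all
print(f"{final_actions / 725:.0%}")  # 58 percent completed
print(f"{penalties / 725:.1%}")      # just under 10 percent resulted in a penalty
print(f"{in_process / 725:.0%}")     # 42 percent still in process
```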
An analysis of the 69 enforcement cases that have thus far resulted in a penalty shows that the average penalty has generally risen from $4,250 in 1989 to $8,750 in 1994 (see fig. I.3). The increasing number of enforcement cases from 1989 to 1992 caused the total amount of penalties to increase. The drop in total penalties after 1992, shown in figure I.3, reflects the fact that a high percentage of cases initiated in 1993 and 1994 are still in process. Enforcement cases are not distributed evenly among district offices, as figure I.4 indicates. District 8, which includes MSOs bordering the Gulf of Mexico, has accounted for more than one-fourth of all cases. Districts 7 (Southeastern U.S. and Puerto Rico) and 14 (Hawaii and Guam) together have accounted for over 30 percent. Districts on the East (1 and 5) and West Coasts (11 and 13) have accounted for comparatively fewer, while inland districts (2 and 9) have accounted for the fewest number of cases. Most of the enforcement cases, as shown in figure I.5, were based on violations of regulations prohibiting the discharge of plastic or garbage. Less serious infractions, such as failure to post a garbage sign or maintain a waste management plan, have been cited less frequently. Civil penalties were assessed more often for cases where garbage or plastic was discharged. Of the 69 enforcement cases that resulted in a penalty, 84 percent (58 violations) were discharge cases. These 58 discharge violations also accounted for 98 percent of the total penalty dollars assessed. A majority of the enforcement cases involved ships licensed (or “flagged”) in other countries, although as figure I.6 also shows, 4 out of 10 cases were for U.S.-licensed vessels. The Coast Guard conducts more inspections of U.S. ships than foreign ships—38,303 boardings of U.S. ships versus 16,021 boardings of foreign ships in fiscal year 1993. 
Vessels flagged by Panama, Liberia, and the Bahamas, often referred to as flags of convenience because the owners are not citizens of the flag state, accounted for the highest percentages of foreign-flag enforcement cases. The Coast Guard’s Division of Maritime and International Law maintains its own set of MARPOL V enforcement data drawn from the marine safety information system. The division provided us with data covering all cases submitted to the Coast Guard’s three hearing offices in fiscal years 1992 through 1994. Legal staff use these data to monitor hearing offices’ disposition of civil penalty cases. Unlike the MSOs’ data, which are organized by enforcement case, these data are organized by citation charge, that is, the section of the regulation found to be in noncompliance. Some enforcement cases may involve more than one charge. For each charge, the data include the civil penalty amounts recommended to the hearing office by the district program managers, preliminary civil penalty assessment amounts set by hearing officers prior to a civil penalty hearing, and final civil penalty assessment amounts. Hearing officers may decide, on the basis of the evidence, to dismiss a charge, issue a warning letter, or impose a final penalty. In some instances, the party may decide to pay the preliminary penalty amount rather than going through the hearing process. We focused our analysis on those charges that hearing officers had closed in fiscal years 1992 through 1994, omitting any that were still in process. In all, 928 charges were resolved during the 3-year period. Of these, just over half resulted in a dismissal or warning letter (see fig. I.7). Of the remainder, equal percentages of charges (24 percent) were settled through a preliminary assessment or a final penalty assessment. 
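The charge-level percentages above follow directly from the counts shown in figure I.7; a quick check:

```python
# Disposition of the 928 charges resolved in fiscal years 1992-94
# (counts from fig. I.7 of this report).
warning_letters = 379
dismissed = 106
preliminary_assessments = 223
final_assessments = 220

total = warning_letters + dismissed + preliminary_assessments + final_assessments
print(total)                                           # 928 charges
print(f"{(warning_letters + dismissed) / total:.0%}")  # just over half
print(f"{preliminary_assessments / total:.0%}")        # 24 percent
print(f"{final_assessments / total:.0%}")              # 24 percent
```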
(Fig. I.7 data: warning letter, 379 charges; preliminary assessment, 223; final assessment, 220; dismissed, 106.)

Total recommended and preliminary civil penalty amounts increased substantially during fiscal years 1992-94 (see fig. I.8). In fiscal year 1994, for example, units recommended $753,746 in total civil penalties, a 73 percent increase over the previous year. By comparison, the final civil penalties assessed have not increased as dramatically—indeed, totals declined somewhat for fiscal year 1994. Examining the average civil penalty amounts for just the charges in which a penalty was imposed (excluding charges resulting in dismissal or a warning letter) also shows a substantial decline between recommended and final civil penalty amounts for fiscal years 1992-94 (see fig. I.9). For the 3-year period, the average final civil penalty was more than 50 percent less than the average penalty recommended to the hearing office. Hearing officers said that units establish civil penalty amounts strictly based on guidance from headquarters and without knowing the violator’s side of the story. The hearing officers, after reviewing rebuttals and other information from the vessel operator or owner, frequently reduce a civil penalty based on a much broader knowledge base than unit personnel have when they initially recommend a civil penalty.

To evaluate the Coast Guard’s efforts to enforce MARPOL V, we conducted work at Coast Guard headquarters and at numerous field locations. At Coast Guard headquarters in Washington, D.C., we interviewed and obtained documents from program managers in the Office of Marine Safety, Security, and Environmental Protection, the office responsible for implementing the Coast Guard’s MARPOL V program. 
We also met with Coast Guard officials in the Office of Chief Counsel, the Office of Navigation Safety and Waterway Services, and the Office of Law Enforcement and Defense Operations, which also are charged with enforcement responsibility. In the field, we visited 4 of the Coast Guard’s 10 district offices to understand their role in enforcement and all three of the Coast Guard’s hearing offices (Atlantic North, Atlantic South, and Pacific Area) to discuss the civil penalty process. We also visited 9 of the Coast Guard’s 47 MSOs, which are responsible for enforcing MARPOL V in U.S. ports. The MSOs we visited were judgmentally selected on the basis of MARPOL V case activity (high and low activity) and to achieve a broad geographical representation among offices on the East, West, and Gulf Coasts. At the MSOs, we participated in vessel inspections in addition to meeting with enforcement personnel. Table II.1 provides a list of the MSO and district offices visited as part of our review; the district offices were District 1 (Boston), District 8 (New Orleans), District 11 (Los Angeles/Long Beach), and District 13 (Seattle). A significant part of our evaluation consisted of analyzing the Coast Guard’s enforcement case data. Our analysis was based largely on data from the Coast Guard’s marine safety information system on MARPOL V enforcement cases initiated by MSOs for the period from October 1, 1991, through December 31, 1994. These data included the date, MSO, ship name, type of vessel, licensing country, type(s) of violation(s), current status and, if applicable, penalty amount for each violation case. Another set of MARPOL V data came from the Office of Chief Counsel and included only those enforcement cases that reached the hearing office for civil penalty determination in fiscal years 1992 through 1994. While excluding the significant number of cases that were closed or referred elsewhere before reaching the hearing office, it included cases reported by law enforcement and boating safety personnel.
We did not audit the accuracy of any of the Coast Guard’s enforcement data, although we did attempt to eliminate duplicate entries and erroneous entries. Appendix I discusses the analysis of each of these sets of data. In addition to our work at the Coast Guard, we met with officials from other federal agencies and outside entities familiar with MARPOL V. We interviewed officials from the Department of Agriculture’s Animal and Plant Health Inspection Service, the Environmental Protection Agency, the Center for Marine Conservation, and the National Marine Board. We also reviewed reports on the MARPOL V program by the Department of Transportation’s Inspector General, outside consultants, and congressional committees. To assess the Coast Guard’s utilization of MARPOL personnel resources, we determined if the Coast Guard had assigned personnel to these dedicated positions and, to the extent possible, the range of their duties and responsibilities. We examined the Coast Guard’s records for indications of the amount of time spent on MARPOL-related activities. To describe the Coast Guard’s educational and outreach efforts pertaining to MARPOL V, we met with Coast Guard officials in headquarters and in the field. We identified the Coast Guard’s strategy for this effort and how it was being implemented. At the nine MSOs we visited, we reviewed the specific actions being taken in their educational outreach. We also discussed the Coast Guard’s efforts with the Center for Marine Conservation, which has been active in this area for many years. We reviewed an outside consultant’s report on the education program, discussed its findings and recommendations with responsible Coast Guard officials, and ascertained what the Coast Guard was doing in response. Resources, Community, and Economic Development Division contributors: Paul Aussendorf, Gerald Dillingham, Steve Gazda, Dawn Hoff, Stan Stenersen, Charles Sylvis, and Randy Williamson. Pursuant to a congressional request, GAO provided information on U.S. participation in the International Convention for the Prevention of Pollution from Ships (MARPOL V), which provides for the mitigation of uncontrolled ocean dumping of garbage and plastics, focusing on: (1) the Coast Guard's progress in implementing MARPOL V; (2) whether enforcement personnel are being utilized for MARPOL-related purposes; and (3) the Coast Guard's educational and outreach efforts to improve compliance with MARPOL V.
GAO found that: (1) the Coast Guard stepped up its MARPOL V enforcement efforts after congressional criticism in 1990 and 1992; (2) the number of cases involving MARPOL V violations has steadily increased from 16 in 1989 to 311 in 1994; (3) fewer than 10 percent of all cases have resulted in any penalties assessed to the violator, although many cases are still being processed; (4) although there are no accurate means to determine whether the Coast Guard is fully utilizing the additional resources that Congress provided for enforcing MARPOL V, nearly all the designated enforcement positions are filled; (5) in 1994, the Coast Guard's education and outreach efforts for MARPOL V expanded from targeting commercial shippers to other groups, such as boaters and fishermen; (6) the Coast Guard initiated the SeaPartners program to provide marine education to various participants, but it is unknown how the program will contribute to MARPOL V; and (7) the Coast Guard has revised its strategy for SeaPartners in fiscal year 1995 to clarify its mission.
While PNRS, NCIIP, and CBI all provided federal funds for transportation infrastructure projects, they differed somewhat in their goals, methods used for selecting projects, and methods used for distributing the federal funds to states, as indicated in table 1. (See app. V for a list and description of the 153 projects funded by the three programs.) SAFETEA-LU authorized different funding levels for the three programs in each fiscal year of the 5-year authorization period, as shown in table 2; however, the amounts ultimately distributed to the states for those years were adjusted downward for several reasons. The funds authorized for these programs, which come from the federal Highway Trust Fund, represent funds that can be made available to the Secretary of Transportation, acting through FHWA, to carry out these programs. These funds are subject to limitation through the annual appropriations process, and deductions may be made for rescissions, among other things. In fiscal years 2005 and 2006, the funding for these three programs was 14 percent less than the authorizations for those 2 years, and in fiscal years 2007 and 2008, the funding was 8 percent less than the authorizations for those years. After funds are allocated for these programs and FHWA has reviewed project documentation for completeness and consistency with congressional language, funds may be obligated, or set aside, for the projects. These three programs, like most federal-aid highway programs, distribute federal funds by reimbursement to the states. States spend other funds for eligible project expenses and submit claims to FHWA for review and approval before they receive the federal funds under these programs as reimbursement. Before federal funds are distributed to a state for a project under these three programs, the state must submit a proposal for a PNRS or an NCIIP project, or a project eligibility form for a CBI project, to FHWA.
FHWA compares information about the project against the project description included in SAFETEA-LU for PNRS or NCIIP projects and against eligibility criteria as defined in SAFETEA-LU for CBI projects. In addition, FHWA follows the normal steps for reviewing a project application for the use of federal-aid highway program funds. For example, FHWA ensures that the state agrees to apply federal laws as a condition of receiving funds under these and other federal-aid highway programs, such as the environmental assessment provisions of the National Environmental Policy Act (NEPA) and the Davis-Bacon Act’s prevailing wage requirements. SAFETEA-LU directed all of the PNRS and NCIIP funds to specific projects. SAFETEA-LU also contained other provisions that set forth a criteria-based, competitive process for selecting PNRS and NCIIP projects; however, this process was superseded by the congressional directives. According to SAFETEA-LU’s competitive process, PNRS projects selected for federal funding were to have national and regional significance and benefits that the act described as improving economic productivity by facilitating international trade and relieving congestion, among other things. The criteria for selecting NCIIP projects were that they be located in “corridors of national significance” and that their selection be based on the extent to which a corridor links two existing segments of the Interstate System, is able to facilitate major multistate or regional mobility, and promotes economic growth. Additional criteria for NCIIP funding included the value of commercial vehicle traffic cargo in the corridor and economic costs arising from congestion. Federal funds distributed through the CBI program to states had to be used generally for infrastructure or operational improvements on highways within 100 miles of a border with Canada or Mexico. 
In addition, states can transfer up to 15 percent or $5 million (whichever is less) of the state’s yearly amount of CBI funds to the General Services Administration (GSA), which owns and leases facilities at U.S. land border ports of entry. GSA can use these funds for CBI-eligible projects on its property. Border states can also propose to use CBI funds on projects located in Canada or Mexico that facilitate cross-border movement at an international port of entry in the border region of the state. States established goals for their projects to address capacity, congestion, economic and safety issues. According to the latest data available from FHWA, most PNRS, NCIIP, and CBI projects had been reviewed by FHWA, and funds had been distributed to states; however, some states had not initiated efforts to obtain federal funds for their projects under these programs. The federal contributions to estimated total project costs varied by program. States have used the program funds mainly for highway projects, although some rail and intermodal projects were funded under PNRS. Furthermore, states have used the project funds for various activities and purposes. The 14 states we reviewed established a variety of goals for the national and regional projects funded by the three programs. In broad terms, these goals included increasing transportation capacity, enhancing passenger and freight mobility, reducing congestion, promoting economic development, and improving safety. Table 3 identifies more detailed goals for some projects. As of December 2, 2008, FHWA had received project descriptions for and reviewed and distributed funds for most of the projects funded by congressional directive (46 of 55 projects) under PNRS and NCIIP, as shown in table 4. As of September 30, 2008, 14 of 15 border states had initiated efforts to obtain CBI funds by submitting required descriptions of proposed projects to FHWA. These 14 states had received funds for 98 CBI projects. 
Since SAFETEA-LU was passed in August 2005, FHWA has distributed most of the funds appropriated for these programs to the states for use on reviewed projects; however, FHWA has set aside, or obligated, only a portion of these funds for specific projects. As shown in table 5, as of September 30, 2008, FHWA had obligated nearly $1.2 billion, or about 33 percent of the $3.6 billion authorized under the three programs through that period. Although FHWA has obligated about a third of the authorized funds for reviewed projects, many of these projects are generally still in preliminary stages. As we have previously reported, FHWA has determined that it typically takes from 9 to 19 years to plan, gain approval for, and construct a new, major, federally funded highway project that has significant environmental impacts. As many as 200 major steps can be involved in developing such a project, from identifying the need for it to starting construction. While states have submitted complete project descriptions to FHWA for most projects and have received funds for them, some states have not done so, including the following: Three states had not submitted descriptions or requested funds for 3 of the 24 PNRS projects, as of December 2, 2008. Transportation officials in Michigan and Minnesota told us they were waiting to complete the environmental impact statement before submitting a project description and requesting PNRS funds for 2 of these projects (Blue Water Bridge Border/Port Huron Plaza project in Michigan and the Union Depot Multimodal Transit Facility in Minnesota). FHWA also did not receive a project description for the PNRS project involving improvements to I-80 in Pennsylvania. Three states and the District of Columbia had not submitted project descriptions or requested funds for 4 of 31 NCIIP projects, as of December 2, 2008. Officials we interviewed in two of those states offered varied reasons for not using the funds. 
For example, Arizona DOT officials said they did not submit a description for the State Route 85 project because they were trying to identify an appropriate project segment that could meet the NCIIP funding criteria. Wisconsin DOT officials told us they had not yet requested the NCIIP funds for the U.S. 41 project since the NCIIP funds do not have to be used by a specified date. In addition, FHWA has not received NCIIP project descriptions for the Frederick Douglas Memorial Bridge in the District of Columbia and I-80 improvements in Indiana. One of 15 border states (New Hampshire) had not used any of its distribution of CBI funds, as of September 30, 2008. An FHWA official told us that New Hampshire has only one border crossing, and it is not always open; therefore, the New Hampshire DOT is trying to identify a suitable project that meets CBI funding criteria. The federal share of contributions relative to the estimated total project costs varies widely between the PNRS and NCIIP programs and the CBI program, as shown in table 6. For example, under PNRS, the federal funding contributions represented less than 30 percent of the estimated total project cost for the majority of reviewed projects (i.e., for 15 of 19 PNRS projects). Under NCIIP, the federal funding contributions represented less than 30 percent of the estimated total project cost for about half of the reviewed NCIIP projects (i.e., for 13 of 27 NCIIP projects). The federal shares for the congressionally directed PNRS projects that received funding from FHWA varied widely, ranging from about 2 percent for the construction of I-73 between North and South Carolina to 104 percent for a project to relocate freight rail operations from El Paso, Texas, to New Mexico. In contrast, CBI funds represented 80 percent or more of the estimated total project cost for almost half (44 of 98) of reviewed CBI projects selected by the states. 
CBI program funds were generally used by states for smaller-scope, lower-cost projects—such as resurfacing highway pavement, rehabilitating rest areas, refurbishing tollbooths, or installing guardrails. For high-cost projects—those whose estimated total costs equaled or exceeded $500 million (11 of 19 PNRS projects, 11 of 27 NCIIP projects, and 1 of 98 CBI projects)—PNRS funds averaged about 8 percent of estimated total costs, NCIIP funds averaged about 4 percent of estimated total costs, and CBI funds averaged about 13 percent of estimated total costs. For non-high-cost projects, the range and the average federal share of contributions as a percentage of estimated total project costs are similar for each program. Table 7 presents information on the range and average percentage of estimated total project costs provided by federal funds, by program. States have used the funds from the three national and regional programs mainly for highway projects. As shown in table 8, 137 of 144 total reviewed projects, or 95 percent, involved highways. While some sections of SAFETEA-LU restricted funds from all three programs to highway projects, another section of SAFETEA-LU directed some PNRS funds to nonhighway projects. (See app. V for complete lists of PNRS, NCIIP, and CBI projects.) These nonhighway PNRS projects included an intermodal project in Chicago (the CREATE program) and a rail project in New York (the Cross Harbor Freight Movement project). States have used their PNRS, NCIIP, and CBI project funds for a variety of activities, including conducting environmental studies, planning, preliminary engineering, design, right-of-way acquisition, and construction. Moreover, these project funds can be used for diverse purposes, such as expanding ongoing projects, covering cost increases or revenue shortfalls, or initiating projects and attracting nonfederal funds.
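The federal-share figures cited throughout this section are simple ratios of federal funds distributed to estimated total project cost. The sketch below illustrates the arithmetic; the project entries and dollar amounts are hypothetical stand-ins, not taken from FHWA's tables.

```python
def federal_share(federal_funds, estimated_total_cost):
    """Federal contribution as a percentage of estimated total project cost."""
    return federal_funds / estimated_total_cost * 100

# Hypothetical projects: (program, federal funds, estimated total cost).
# The pattern mirrors the report's finding: small shares for high-cost
# PNRS/NCIIP projects, large shares for small-scope CBI projects.
projects = [
    ("PNRS",  40_000_000, 900_000_000),   # high-cost project, small share
    ("NCIIP", 25_000_000, 600_000_000),
    ("CBI",    4_000_000,   5_000_000),   # small-scope project, large share
]

for program, fed, total in projects:
    print(f"{program}: {federal_share(fed, total):.0f}% federal share")
```

With these stand-in figures, the PNRS and NCIIP shares come out around 4 percent while the CBI share is 80 percent, consistent with the contrast the report draws between the programs.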
The following examples from projects in table 3 illustrate how states have used their project funds: Oregon DOT officials told us PNRS funds enabled the state to undertake additional I-5 bridge repair projects beyond those possible with the previous level of state funding. Because I-5 is the only north-south interstate highway linking Oregon to California and Washington, upgrading the bridges is expected to improve the flow of freight through all three states. Connecticut DOT officials told us that NCIIP funds provide the necessary momentum to continue the Pearl Harbor Memorial Bridge project. Without these federal funds, the officials said, other transportation projects would have had to be postponed until Connecticut could finish this project. Officials stated that Connecticut actively seeks federal funding for large transportation projects so that it can direct state funds to other transportation projects. Finally, some states have used the program funds to initiate projects and attract other state and local funds. For example, the California DOT used a portion of its CBI funding to attract state funds for the Brawley Bypass project. According to California DOT officials, if federal funds had not been distributed to this project, it would not have qualified for state funds—under California law, a project sponsor must obtain nonstate matching funds before it can obtain state funds—and the project would have been more difficult to complete. In discussing the three programs, stakeholders identified a wide variety of both advantages and challenges, but they cited advantages less often than challenges. Specifically, in our interviews with 56 stakeholders, there were 47 instances in which stakeholders cited advantages of these programs and 66 instances in which they cited challenges.
The advantages were primarily related to the benefits of the programs’ funding, while the more numerous challenges included funding issues but also addressed problems in complying with federal requirements and in not using the criteria-based competitive process established in SAFETEA-LU to select projects. When asked about the advantages of the three programs, the stakeholders we interviewed focused primarily on the funding the programs provided. (See app. II for a complete list of these advantages and the number of interviews in which each advantage was mentioned by a stakeholder group.) The most frequently cited advantage was the support the programs provided to initiate projects and to advance those that were already under construction. For example, as stated earlier, Connecticut DOT officials told us that NCIIP funds allowed them to continue work on the Pearl Harbor Memorial Bridge project without having to stop other transportation projects that would otherwise have had to be postponed until the bridge could be completed. The second most frequently cited advantage was the opportunity the programs provided to address high-cost projects and issues the stakeholders considered to be of national importance. For example, one stakeholder said that PNRS funding enabled it to address a high-cost project that required multiple funding partnerships, and another stakeholder said the CBI funding allowed it to undertake a project that serves regional and national needs by facilitating cross-border commercial truck traffic. Two additional advantages, both related to the programs’ funding, were the third most frequently cited. 
These included the direction of PNRS funds to nonhighway projects and the ability of the program funds to attract additional nonfederal funds, as follows: Stakeholders viewed the direction of some PNRS funds to nonhighway projects as an advantage in addressing some states’ transportation priorities because such projects would not otherwise have been eligible for PNRS funds under current law. Some stakeholders cited the ability of PNRS or NCIIP funds to attract additional nonfederal funds. For example, some stakeholders mentioned that because federal funds were directed toward a specific project, nonfederal funds were distributed by the state and local government to satisfy the state and local match requirements. While stakeholders cited some advantages, there were more instances in which stakeholders cited challenges associated with these three programs. (See app. III for the list of challenges and the number of instances that each challenge was cited in a stakeholder interview.) The challenges most frequently cited were related to funding, including the uncertainty of future federal funding, the relatively limited amounts of funding provided for large projects, and the impact of inflation. Funding uncertainty presents a challenge because almost all PNRS and NCIIP projects were funded below their full cost and project sponsors do not know whether they will receive additional federal funds beyond fiscal year 2009 to complete their projects. According to one stakeholder, states need a reliable funding stream in order to plan and obtain nonfederal funding. As a result, some stakeholders told us they planned to seek additional federal funds beyond fiscal year 2009 to complete their projects. The percentage of total estimated project costs provided by the three programs also presents a challenge to projects’ completion. 
As noted, under the PNRS program, the federal funding contributions represented less than 30 percent of the estimated total project costs for the majority of reviewed projects. Under the NCIIP program, the federal funding contributions represented less than 30 percent of the estimated total project costs for about half of the reviewed projects. For high-cost projects, PNRS funds averaged about 8 percent of the estimated total costs, and NCIIP funds averaged about 4 percent of the estimated total costs. According to some stakeholders, certain projects will be placed on hold unless they receive additional federal funds. Inflation poses a challenge because it reduces the value of the federal funds from these programs over time. One stakeholder reported that the rising cost of right-of-way acquisition has increased project planning uncertainty. Some stakeholders reported that inflation has also greatly increased the cost of construction materials over time. According to the Bureau of Labor Statistics, the producer price index for highway and street construction increased by about 41 percent from August 2005 to August 2008 (the latest month for which these data are available). The second most frequently cited challenge was difficulty in complying with federal requirements. For example, the stakeholders who cited compliance with federal and environmental requirements as a challenge noted the additional time and expense involved. In the view of some state and local transportation officials, these requirements may be too onerous to justify the use of the program funds. One stakeholder stated that the environmental review process, established under NEPA, takes a long time and that the Davis-Bacon prevailing wage requirements mandate higher-than-market wages, resulting in increased project costs. Stakeholders also reported that it can be difficult to obtain state and local funds to match the federal funds, as required.
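The inflation point above can be made concrete with a little arithmetic. In this hedged sketch, the 41 percent index change is the BLS figure quoted in the text, while the award amount is a hypothetical example.

```python
def real_value(nominal, cumulative_inflation_pct):
    """Deflate a nominal dollar amount by cumulative price growth."""
    return nominal / (1 + cumulative_inflation_pct / 100)

# Hypothetical fixed nominal award set in August 2005, in dollars.
award = 10_000_000

# With highway-construction prices up about 41 percent by August 2008,
# the award buys roughly $7.1 million of construction at 2005 prices.
deflated = real_value(award, 41)
print(f"${deflated:,.0f} of August 2005 purchasing power")
```

In other words, a project sponsor holding a fixed nominal award over that period lost close to three-tenths of its real construction purchasing power, which is the erosion stakeholders described.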
One stakeholder reported that it was still trying to obtain enough state and local funding to meet the matching requirements. The third most frequently cited challenge was not using the criteria-based competitive process in SAFETEA-LU to select PNRS and NCIIP projects. According to the stakeholders, it was difficult to determine whether the congressionally directed projects addressed national and regional priorities because the projects were not evaluated against the act’s criteria. For example, DOT officials said not using the criteria-based competitive process made it difficult to assess the national transportation system across modes to determine where strategic improvements should be made. According to our interviews with program stakeholders and our prior work on federal surface transportation programs, clearly defining the federal role in surface transportation is an important step toward focusing these three programs. Once the federal role has been clarified, two approaches that have been used in the past could be used to distribute federal transportation funds to projects that are consistent with that role—criteria-based competition or formula-based distribution. Both approaches have a range of characteristics; however, our interviews with stakeholders and our prior work suggest that a criteria-based competition could enhance these programs by targeting federal investments in accordance with a more clearly defined federal role and directing funds to stated program goals. In addition, Congress could still direct funds to specific projects as it did in two of the three programs. Some stakeholders we interviewed also suggested a wide range of both broad and specific program enhancements (see app. IV). Stakeholders from all the groups we spoke with for this engagement said that a clear definition of the federal role in transportation could help guide federal investments toward achieving national transportation priorities.
Stakeholders mentioned several different ways the federal role could be better defined—from reducing the federal role in transportation infrastructure financing by giving more responsibility to individual states for the transportation system, to focusing more resources on fewer transportation programs, to concentrating federal resources on large transportation projects that affect multiple states. In our prior work, we have frequently called for more clearly defining the federal role in surface transportation. We have found that multiple federal roles can be inferred from the variety of surface transportation programs the federal government funds, but there is no single definition or set of priorities to use to focus federal surface transportation spending. In 2008, we called for a fundamental reexamination of the nation’s surface transportation system, noting that the federal goals are unclear, the federal funding outlook for surface transportation is uncertain, and the efficiency of the transportation system is declining. We have also found that the lack of a defined federal role in transportation is a reason why many current federal transportation programs are ineffective in addressing key transportation challenges, and we have identified federal transportation funding as a high-risk area. Additionally, in a May 2007 forum convened by the Comptroller General on transportation policy, participating experts stated that the nation’s transportation policy has lost focus and that a better definition of overall transportation goals is needed to better meet current and future infrastructure needs. The two primary approaches that are available and have been used historically to distribute federal funds to transportation infrastructure projects—criteria-based competition and formula-based distribution—have a range of characteristics that include both advantages and disadvantages.
Table 9 shows the characteristics of each approach as identified by stakeholders we interviewed and through our prior work. Regardless of the approach selected, Congress could still direct funds to individual transportation projects as it did in two of the three programs. According to some stakeholders, congressional directives circumvent the established state transportation planning process and may indirectly divert nonfederal resources as states and others reprioritize their funds in order to use the directed federal funds. However, other stakeholders described congressional directives as a way to distribute federal funds more quickly than through a competition and as a way to provide funds for projects that might otherwise not receive funding through the established state transportation planning process. According to stakeholders we interviewed and our prior work, a criteria-based, competitive approach, such as the competitive process included in SAFETEA-LU for PNRS and NCIIP, could provide the best opportunity to enhance these programs by better targeting federal investments in transportation infrastructure. Such targeting is important for these three programs because they were designed to direct federal funds toward projects for enhancing transportation infrastructure that has national and regional impacts. While this approach has a range of characteristics, including some disadvantages, stakeholders stated that it allows each project to be evaluated on its merits, and it incorporates stakeholders’ views and input. We have previously testified that a fiscally sustainable surface transportation program will require targeted investments in the transportation system from federal and nonfederal stakeholders.
Moreover, with regard to freight transportation, we recommended in our prior work that DOT define the federal role for the use of federal funds, establish clear roles for stakeholders, and focus federal funding to support the federal role in a cost-effective manner. In addition, we have found that having more federal programs operate competitively could help tie funds to performance. Canada’s Asia-Pacific Gateway and Corridor Initiative (APGCI) offers an example of how the three programs discussed here could be restructured as criteria-based, competitive programs. The Canadian government’s vision for its program is to invest in critical freight transportation projects that facilitate the movement of freight from Asia to Canada and through to the United States. Transport Canada, the federal Canadian government’s transportation agency, identifies key transportation projects through analytical studies or decides to fund projects submitted by provinces or towns using program criteria and freight transportation data. The criteria that were developed focused on objectives in support of the program’s vision, such as enhancing efficiency, safety, and security and minimizing environmental impacts. According to a Transport Canada official, using data on freight flows assisted Transport Canada in determining the extent to which specific projects would support international trade with Asia. The official further noted that the specific criteria enabled Transport Canada to take a rigorous approach, be selective, and thus deliver on the key objectives. Additionally, the official said, previous programs had less focused objectives, allowing a considerably wider variety of projects to be funded. Transport Canada works with public and private stakeholders to define what a project will entail, identify other nonfederal funding sources, complete a cost-benefit analysis, monitor the project, and evaluate the impact of the project after it is complete. 
Since October 2006, APGCI has leveraged a federal investment of $860 million into a total federal and nonfederal investment of $2.3 billion in 20 transportation projects. The federal share for these projects has ranged between 33 and 50 percent of total project costs. Our national transportation network faces many challenges. As demands for greater passenger and freight mobility increase and transportation infrastructure continues to show signs of age, fatigue, and congestion, governments at the federal, state, and local levels need to prioritize their limited resources to meet these demands. The three programs established in SAFETEA-LU were intended to address national and regional priorities by helping to fund a range of high-cost infrastructure projects that could not easily or specifically be addressed within existing federal surface transportation programs. As Congress prepares for the reauthorization of federal surface transportation programs in 2009, it will need to reexamine the relative contributions of these three programs and all other surface transportation programs to solving our nation’s transportation problems and achieving federal goals. With regard to PNRS and NCIIP, the relatively small federal share, especially for higher-cost projects, the number of projects, and the distribution of projects across the country have raised concerns that the federal government did not maximize the impact of its limited transportation funds. We have similar concerns about the CBI program in that it was used by states for smaller-scope, lower-cost projects. In addition, some of the program enhancements mentioned by stakeholders could also improve all three programs. 
However, without a clearly defined federal role and a competitive, criteria-based process for distributing federal funds, it is unclear whether or how these programs can meet national or regional transportation priorities or maximize the benefits of investing increasingly scarce federal funds in our transportation infrastructure. In order to enhance these three programs, we concluded that Congress should consider taking the following three actions when considering the reauthorization of federal surface transportation programs:
Define the federal role in surface transportation in accordance with the national and regional transportation priorities that these three programs are designed to meet.
Implement a criteria-based, competitive project selection process for these three programs, in concert with other selection criteria.
Work with the Secretary of Transportation to develop any specific program enhancements that could help these programs meet identified priorities and achieve the highest return on federal investments.
We provided a draft of this report to DOT for review and comment. On January 22, 2009, we received comments on the report from DOT officials, including FHWA, FRA, and MARAD officials, in an e-mail from DOT’s Office of Audit Relations. The officials generally agreed with the information in this report and stated that the department would be happy to assist Congress as it considers the proposed matters. In addition, DOT provided technical clarifications, which we incorporated in the report as appropriate. We are sending copies of this report to congressional committees with responsibilities for transportation issues and to the Secretary of Transportation. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix VI. In this report, we assessed three federal transportation programs established by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), enacted in August 2005, to target funds to infrastructure projects that have high costs, involve national or regional impacts, and cannot easily or specifically be addressed within existing federal surface transportation programs. The programs, administered by the Federal Highway Administration (FHWA), include the Projects of National and Regional Significance (PNRS), the National Corridor Infrastructure Improvement Program (NCIIP), and the Coordinated Border Infrastructure (CBI) program. As requested, we addressed the following questions: (1) What are the goals, funding status, and types of projects and activities funded for the three programs? (2) What advantages and challenges did stakeholders say were associated with these three programs? (3) What approaches are available for enhancing the three programs? In addressing these questions, our overall approach was to (1) review federal law, proposed regulations, FHWA’s program guidance and information, FHWA status reports on each program, and a Department of Transportation (DOT) report on the PNRS program; (2) review pertinent documentation, including some of the project proposals, plans, and information submitted to DOT for projects funded by these programs; and (3) interview officials from 56 “stakeholder” entities to understand the programs’ advantages, challenges, and possible enhancements. Stakeholders broadly have interest and expertise in one or more of the three programs, in a specific transportation project funded by one of these programs, or in federal surface transportation policy generally. 
The stakeholders we interviewed included officials from the following entities, which are also listed in table 10 at the end of this appendix: DOT headquarters in Washington, D.C., including the Office of the Secretary; FHWA; the Federal Railroad Administration (FRA); and the Maritime Administration (MARAD); as well as FHWA division offices in eight states, for a total of 12 DOT entities; and 16 state transportation departments, 16 local government agencies (including port authorities and metropolitan planning organizations), and 12 transportation associations or other expert organizations. We conducted some of these interviews as part of our site visits to eight states—California, New York, New Jersey, Connecticut, Illinois, Wisconsin, Washington, and Oregon—where we met with officials who manage projects funded through the three programs. In selecting our sites, we considered geographic diversity, the funding authorized by states for these programs, and the characteristics of the projects funded. The 16 state transportation departments we selected for interviews included 14 states that collectively accounted for 86 projects funded by the three programs and 2 states, Florida and Wyoming, that did not have projects funded by these three programs. Also, for comparison, we contacted Transport Canada, the transportation department of the federal Canadian government, and the Ministry of Transportation and Infrastructure of the Canadian province of British Columbia, to obtain information about similar infrastructure investment programs. In addition, to address the first question on funding status, we reviewed FHWA’s data on amounts authorized, appropriated, and obligated for PNRS, NCIIP, and CBI. To assess the reliability and quality of FHWA’s financial data, we analyzed related documentation and interviewed knowledgeable agency officials. Through these efforts, we determined that the data were sufficiently reliable for this report. 
We relied extensively on our interviews with transportation stakeholders and our prior work on surface transportation to identify not only the goals and types of projects and activities funded by these programs and the characteristics of individual restructuring approaches for them, but also a wide array of program enhancements. To address the second question on advantages and challenges, we analyzed our stakeholder interviews, and to respond to the third question, we relied on both our prior work and our stakeholder interviews to identify potential enhancements to the three programs. We conducted this performance audit from December 2007 to February 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Finally, table 10 identifies the stakeholder entities included in our study.
Advantages identified by stakeholders (number of instances in which each advantage was identified in an interview; interviewee groups: DOT (12), states (16), local governments (16), experts (12); total (56)):
Addressed high-cost projects and issues of national importance.
Directed some PNRS funds to nonhighway projects.
Made a broad array of project costs eligible for PNRS and NCIIP funds.
Distributed CBI funds by formula or ability to use funds in Canada.
Federal involvement helped enable interstate cooperation.
Made DOT think system wide instead of locally.
Funds did not reduce a state’s distribution of formula funds or funding for other high-cost projects.
Allowed for geographically targeted funding.
Congressional directives reduce time to get funds to projects.
States could use other funds for the state and local match requirement. 
Established no maintenance of effort requirement for states.
Challenges identified by stakeholders (number of instances in which each challenge was identified in an interview; interviewee groups: DOT (12), states (16), local governments (16), experts (12); total (56)):
Funding issues (such as uncertainty, small funding amounts for large projects, and inflation).
Criteria-based competitive process in SAFETEA-LU for PNRS and NCIIP was not used to select projects.
States had to reprioritize projects to use program funds.
Funds can only be used as indicated in the project description for PNRS and NCIIP congressionally directed projects.
Use of cost-benefit analysis and performance measures is limited.
Coordination among multiple stakeholders.
Project descriptions were not submitted for some PNRS and NCIIP projects, which delayed the release of funds.
Enhancements identified by stakeholders (number of interviews in which each enhancement was identified; interviewee groups: DOT (12), states (16), local governments (16), experts (12)):
Implement PNRS and NCIIP as written in SAFETEA-LU using a criteria-based competition with DOT recommending to Congress which projects should be funded.
Use cost-benefit analysis to evaluate projects before investment and performance metrics after investments.
Make the full amount of the authorization available in the first year to get projects completed faster.
Retain or increase the program’s ability to invest in different modes.
Distribute more federal funds to the programs.
Reduce the number of federal programs.
Have different areas compete for different pots of funds to introduce more equity between different-sized states or metro areas.
Establish a multimodal Highway Trust Fund account.
Require projects to be included in federally mandated state and local transportation improvement plans.
Reduce the amount of nonfederal matching funds required to obtain federal funds. 
Use full funding grant agreements to increase the certainty of federal funds for selected projects.
Allow CBI funds to be used for environmental reviews and for multimodal projects and increase the amount of CBI funds that can be transferred to the General Services Administration (GSA) in any given year.
Focus more on core federal-aid highway programs.
Reduce federal rescissions to increase the amount of federal funds that will go toward the selected projects.
Make some amount available for congressional directives.
Fund fewer projects with the same amount of funds.
Allow funds to be transferred between projects during the authorization period as long as the full authorized amount is allocated by the end of the authorization period to increase the flexibility of the funds.
Use a consistent definition of the border area to ensure states use CBI funds consistently.
Freight fees, taxes, or tolls could go to a commission that would identify freight projects.
Allow more states to conduct environmental impact statements.
High-cost projects need funding that spans acts.
High-cost projects should submit finance plans.
Increase the federal reimbursement rate to states.
Provide incentives to consider more than just “pavement.”
Establish an expiration date for federal funds to help ensure that projects with firm plans and nonfederal commitments are selected and to get projects completed faster.
Appendix V: PNRS, NCIIP, and CBI Projects and Their Funding
Not available.
In addition to the individual named above, Rita Grieco, Assistant Director; Amy Abramowitz; Derrick Collins; Elizabeth Eisenstadt; Gregory Hanna; Carol Henn; Susan Irving; Bert Japikse; Thanh Lu; Sara Ann Moessbauer; Michelle Sager; and Laura Shumway made key contributions to this report. 
To help meet increasing transportation demands, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) created three programs to invest federal funds in national and regional transportation infrastructure. As requested, this report provides (1) an overview of the goals, funding status, and types of projects and activities funded by the three programs; (2) advantages and challenges identified by program stakeholders; and (3) potential program enhancements. GAO reviewed pertinent federal laws and rules; examined plans for selected projects; conducted site visits; and interviewed officials, stakeholders, and experts. The goals of the projects funded by the three national and regional infrastructure programs--Projects of National and Regional Significance (PNRS), the National Corridor Infrastructure Improvement Program (NCIIP), and the Coordinated Border Infrastructure (CBI) program--are varied, most projects have been reviewed and funded, most projects are for highway improvements, and funds have been applied toward various related activities. PNRS and NCIIP funds were distributed by congressional directive, and CBI funds were distributed by formula. The states GAO visited or whose officials GAO interviewed had established a variety of project goals, including increasing capacity and enhancing mobility. As of December 2008, the Federal Highway Administration had reviewed most projects submitted by states and had obligated $1.2 billion, or about 33 percent of the $3.6 billion authorized for the three programs through September 30, 2008. However, some states had not initiated efforts to obtain available funding. The officials GAO interviewed cited various reasons for not pursuing the funds, such as trying to complete an environmental impact statement and trying to identify a project that met the program's funding criteria. 
The programs' contributions to projects' estimated total costs varied, from less than 30 percent of the estimated total costs for the majority of reviewed PNRS projects and about half of the reviewed NCIIP projects to 80 percent or more of the estimated total costs for almost half of the reviewed CBI projects. Furthermore, for high-cost projects--those expected to cost over $500 million--the programs' funding contributions ranged from about 4 to 13 percent of the estimated total project cost. States have used the program funds mainly for highway projects and for various related activities, such as conducting environmental studies and expanding ongoing projects. In discussing the three programs, stakeholders cited advantages less often than challenges. The most frequently cited advantage was the funding the programs provided to support and move projects forward. The most commonly cited challenge also involved funding and included funding uncertainty. This was a challenge because project sponsors did not know whether they would receive additional federal funds to complete their projects--especially high-cost projects. According to GAO's interviews and prior work, clearly defining the federal role in surface transportation is an important step in enhancing these programs. Two historical approaches could then be used to distribute federal funds--a criteria-based competition or a formula-based distribution. GAO's interviews and prior work suggest that a criteria-based competition could enhance these programs. Some interviewees also called for a wide range of other enhancements, from broad proposals to increase investment in different transportation modes to specific suggestions, such as using cost-benefit analysis in selecting projects. The Department of Transportation generally agreed with the report's information and conclusions and offered to work with Congress on GAO's three proposed matters.
Two large health programs—TRICARE and Medicare—influenced the design and operation of the Medicare subvention demonstration. The military health system has three missions: (1) maintaining the health of active-duty service personnel, (2) medically supporting military operations, and (3) providing care to the dependents of active-duty personnel, retirees and their families, and survivors. In fiscal year 1999, DOD’s annual appropriations included about $16 billion for health care, of which over $1 billion funded the care of seniors. In the mid-1990s, DOD implemented the TRICARE framework for military health care in response to rapidly rising costs and beneficiary concerns about access to military care. Its goals were to improve beneficiary access and quality while containing costs. TRICARE offers health care coverage to approximately 6.6 million active-duty military personnel, retirees, dependents, and survivors under age 65. These beneficiaries have three main options: TRICARE Prime, a managed care option; TRICARE Extra, a preferred provider option; and TRICARE Standard, a fee-for-service option. A new option, TRICARE Plus, allows beneficiaries to enroll with a primary care provider at participating MTFs. TRICARE covers inpatient services, outpatient services such as physician visits and lab tests, and skilled nursing facility and other post-acute care. It also covers prescription drugs, which are available at MTFs, through DOD’s National Mail Order Pharmacy, and at civilian pharmacies. TRICARE delivers care through over 600 MTFs—such as medical centers, community hospitals, or major clinics that serve military installations—and a network of civilian providers managed by DOD’s managed care support contractors. Managed care support contractors also assist beneficiaries and support regional DOD management by providing services such as enrollment and utilization management. 
There are about 1.5 million retired military personnel, dependents, and survivors age 65 or older residing in the United States. About 600,000 of these seniors live within 40 miles of an MTF. In the past, retirees had access to all MTF and network services through TRICARE until they turned age 65 and became eligible for Medicare, at which point they could only use military health care on a space-available basis—that is, when MTFs had unused capacity after caring for higher priority beneficiaries. In the 1990s, downsizing and changes in access policies led to reduced space-available care throughout the military health system. Moves to contain costs by relying more on military care and less on civilian providers under contract to DOD also contributed to the decrease in space-available care. As is the case today, MTF capacity varied from a full range of services at major medical centers to limited outpatient care at small clinics. Some retirees aged 65 or older relied heavily on military facilities for their health care, but most did not, and about 60 percent did not use military health care facilities at all. Retirees could obtain prescriptions from MTFs, but not from TRICARE’s National Mail Order Pharmacy or network of civilian pharmacies. In addition to using these DOD resources, retirees could receive care paid for by Medicare and other public or private insurance for which they were eligible. Significant changes in retiree benefits and military health care occurred in 2001 as a result of the NDAA. This legislation gave older retirees two major benefits:
Pharmacy benefit. Effective April 1, 2001, retirees age 65 and older were given access to prescription drugs through TRICARE’s National Mail Order Pharmacy and at civilian pharmacies.
TRICARE eligibility. Effective October 1, 2001, retirees age 65 and older enrolled in Medicare part B became eligible for TRICARE coverage—commonly termed TRICARE For Life. 
As a result, TRICARE is now a secondary payer for these retirees’ Medicare-covered services—paying most of their required cost-sharing. This includes copayments required of retirees enrolled in civilian Medicare managed care plans. Retirees are eligible to enroll in TRICARE Plus but are not allowed to enroll in TRICARE Prime. Medicare is a federally financed health insurance program for persons age 65 and older, some people with disabilities, and people with end-stage kidney disease. Eligible beneficiaries are automatically covered by part A, which covers inpatient hospital, skilled nursing facility and hospice care, as well as some home health care. They also can pay a monthly premium to join part B, which covers physician and outpatient services as well as those home health services not covered under part A. Traditional Medicare allows beneficiaries to choose any provider that accepts Medicare payment and requires beneficiaries to pay for part of their care. Most beneficiaries have supplemental coverage that reimburses them for many of the costs that Medicare requires them to pay. Major sources of this coverage include employer-sponsored health insurance; “Medigap” policies, sold by private insurers to individuals; and Medicaid, a joint federal-state program that finances health care for low-income people. The alternative to traditional Medicare, Medicare+Choice, offers beneficiaries the option of enrolling in managed care or other private health plans. All Medicare+Choice plans cover basic Medicare benefits, and many also cover additional benefits such as prescription drugs. Typically, Medicare+Choice managed care plans have limited cost-sharing but restrict members’ choice of providers and may require an additional monthly premium. Under the Medicare subvention demonstration, DOD established and operated six Medicare+Choice managed care plans, called TRICARE Senior Prime, at sites selected jointly by DOD and HCFA. 
Enrollment in Senior Prime was open to military retirees enrolled in Medicare part A and part B who resided within roughly 40 miles of a participating MTF. About 125,000 retirees were eligible for the demonstration. DOD capped enrollment at about 28,000 for the demonstration as a whole; each MTF had its own enrollment cap. In addition, retirees enrolled in TRICARE Prime who had a primary care provider at a demonstration MTF could “age in” to Senior Prime upon reaching age 65, even if MTFs’ enrollment caps had been reached. Senior Prime offered enrollees the full range of Medicare-covered services as well as additional TRICARE services, notably prescription drugs. It also gave them higher priority for care at MTFs than retirees who did not join the program. Enrollees paid the Medicare part B premium, but no additional premium to DOD. Care at MTFs was free of charge, but enrollees had to pay any applicable cost-sharing amounts when MTFs referred them to the civilian network for care (for example, $12 for an office visit). All primary care was provided at MTFs, but DOD purchased some hospital and specialty care from the civilian network. Purchased care was used for services not available at MTFs as well as when MTFs did not have sufficient capacity in particular specialties. Although the demonstration was authorized to begin in January 1998, implementation was delayed, and the first site began delivering care in September 1998. All sites were operational by January 1999. The six demonstration sites are in different regions of the country and include 10 MTFs that vary in size and types of services offered (see table 1), as well as by managed care penetration in the local Medicare market. The five medical centers offer a wide range of inpatient services and specialty care as well as primary care. They accounted for over 75 percent of all enrollees in the demonstration. The two San Antonio medical centers had 38 percent of all enrollees. 
The four community hospitals have more limited capabilities, and the civilian network provided much of the specialty care. At Dover, the MTF is a clinic that offers only outpatient services, thus requiring all inpatient and specialty care to be obtained at another MTF or purchased from the civilian network. The BBA established rules for Medicare to follow in paying DOD for Senior Prime care. It authorized Medicare to pay DOD in a way that was similar to the way it pays civilian Medicare+Choice plans, with several major exceptions: Senior Prime’s capitation rate—a fixed monthly payment for each enrollee—differed from the Medicare+Choice rate in several ways. The Senior Prime rate was set at 95 percent of the rate that Medicare would pay civilian Medicare+Choice plans in the demonstration areas, consistent with a belief that DOD could provide care at lower cost than the private sector. The rate was further adjusted by excluding the part of the Medicare+Choice rate that reflects graduate medical education (GME) and disproportionate share hospital (DSH) payments, as well as a percentage of payments made for hospitals’ capital costs. The GME exclusion took into account the fact that GME in the military health system is funded by DOD appropriations, and the DSH exclusion recognized that DOD medical facilities do not treat the low-income patients for whom DSH payments compensate hospitals. The law directed HCFA and DOD to determine the amount of the capital adjustment, and the two agencies agreed to exclude two-thirds of the capital costs reflected in the Medicare+Choice rate. The Senior Prime capitation rate was to be adjusted if there was “compelling” evidence that enrollees were healthier or sicker than their Medicare fee-for-service counterparts. The adjustment was intended to reflect whether Senior Prime enrollees would be expected to be significantly more or less costly than the average Medicare beneficiary. 
HCFA and DOD agreed that if the difference between the adjusted and unadjusted payments equaled or exceeded 2.5 percent, then that would be compelling evidence that enrollees’ health status differed from that of their Medicare counterparts. In that case, the Medicare payment would reflect the adjustment. The BBA required that, before DOD could receive Medicare payment, participating MTFs must spend as much on care for retirees age 65 and older as they did prior to the demonstration. This threshold amount—termed DOD’s baseline level of effort or LOE—was intended to prevent the federal government from paying for the same care twice, through both DOD appropriations and Medicare. The total amount that Medicare could pay DOD for the demonstration was capped at $50 million in 1998, $60 million in 1999, and $65 million in 2000. The demonstration was initially scheduled to end in December 2000. The NDAA extended the demonstration for 1 year—through 2001—with the possibility of further extension and expansion. However, DOD allowed Senior Prime to end on December 31, 2001, because the new TRICARE For Life program provides health care coverage to older military retirees. DOD has stated that Senior Prime enrollees will have priority for enrollment in TRICARE Plus, which began at the former demonstration MTFs in January 2002. As authorized by the BBA, the demonstration was to include a second component—Medicare Partners. Under Medicare Partners, a demonstration MTF would be allowed to contract with civilian Medicare+Choice plans to provide selected MTF services to military retirees enrolled in the civilian plans. According to DOD, lack of interest among local Medicare+Choice plans was key to its decision not to implement the Medicare Partners program. Plans may have had little incentive to participate in Medicare Partners and pay for MTF care because retirees already were eligible for such care at DOD’s expense—when space was available. 
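The payment rules described above reduce to simple arithmetic. The sketch below illustrates them; the dollar figures and function names are invented for illustration only, since the report does not publish the actual component amounts of the Medicare+Choice rate.

```python
# Hypothetical illustration of the Senior Prime payment rules described
# in this report. All dollar amounts are invented for illustration.

def senior_prime_rate(mc_rate, gme, dsh, capital):
    """Monthly Senior Prime capitation rate per enrollee.

    mc_rate : local Medicare+Choice capitation rate
    gme     : portion of mc_rate reflecting graduate medical education
    dsh     : portion reflecting disproportionate share hospital payments
    capital : portion reflecting hospitals' capital costs
    """
    # Exclude GME and DSH entirely, and two-thirds of capital costs.
    adjusted = mc_rate - gme - dsh - (2.0 / 3.0) * capital
    # Pay 95 percent of the adjusted Medicare+Choice rate.
    return 0.95 * adjusted

def apply_health_status_adjustment(unadjusted, risk_adjusted):
    """Use the risk-adjusted payment only if it differs from the
    unadjusted payment by 2.5 percent or more -- the agreed threshold
    for "compelling" evidence that enrollees' health status differed."""
    if abs(risk_adjusted - unadjusted) / unadjusted >= 0.025:
        return risk_adjusted
    return unadjusted

# Example with invented numbers:
base = senior_prime_rate(mc_rate=500.00, gme=20.00, dsh=15.00, capital=30.00)
print(round(base, 2))  # 0.95 * (500 - 20 - 15 - 20) = 422.75
```

In this sketch, a 1.25 percent difference between the adjusted and unadjusted payments would leave the rate unchanged, while a 3 percent difference would trigger the adjustment.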
The demonstration showed that DOD health care plans based at MTFs could attract many retirees, particularly those who were recent users of military care. Retirees said they were attracted to Senior Prime by the quality and convenience of MTF care, as well as by the program’s low cost-sharing. After enrolling, most reported that they were able to get the care that they needed at little expense. Most retirees who did not enroll in Senior Prime reported that they were satisfied with their existing health care coverage. Senior Prime’s enrollment showed that there was substantial demand among retirees for DOD health care plans based at MTFs, and also that demand varied by site. By December 2000, Senior Prime had attracted roughly 33,000 enrollees—over one-fourth of all retirees eligible to join. (See table 2.) Over 6,500 of these enrollees had aged in from TRICARE Prime after turning age 65. The percentage of eligible retirees who enrolled varied significantly, from 14 percent at San Diego to over 40 percent at Keesler and Lackland Air Force Base. However, these figures understate retirees’ interest in Senior Prime: during the demonstration, 6 of the 10 MTFs reached their maximum enrollment and had to establish waiting lists. Senior Prime’s strong link to military care was particularly attractive to retirees. When asked why they wanted to join Senior Prime, enrollees most often cited reasons related to military care, such as the quality of care at MTFs, a preference for military care, and the convenience of local MTFs. (See table 3.) Most enrollees had used MTFs to some extent the year before enrolling in the program, and about 60 percent had relied on these facilities for most or all of their care. In part, this reflected the design of the program. To be eligible for Senior Prime, retirees must have used military care since becoming Medicare-eligible. However, DOD relied on retirees’ answers to a question about prior MTF use and did not verify their answers. 
Over half of enrollees believed that by joining Senior Prime they would be able to get appointments at MTFs more easily. This is not surprising, given that Senior Prime offered retirees the same priority access to MTFs as younger retirees enrolled in TRICARE Prime. Senior Prime attracted some retirees—about 3,500—who had not recently used MTFs; most of these retirees nonetheless cited a preference for military care. Retirees who were attracted to Senior Prime varied in their health care coverage before the demonstration. About 30 percent had had traditional Medicare exclusively. The remainder had had supplemental insurance coverage in addition to traditional Medicare or were enrolled in a civilian Medicare managed care plan. Although less important than the link to military care, other features of Senior Prime also appealed to retirees. The program’s low cost-sharing was attractive to retirees; about half of enrollees saw joining Senior Prime as a way to save money on health care expenses. This was true even though many enrollees had only minimal out-of-pocket costs before joining the program, due in part to their use of free MTF care. In addition, about half of enrollees saw joining Senior Prime as a way to obtain improved health care benefits or coverage. After enrolling in Senior Prime, retirees reported that they were able to get the care that they needed at little expense. When asked what they liked about Senior Prime, the majority of enrollees cited access-related features such as the ability to get all the care that they needed and the ability to get appointments when needed. (See table 4.) This is not surprising, given that enrollees had more hospital stays and outpatient visits than before the demonstration and used significantly more services than their Medicare fee-for-service counterparts. Enrollees also reported that they received good care at their MTFs and that they liked their MTF doctors. 
Despite their heavy use of services, most enrollees also were pleased with the low cost of their care. They reported few financial barriers to obtaining care and that their spending on health care services was minimal. About two-thirds of enrollees reported no out-of-pocket costs; their costs were low even at smaller sites where network care, which required copayments, was more common. Once enrolled, relatively few retirees decided to leave Senior Prime—another indication of enrollees' satisfaction with the program. Early in the demonstration, disenrollment rates were relatively low compared with other Medicare managed care plans. Disenrollment remained low throughout the demonstration, averaging about 2 percent during the last year of the initial demonstration period. Although retirees generally were positive about Senior Prime, some reported difficulties. Over 70 percent of enrollees reported that there was nothing about the program that they disliked. Very few enrollees reported that they did not like their doctors, that they did not get good care at MTFs, or that Senior Prime refused them treatment. However, 13 percent of enrollees reported that they did not like having to wait too long to get an appointment, 13 percent cited not being able to see the same primary care doctor every time, and 8 percent cited difficulty making appointments. In addition, among those few who disenrolled from Senior Prime, the most commonly cited reasons for doing so were these same three access-related difficulties as well as the inability to use regular Medicare benefits while enrolled in the program—that is, the inability to have Medicare pay for services not authorized by Senior Prime. Most retirees who did not enroll in Senior Prime reported that they were already satisfied with their existing health care coverage, and few cited negative attitudes about military care.
When asked why they did not try to enroll in Senior Prime, over 60 percent of nonenrollees cited satisfaction with their current coverage. (See table 5.) About one-third said they did not have enough information about Senior Prime or did not understand it. Although the sites used many means of providing information about Senior Prime to local retirees, many retirees surveyed early in the demonstration had not previously heard of the program. The lack of information about Senior Prime remained an issue later in the demonstration as well; at the end of the demonstration, many retirees still reported this as one reason for not wanting to enroll. Other major reasons for not enrolling included not wanting to join a managed care organization and the belief that Senior Prime might not be permanent. Few nonenrollees—about 9 percent—reported that they decided not to join Senior Prime because they disliked military care. Nonenrollees' access to care was generally unaffected by the demonstration, but among the minority who had previously relied on military care, most experienced reduced access to MTFs. When asked at the start of the demonstration why they had not joined Senior Prime, many of the nonenrollees—almost 40 percent—who were later “crowded out” of MTFs had said that they were able to get military health care when they needed it. This suggests that they did not foresee that space-available care would decline as a result of the demonstration. By the end of the demonstration, about 20 percent of those who were crowded out had tried to join Senior Prime. However, most sites had reached their enrollment caps, and retirees who applied after the caps were reached were placed on a waiting list. While the demonstration had positive results for enrollees, it also highlighted several challenges that confront the military health system in managing patient care and costs. The high costs generated by enrollees' care revealed the need to deliver care more efficiently.
In addition, difficulties encountered in obtaining and managing data during the demonstration underscored problems that DOD officials generally face in monitoring patient care and costs. Finally, the demonstration illustrated the tensions between the military health system’s commitment to care for active-duty personnel and support military operations and its commitment to provide care to civilian family members and retirees. Senior Prime’s experience revealed the need to deliver care more efficiently, and differences in sites’ utilization suggested that this might be possible. Although DOD satisfied its new senior enrollees and gave them good access to care, it incurred high costs in doing so. These high costs were largely due to enrollees’ heavy use of medical services, which substantially exceeded that of comparable Medicare beneficiaries. If DOD had delivered fewer services, it is possible that enrollees would have been less satisfied. However, we found that the number of outpatient visits by enrollees affected their satisfaction with care only slightly. Furthermore, substantial site differences in utilization—with little difference in enrollee satisfaction—provide evidence that some sites were able to satisfy enrollees with fewer services and, consequently, lower costs. This suggests that other sites could have reduced utilization somewhat without sacrificing enrollee satisfaction. Although sites’ costs varied, managers at all sites faced similar disincentives to containing utilization and costs. MTFs generally tried to restrain inappropriate utilization, but basic features of the military health system’s financial and management practices weakened their incentives to moderate utilization and costs. 
First, while MTFs cannot spend more than their budget, several factors act as safety valves for budgetary pressure:

The primary factor is space-available care: when resources required for enrollees increase, space-available care declines and those who are not enrolled are less able to get MTF care. This was observed during the demonstration: as Senior Prime enrollment climbed, the amount of space-available care provided to nonenrolled seniors decreased. (See figure 1.)

MTFs can request supplemental funding from their respective services. During the demonstration, every MTF requested supplemental funding either for Senior Prime specifically or for the MTF generally, and all received some added funds. Although MTFs cannot always count on receiving such funding, the potential to obtain extra funds reduces incentives for moderating utilization.

MTFs can try to defer some utilization until the following fiscal year—for example, by postponing elective surgery or issuing prescriptions on a 60-day rather than a 90-day basis. At the end of fiscal year 2000, officials from several sites told us that they were considering this approach to staying within their budgets, and at the time of our visits at least one had implemented it.

Second, MTFs have no direct financial incentive to manage care purchased from the civilian network. At the local level, MTF providers refer patients for services that, depending on MTF resources and capacity, may be obtained from network providers. However, MTFs are not directly responsible for the costs of network claims; DOD funds purchased care centrally, thereby reducing sites' incentive to trim unnecessary network utilization. An additional factor unique to the demonstration was the lack of incentives for the managed care support contractors to limit utilization in Senior Prime. Under the demonstration, these contractors authorized network services but bore no risk for the costs of enrollees' care.
Consequently, they had no financial incentive to limit use of specialists and other civilian network providers. Third, Senior Prime’s low cost-sharing, although beneficial for enrollees, limited DOD’s ability to control utilization and costs. Research has shown that patients tend to use more care when their out-of-pocket expenses are low. Therefore, copayments tend to encourage patients to curb their use of health care services. In Senior Prime, however, there were few financial incentives for enrollees to reduce their use of health care services. Enrollees had no annual deductible; furthermore, care within MTFs, where most services were delivered, was free and copayments for visits to network providers were small. Finally, practice patterns among military physicians may also explain part of the high costs and utilization seen in Senior Prime. High utilization is not unique to the demonstration: studies have shown that the military health system has higher utilization than the civilian sector. As with civilian physicians, military physicians’ training, experience, and the practice style of their colleagues affect their use of procedures and tests, their readiness to hospitalize patients, as well as their recommendations to patients about follow-up visits and referrals to specialists. Although DOD was able to establish and operate the demonstration, its efforts were hampered by limitations in its data and data systems. Throughout the demonstration, officials had difficulty producing reliable, timely, and comprehensive information on retirees’ care. This hampered their ability both to implement the demonstration’s payment mechanism and to monitor enrollees’ health care costs and utilization. DOD’s experience with the demonstration’s payment mechanism illustrated DOD’s problems with data and data systems. 
At the beginning of the demonstration, DOD needed to determine the cost of the care that participating MTFs had provided to military retirees prior to Senior Prime—an amount referred to as DOD’s baseline level of effort or LOE. This step was critical in determining how much payment, if any, DOD would earn from Medicare. However, DOD’s data systems did not permit it to isolate the costs of retirees’ previous MTF care, and DOD had to undertake a substantial effort to estimate its baseline LOE—an effort made more difficult by deficiencies in the source data on MTF costs. The payment mechanism also required DOD to collect information on enrollees’ inpatient and outpatient diagnoses to determine whether enrollees were significantly more or less healthy than other Medicare beneficiaries—in which case, Medicare’s payment to DOD would be adjusted. DOD and HCFA agreed to use a method of assessing enrollees’ health status that involved both inpatient and outpatient data. DOD took over 1 year to assemble the final data and later stated that the outpatient data may have omitted certain items and may have contained coding errors. Overall, although DOD completed the tasks necessary to implement the payment mechanism, its efforts consumed considerable time and resources due to data problems. DOD’s data systems were not well-suited to monitoring health care costs and utilization—an impediment to effective management. At the local level, data limitations reduced site officials’ ability to monitor Senior Prime costs. At first, the sites operated with little information on the costs of enrollees’ care. For care provided at MTFs, sites’ data systems could not isolate costs specific to Senior Prime enrollees. For care provided outside MTFs, claims submitted by network providers recorded the costs of civilian care, but there were delays between the time services were provided and when complete claims data were available. 
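The payment mechanism described above can be reduced to a simple arithmetic sketch. The Python below is purely illustrative: the function, the formula, and every figure are our own assumptions chosen to show the roles of the baseline LOE and the risk adjustment, not the demonstration's actual rules, which were more complex.

```python
def medicare_payment(enrollee_months, base_capitation, risk_factor, baseline_loe):
    """Hypothetical subvention-style payment calculation.

    Medicare pays DOD a risk-adjusted capitation amount, but only to the
    extent that the value of enrollees' care exceeds DOD's baseline level
    of effort (LOE) -- what DOD was already spending on these retirees.
    All parameters are invented for illustration.
    """
    pmpm = base_capitation * risk_factor   # risk-adjusted per-member-per-month rate
    gross = pmpm * enrollee_months         # total risk-adjusted amount
    return max(0.0, gross - baseline_loe)  # Medicare pays only above the LOE

# Invented example: 1,000 enrollees for 12 months, slightly healthier than
# the average Medicare beneficiary (risk factor below 1.0).
payment = medicare_payment(
    enrollee_months=12_000,
    base_capitation=400.0,     # dollars per member per month (assumed)
    risk_factor=0.95,          # health-status adjustment (assumed)
    baseline_loe=4_000_000.0,  # estimated prior DOD spending on this group (assumed)
)
print(f"Medicare payment to DOD: ${payment:,.0f}")
```

The sketch makes plain why estimating the baseline LOE and assembling diagnosis data for the risk factor were critical: both enter the calculation directly, and if DOD's care did not exceed its prior level of effort, Medicare owed nothing.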
About 1 year into the demonstration, cost information available to site officials improved. In the fall of 1999, DOD's TRICARE Management Activity (TMA) office began distributing periodic Senior Prime databooks, which provided information on enrollment, utilization, cost, and satisfaction for each site. Sites found that these databooks were a useful resource; for the first time, they were able to compare their sites' costs to the Senior Prime capitation rate. However, neither the databooks nor the systems on which they were based permitted the sites to identify the cases or practices that led to high costs. Moreover, the information was not timely—the lag was usually 6 months or more—and changed over time as problems in underlying data and calculations were identified and corrected. For example, the databook reports on the costs of enrollees' care changed repeatedly as mistakes were uncovered and corrected, reducing confidence in comparisons to the Senior Prime capitation rate. Data limitations also hindered officials' ability to monitor enrollees' use of health care services. Sites had information on utilization, but had difficulty integrating data from MTF and network providers and encountered data of questionable accuracy. These problems undermined the ability of managers and physicians to obtain a comprehensive picture of the care provided to individuals or to groups of patients. In addition, site officials told us they had some difficulties using benchmark utilization rates from civilian managed care to help understand the patterns in Senior Prime utilization. They were sometimes uncertain about the quality and credibility of the underlying data used to generate Senior Prime measures, and often found that comparisons between Senior Prime and civilian rates were distorted by differences in clinical and coding practices. Comparisons between the sites were also problematic.
Some officials cited differences in coding practices as a partial explanation of site differences in utilization rates. While DOD is making efforts to improve its data and data systems, its fundamental data problems are pervasive and persistent. Key data-related difficulties include inaccurate and incomplete data, systems that produce usable data only after substantial delays, and the inability to segregate costs for particular patient groups, such as seniors. In addition, DOD's separate, unconnected systems for recording inpatient and outpatient MTF care, and for MTF and network care, complicate data collection and analysis. Most important, the lack of strong incentives for MTFs to achieve efficiency in delivering care reduces officials' demand for improved data and related tools. Officials told us about efforts to improve data and data systems, some resulting directly from the demonstration. The demonstration's requirements for reporting quality and cost information, including the need for MTF commanders to certify data submitted to HCFA, led to increased scrutiny of data systems by national and local managers. Officials at several sites noted that the demonstration had stimulated MTF efforts to generate better data, for example, by more accurately recording and coding patient visits and diagnoses. In addition, DOD's new Data Quality Management Control program, initiated in November 2000, introduced data quality as a formal management objective and made MTF commanders more accountable for their data. It is too early to tell whether DOD's recent efforts to make MTFs more accountable for data quality will have an impact that is systemwide and sustained. Although the new data quality program may give MTF managers added reason to improve their data, it does not alter their incentives for using those data.
The demonstration illustrated a central challenge confronting the military health system: dealing with the tensions that arise from its commitment to support military operations and care for active-duty personnel while providing care for their family members and retirees. As part of its mission, the military health system is responsible for medical support of military deployments, from small humanitarian engagements to major military actions. The military health system must ensure that clinicians and other medical personnel have the skills they need when deployed and must maintain the health of active-duty personnel. Like other large employers, DOD also provides health care coverage for the families of active-duty personnel and for retirees. Unlike most other employers, DOD provides much of its beneficiaries’ care in its own facilities. Overall, MTFs’ experiences during the demonstration highlighted ways in which the provision of care to civilians, in particular older retirees, can both support and hinder the military mission. It also illustrated the ways in which that mission complicates the delivery of civilian care. Senior Prime demonstrated that providing care to civilian beneficiaries can contribute to the mission of providing medical support for military operations. According to DOD, during wartime and peacetime military operations (such as humanitarian or peacekeeping missions), most cases encountered are commonplace medical or surgical conditions, not complex illnesses or injuries requiring specialized skills. Consequently, clinicians with broad general training and experience are able to manage most conditions they are likely to see. However, clinicians supporting military operations are likely to encounter some complex medical and surgical cases. 
They therefore need experience with patients requiring complex care—rather than young, generally healthy adults and children requiring routine care—to ensure that they are prepared to provide complex care in the field. Senior Prime illustrated how seniors can contribute to the skills needed for deployment. MTF officials reported that enrollees gave medical staff experience with conditions that are relevant to both wartime and peacetime operations but are not typically seen among younger patient groups. Although the underlying causes of illness and injury differed from what would occur on the battlefield, seniors' needs for complex care, such as vascular and orthopedic surgery and intensive care, helped prepare staff to treat complex cases while deployed. Treating seniors also prepared staff for humanitarian missions, on which they may encounter individuals who are older or who have chronic conditions. However, as Senior Prime also demonstrated, providing civilian care can interfere with an MTF's efforts to meet its military medical mission. Not all services provided to civilians contribute directly to providers' preparedness for deployment. For example, according to officials at one MTF, under Senior Prime some specialists were providing more routine care to seniors and seeing fewer of the complex cases important for training, compared to before the demonstration. In addition, MTFs' responsibility for primary care influenced the selection of medical staff for deployments. Several MTFs chose to deploy specialists or others who were not primary care managers, rather than disrupt primary care teams and patients. In this way, civilian care posed a constraint for officials in meeting their primary mission. Finally, increased demands for care among civilian beneficiary groups have the potential to affect the care of active-duty personnel—the primary population that the military health system is intended to serve.
Although active-duty personnel receive priority for MTF care, the assignment of MTF appointment slots to civilians can affect how quickly active-duty personnel get care. During the demonstration, officials found little evidence that, at its small scale, Senior Prime had led to a decline in active-duty personnel’s access to care or satisfaction with care. However, several officials either expressed concern that continued growth in the program could cause difficulties in the future or noted the strain resulting from MTFs’ commitment to both active-duty and other patient groups. Conversely, the demonstration illustrated ways in which the military mission complicates civilian care and can increase costs. Medical personnel absences due to deployments, readiness training, and rotations complicated MTFs’ efforts to ensure enrollees’ access to and continuity of care, although the extent varied by site. During the demonstration, MTFs experienced temporary shortages in personnel important for seniors’ care, including nursing staff and key specialists. Officials took steps to mitigate the effect of these absences on patient care, and enrollees had good access to care overall. However, they were not always able to see the same provider and at times were referred to civilian providers. Personnel absences had implications not only for patient care but also for DOD’s costs, particularly when care had to be purchased from network providers. These costs could be significant if personnel absences occurred in large numbers or were extended over a long period. While the demonstration showed that DOD’s new MTF-based health plans could attract and satisfy military retirees, it also highlighted challenges that DOD encountered in doing so. The issues DOD encountered in launching and implementing Senior Prime leave open the question of whether the program could have been successfully implemented on a larger scale. 
Although DOD has chosen not to continue Senior Prime, the demonstration offers lessons about managing the care of seniors and other beneficiary groups. The challenges revealed by the demonstration relate to DOD’s management of health care delivery and costs within the broader military health system: The high utilization and costs observed during the demonstration underscore the importance of designing incentives and management practices within DOD that promote efficient care—that is, the delivery of appropriate care and improved health outcomes while discouraging inappropriate utilization and costs. As the demonstration illustrated, limitations in DOD data and information systems, as well as weak incentives for greater efficiency, are obstacles to managing military beneficiaries’ health care use and costs. Data analysis could help managers target clinical and financial areas needing improvement. The demonstration highlighted a strategic issue facing the military health system: how to reconcile its commitment as an employer to provide care to the families of active-duty personnel as well as retirees with its responsibility to provide medical support for military operations. We provided DOD and CMS an opportunity to comment on a draft of this report, and both agencies provided written comments. DOD said that the report identified some of the challenges it faced in implementing and managing the demonstration and that the report appropriately noted limitations in the generalizability of its findings. DOD commented that one statement—that difficulties in producing information on retirees’ care hampered its ability to implement the demonstration’s payment mechanism—was only partially true and somewhat misleading. DOD asserted that the Senior Prime databooks were reasonably timely and reliable and that, once DOD and CMS had agreed on financial policies, the payment mechanism was implemented without significant difficulties. 
In response to our statement that DOD took over 1 year to assemble the data needed for risk adjustment, DOD emphasized that delays in the risk adjustment process were largely beyond its control. Regarding our statement that DOD’s data systems were not well-suited to monitoring health care costs and utilization, DOD stated that its data systems, although not capable of providing all data that might be desired, adequately showed that utilization and costs were high. DOD further stated that high costs and utilization are more attributable to the benefit structure, financial incentives for MTFs, high administrative costs, and MTF practice and capacity issues than to data system weaknesses. Finally, in response to our statement that limitations in DOD data systems are obstacles to managing military beneficiaries’ health care use and costs, DOD stated that, while it is true that MTFs have weak incentives for greater efficiency, the focus on information systems as a primary cause of high costs and utilization is misleading. DOD said that data analysis targeted clinical and financial areas needing improvement early in the demonstration, but noted that systematically responding to clinical and financial issues across multiple services and MTFs is still a problem. As noted earlier, the Senior Prime databooks were a useful source for site officials in monitoring sites’ performance. However, sites did not start receiving the databooks until about a year into the demonstration, and lags affecting the databooks’ information limited their usefulness. Moreover, frequent changes in reported costs reduced site officials’ confidence in the data. Regarding the demonstration’s payment mechanism, it required DOD to collect information on enrollees’ inpatient and outpatient diagnoses before the risk adjustment process could begin. Assembling the data was DOD’s responsibility and under its control. 
We cited the time and effort required for DOD to assemble the data as an illustration of its broader difficulties with data and data systems. Concerning DOD’s data and data systems, although they showed that the demonstration was generating high costs and utilization, neither the Senior Prime databooks nor the systems on which they were based permitted the sites to identify cases or practices that led to high costs. Finally, we do not cite data system limitations as a primary cause of Senior Prime’s high costs and utilization. However, as the demonstration showed, DOD’s data limitations are obstacles to managing patient care and costs. CMS said that the report was accurate and met its objectives. CMS provided technical comments, which we incorporated where appropriate. (DOD’s and CMS’s comments appear in appendixes III and IV, respectively.) We are sending copies of this report to the secretaries of defense and health and human services and the administrator of the Centers for Medicare and Medicaid Services. We will make copies available to others upon request. If you or your staffs have questions about this report, please contact me at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix V. In directing us to evaluate the demonstration, the BBA specified that we study three broad areas: the demonstration’s effects on beneficiaries, its costs to DOD and Medicare, and difficulties that DOD encountered in managing the demonstration. To address these topics, we surveyed retirees living in the demonstration areas, visited the demonstration sites, interviewed DOD and HCFA officials, and analyzed administrative data and reports from both agencies. To determine the demonstration’s appeal to and effect on military retirees, including why they chose to enroll and their satisfaction with care, we conducted a two-phase mail survey of about 20,000 retirees living in the demonstration areas. 
The survey was sent to Senior Prime enrollees and to retirees who were eligible for Senior Prime but did not join. We surveyed retirees at the beginning of the demonstration to collect information on their health care experiences before Senior Prime. Toward the end of the initial demonstration period, we resurveyed these retirees to measure changes from their earlier reports. In this second phase, we also surveyed those who had joined Senior Prime since the first survey and those who had become eligible for Senior Prime but had not joined. To collect information on the demonstration’s implementation and operation, we interviewed officials and reviewed documents that we obtained during two rounds of visits to the demonstration sites. We first visited the sites within 3 months after each had begun operations to assess their status during the start-up phase and to examine the issues that had emerged in planning and implementing Senior Prime. We conducted follow-up visits about 15 months later. This allowed us to observe the sites at a more mature stage. We examined the demonstration’s status, effects on beneficiaries and providers, and other key management issues. We also conducted additional interviews with DOD and HCFA officials. To evaluate retirees’ health care use and costs under the demonstration, we conducted several analyses using administrative data from DOD and HCFA. In analyzing utilization, we compared enrollees’ use of services with that of Medicare fee-for-service beneficiaries in the same areas, adjusting for the relative health of the two populations. To determine the demonstration’s impact on the cost to DOD of caring for military retirees, we compared average monthly costs for Senior Prime enrollees to the Senior Prime capitation rates. Senior Prime attracted a substantial number of retirees who had been enrolled in other Medicare managed care plans just prior to enrolling in Senior Prime. 
Overall, about 10,000 seniors left other plans to join Senior Prime—about 40 percent of all seniors who enrolled in the program in 1998 and 1999. This percentage varied by site, in part due to local variation in Medicare managed care plan availability. Some sites, such as San Diego and San Antonio, were located in areas with significant Medicare managed care presence. Other sites, such as Texoma and Keesler, were located in areas where retirees generally had few or no other Medicare managed care options. Table 6 provides site-level information on Senior Prime enrollees drawn from other plans. In most cases, plans lost a small number of their members, but one plan lost over 3,400 members—about 4 percent of its members who lived in that subvention area. In addition to those named above, Robin Burke, Martha Wood, Jessica Farb, Maria Kronenburg, Gail MacColl, Dae Park, Lisa Rogers, and Eric Wedum contributed to this report.
The Department of Defense's (DOD) Medicare subvention demonstration tested alternate approaches to health care coverage for military retirees. Retirees could enroll in new DOD-run Medicare managed care plans, known as TRICARE Senior Prime, at six sites. The demonstration plan offered enrollees the full range of Medicare-covered services as well as additional TRICARE services, with minimal copayments. During the demonstration period, the program parameters were changed, allowing military retirees age 65 and older to become eligible for TRICARE coverage as of October 1, 2001, and Senior Prime was extended for one year. The demonstration showed that retirees were interested in enrolling in low-cost military health plans and that DOD was able to satisfy its Senior Prime enrollees. By the close of the initial demonstration period, about 33,000 retirees were enrolled in Senior Prime, and more were on waiting lists. When nonenrollees were asked why they did not join Senior Prime, more than 60 percent said that they were satisfied with their existing health coverage; few said that they disliked military care. Although the demonstration had positive results for enrollees, it also highlighted three challenges confronting the military health system in managing patient care and costs. First, care needs to be managed more efficiently. Although DOD satisfied enrollees and gave them good access to care, it incurred high costs. Second, DOD's efforts were hindered by limitations in its data and data systems.
Finally, the demonstration illustrated the tension between the military health system's commitment to support military operations and promote the health of active-duty personnel and its commitment to provide care to dependents of active-duty personnel, retirees and their families, and survivors. |
U.S. currency, reportedly the most widely held currency in the world, is susceptible to counterfeiting. High foreign inflation rates and the relative stability of the dollar have contributed to the increasing use of U.S. currency outside the United States. Of the $380 billion of U.S. currency in circulation, the Federal Reserve estimates that over 60 percent may be held outside the United States. The widespread use of U.S. currency abroad, together with the outdated security features of the currency, makes it a vulnerable target for international counterfeiting. Excluding two changes introduced in 1990, the overt security features of the currency have not substantially changed since 1929. As a result, the U.S. dollar has become increasingly vulnerable to counterfeiting. Widespread counterfeiting of U.S. currency could undermine confidence in the currency and, if done on a large enough scale, could even have a negative effect on the U.S. economy. The United States benefits from the international use of its currency. When U.S. currency remains in circulation, it essentially represents an interest-free loan to the U.S. government. The Federal Reserve has estimated that the existence of U.S. currency held abroad reduces the government's need to borrow by approximately $10 billion a year. The Treasury, including the Secret Service and the Bureau of Engraving and Printing, and the Federal Reserve have primary responsibilities for addressing the counterfeiting of U.S. currency. The Secretary of the Treasury is responsible for issuing and protecting U.S. currency. The Secret Service conducts investigations of counterfeiting activities and provides counterfeit-detection training. The Secret Service is also the U.S. agency responsible for anticounterfeiting efforts abroad. The Bureau of Engraving and Printing designs and prints U.S. currency and incorporates security features into the currency.
The Federal Reserve’s role is to distribute and ensure the physical integrity, including the authenticity, of U.S. currency. A diverse group of perpetrators uses a variety of methods to counterfeit U.S. currency. And, although counterfeiting is carried out primarily for economic gain, it is sometimes linked with other more nefarious criminal endeavors, such as drug trafficking, arms dealing, and alleged terrorist activities. According to law enforcement officials, counterfeiters run the gamut from office workers to organized crime and terrorist groups, and the equipment used for counterfeiting U.S. currency ranges from photocopiers to sophisticated offset presses. Moreover, the quality of counterfeit notes varies significantly. Even those notes made using the same method vary according to the sophistication of the perpetrator and the type of equipment used. Of increasing concern is the fact that certain foreign counterfeiters are becoming extremely sophisticated and are now producing very high-quality counterfeit notes that are more difficult to detect than any previous counterfeits. The highest-quality family of counterfeits known today is commonly referred to as the Superdollar. While many allegations have been made about the Superdollar, little evidence in support of these allegations has been made public. In the Middle East, a group, allegedly a foreign government, is said to be sponsoring production of the Superdollar. According to reports by the House Republican Task Force on Terrorism and Unconventional Warfare, the Superdollar is printed in the Middle East on “high-tech state-owned presses with paper only acquired by governments.” Also according to the task force, the Superdollar is “designed for direct infiltration into the U.S. 
banking system and has become a major instrument in facilitating the flow of militarily useful nuclear materials and equipment and various weapons systems.” A few of the foreign law enforcement and financial institution officials we spoke with believed the Superdollar was being circulated through various terrorist organizations around the world. This belief was primarily based on reports of detections involving individuals with links to terrorist organizations. However, according to the Secret Service, the task force has provided almost no evidence to support its allegations. According to the Treasury, no evidence exists to show that the Superdollar is printed with paper acquired only by governments or that it is designed for direct infiltration into the U.S. banking system. The Treasury also maintained that support for the remaining allegations concerning the Superdollar was inconclusive. Furthermore, although the task force reported that between $100 million and several billion dollars’ worth of Superdollars are in circulation, it provided no evidence to support these figures. Since the Superdollar’s initial detection in fiscal year 1990, Superdollar detections have represented a small portion of total counterfeit currency detections, according to the Treasury and Secret Service. While high-quality counterfeit notes, such as the Superdollar, have received the most attention from the media, Treasury officials told us that their biggest concern was the rapid advances in photographic and printing devices. According to a 1993 National Research Council report requested by the Treasury, the counterfeiting problem will increase as these technologies improve and are made more accessible to the public. The Treasury has planned to combat such counterfeiting through changes to the U.S. currency design, expected to be introduced in March 1996. The criminal nature of the activity precludes determination of the actual extent to which U.S. currency is being counterfeited abroad.
The best data available to reflect actual counterfeiting are Secret Service counterfeit-detection data. Using these data, Treasury officials concluded that counterfeiting of U.S. currency was economically insignificant. Secret Service officials told us that they supplemented the counterfeit-detection data that they gathered with intelligence information and field experience and that these data demonstrated an increase in counterfeiting activity abroad. However, our analysis of the same counterfeit-detection data proved inconclusive. Secret Service data have limitations and thus provide only a limited measure of the extent of counterfeiting activities. Foreign officials’ views about the seriousness of the problem of counterfeit U.S. currency were mixed. On the basis of the number of Secret Service counterfeit detections, Treasury officials concluded that counterfeiting of U.S. currency was economically insignificant and thus did not pose a threat to the U.S. monetary system. According to Secret Service and Treasury officials, detected counterfeits represented a minuscule portion of U.S. currency in circulation. Secret Service and Federal Reserve data showed that, in fiscal year 1994, of the $380 billion in circulation, $208.7 million had been detected as counterfeit notes. This figure represented less than one one-thousandth of the currency in circulation. However, while Treasury and Secret Service officials agreed that, overall, counterfeiting was not economically significant, they considered any counterfeiting to be a serious problem. The Secret Service used counterfeit-detection data, supplemented with intelligence information and field experience, to report that counterfeiting of U.S. currency abroad was increasing. In one analysis, it reported that the amount of counterfeit currency detected abroad increased 300 percent, from $30 million in fiscal year 1992 to $121 million in fiscal year 1993, thereby surpassing domestic detections in the same period. 
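The magnitudes cited above can be checked with back-of-the-envelope arithmetic. The sketch below uses only figures quoted in this statement; it is an illustration of the calculations, not additional data:

```python
# Back-of-the-envelope checks of figures quoted in this statement.

circulation = 380e9      # U.S. currency in circulation, FY 1994 ($)
detected = 208.7e6       # counterfeit notes detected, FY 1994 ($)

fraction = detected / circulation
# "less than one one-thousandth" of the currency in circulation
assert fraction < 1 / 1000

# Counterfeit detections abroad rose from $30 million (FY 1992)
# to $121 million (FY 1993) -- roughly a 300-percent increase.
abroad_fy92 = 30e6
abroad_fy93 = 121e6
pct_increase = (abroad_fy93 - abroad_fy92) / abroad_fy92 * 100

print(f"{fraction:.5f} of circulation; {pct_increase:.0f}% increase abroad")
# → 0.00055 of circulation; 303% increase abroad
```

The exact increase works out to about 303 percent, consistent with the roughly 300 percent the Secret Service reported.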
The Secret Service has also reported that, in recent years, a larger dollar amount of the notes detected in circulation domestically has been produced outside the United States. Since 1991, the dollar amount of counterfeit U.S. notes detected while in circulation and produced abroad has exceeded the dollar amount of those produced domestically. In fiscal year 1994, these foreign-produced notes represented approximately 66 percent of total counterfeits detected in circulation domestically. The true dimensions of the problem of counterfeiting of U.S. currency abroad could not be determined. The Treasury and the Secret Service use Secret Service counterfeit-detection data to reflect the actual extent of counterfeiting. However, although these data are the best available, they have limitations. Specifically, they are incomplete and present only a partial picture of counterfeiting. If these limitations are not disclosed, the result may be misleading conclusions. First of all, the actual extent of counterfeiting could not be measured, primarily because of the criminal nature of this activity. Secret Service data record only those detections that are reported to the Secret Service; they do not measure actual counterfeiting. As a result, the data provide no information about the number of counterfeiters operating in any given year or the size and scope of their operations. More importantly, these data could not be used to estimate the volume of counterfeit currency in circulation at any point in time. In the case of counterfeit currency appearing abroad, reasons for this include the following: (1) the data do not distinguish between how much counterfeit currency was seized and how much was passed into circulation; (2) the data could not provide information about how long passed counterfeits remained in circulation before detection; and (3) most critically, the data provide no indication of how much counterfeit currency was passed into circulation and not detected. 
Second, counterfeit-detection data may in part simply reflect where the Secret Service focuses its efforts. Use of these data thus may not identify all countries with major counterfeiting activity, but simply those where agents focused their data-collection efforts. For example, in fiscal year 1994, almost 50 percent of detections abroad occurred in the six countries where the Secret Service was permanently located. In other countries, counterfeit-detection statistics tend to be more inconsistent. Third, detection data for high-quality notes may be underreported. The Secret Service has said that the small number of Superdollars detected indicates that there are not many in circulation. However, according to the House Republican Task Force on Terrorism and Unconventional Warfare reports, the majority of Superdollars are circulating outside the formal banking system and therefore would not be reported to the Treasury if detected. Also, as we discovered on our overseas visits, many foreign law enforcement and financial organization officials had inconsistent and incomplete information on how to detect the Superdollar. Thus, financial institutions abroad may be recirculating the Superdollars. Fourth, reported increases in counterfeiting abroad, as supported by Secret Service detection data, may be based on a number of factors other than increased counterfeiting activity. For example, in 1993, the Secret Service changed its reporting practices abroad to be more proactive in collecting counterfeit-detection data. Instead of relying solely on reports from foreign officials, agents abroad began to follow up on Interpol reports and intelligence information in order to collect additional data. Also, according to Treasury officials, foreign law enforcement officials have improved their ability to detect counterfeit U.S. currency and report it to the Secret Service.
Furthermore, the increase in domestic detections of counterfeits produced abroad is also subject to interpretation. For example, rather than foreign-produced notes increasing, it is possible that the Secret Service’s ability to determine the source of counterfeit currency has simply improved over time. Fifth and finally, counterfeit-detection data fluctuate over time, and one large seizure can skew the data, particularly for detections abroad. For example, according to the Secret Service, several large seizures accounted for the jump from $14 million in counterfeit detections abroad in fiscal year 1988 to $88 million in fiscal year 1989. The following year, the data indicated a significant drop in detections. For detections outside the United States, the Secret Service has relied heavily on information provided by foreign law enforcement organizations, and has obtained little information from financial organizations. According to Secret Service officials, they supplemented their counterfeiting detection data with knowledge their agents gained through field experience and the sharing of intelligence information. Some of this information was not available or was considered too sensitive for an unclassified report. Our work did yield some information on the unclassified activities. For example, the Secret Service told us that it was conducting vault inspections during its joint international study team visits with Treasury and Federal Reserve officials. According to a Secret Service agent who performs the vault inspections, they include the checking of all U.S. currency in the vault for counterfeits. According to Federal Reserve and Secret Service officials, vault inspections had been conducted in only one of the six locations the Secret Service visited during the time of our review. Secret Service officials told us that the inspections had been conducted only in Argentina and were discontinued because of the limited results obtained there. 
The officials told us that the inspections might be reinstituted in other countries if it was decided that the effort was warranted. Overseas law enforcement and financial organization officials’ views on the extent of the problem of counterfeit U.S. currency varied. Foreign law enforcement officials tended to be more concerned about counterfeit U.S. currency than foreign financial organization officials. Financial organization officials we met with said that they had experienced minimal chargebacks, and most expressed confidence in the ability of their tellers to detect counterfeits. Furthermore, we heard few reports from foreign financial organization and foreign law enforcement officials about U.S. currency not being accepted overseas because of concerns about counterfeiting. Most foreign law enforcement officials we spoke with believed that the counterfeiting of U.S. currency was a problem, but their opinions on the severity of the problem differed. Swiss, Italian, and Hungarian law enforcement officials said that it was a very serious problem. French and English law enforcement officials said that the problem fluctuated in seriousness over time. And German, French, and Polish officials said that the counterfeiting of U.S. currency was not as serious a problem as the counterfeiting of their own currencies. Some of these law enforcement officials expressed concern over increases in counterfeiting in Eastern Europe and the former Soviet Union. Some also expressed particular worry about their ability, and the ability of financial organizations in their countries, to detect the Superdollar. Conversely, most foreign financial organization officials we spoke with were not concerned about the counterfeiting of U.S. currency. Of the 34 organizations we visited in 7 countries, officials from 1 Swiss and 1 French banking association and 2 Hungarian banks viewed the counterfeiting of U.S. currency as a current or increasing problem. 
According to other foreign financial organization officials, they were not concerned about U.S. counterfeiting activity because it did not have a negative impact on their business. For example, none of the 16 financial organization officials with whom we discussed chargebacks told us that they had received substantial chargebacks due to counterfeit notes that they had failed to detect. In addition, some of these officials cited other types of financial fraud and the counterfeiting of their own currency as more significant concerns. For example, officials from one French banking association were more concerned with credit card fraud, and officials from two financial organizations in Germany and one financial organization in France said counterfeiting of their country’s currency was a greater problem. Furthermore, foreign financial organization officials we spoke with were confident about their tellers’ ability to detect counterfeits and, in some countries, tellers were held personally accountable for not detecting counterfeits. In most of the countries we visited, detection of counterfeit U.S. currency relied on the touch and sight of tellers, some of whom were aided by magnifying glasses or other simple detection devices, such as counterfeit detection pens. Other counterfeit-detection devices used abroad, like ultraviolet lights, did not work effectively on U.S. currency. While foreign financial organizations appeared confident of their tellers’ ability to detect counterfeits, some of these organizations had incomplete information on how to detect counterfeit U.S. currency, particularly the Superdollar. Finally, foreign financial organization and law enforcement officials provided a few isolated cases in which U.S. currency was not accepted abroad. For example, when it first learned about the Superdollar, one U.S. financial organization in Switzerland initially stopped accepting U.S. $100 notes, although it later resumed accepting the U.S. 
notes from its regular customers. Also, Swiss police and Hungarian central bank and French clearing house officials reported that some exchange houses and other banks were not accepting $100 notes. We were unable to confirm these reports. However, a State Department official commented that, because drug transactions tended to involve $100 notes, some foreigners were reluctant to accept this denomination, not because of counterfeiting concerns, but rather because of the notes’ potential link to money laundering. The U.S. government, primarily through the Treasury Department and its Secret Service and the Federal Reserve, has been increasing its counterfeiting deterrence efforts. These efforts include redesigning U.S. currency; increasing exchanges of information abroad; attempting to increase the Secret Service presence abroad; and attempting to stop production and distribution of counterfeit currency, including the Superdollar. To combat counterfeiting both domestically and abroad, the Treasury is redesigning U.S. currency to incorporate more security features intended to combat rapid advances in reprographic technology. This change, the most significant in over 50 years, is long overdue, according to some U.S. and foreign officials. The redesigned currency is planned for introduction in 1996 starting with changes to the $100 note, with lower denominations to follow at 9- to 12-month intervals. According to Treasury officials, the currency redesign will be an ongoing process, because no security features are counterfeit-proof over time. These officials also said that the old currency would not be recalled and would retain its full value. Moreover, the Treasury is leading a worldwide publicity campaign to facilitate introduction of the redesigned currency, ensure awareness and use of the overt security features, and assure the public that the old currency will still be accepted in full. 
Through this campaign, the Federal Reserve hopes to encourage the public to turn in old currency for the redesigned notes. In addition, the Secret Service, through its team visits abroad in company with Treasury Department and Federal Reserve officials, has gathered further information on counterfeiting and provided counterfeit-detection training. As of May 1995, the team had met with law enforcement and financial organization officials in Buenos Aires, Argentina; Minsk, Belarus; London, England; Zurich, Switzerland; Hong Kong; and Singapore. According to Secret Service officials, their visits were successful because they were able to develop better contacts, obtain further information about foreign financial institutions’ practices, learn more about tellers’ ability to detect counterfeits, and provide counterfeit-detection training seminars for both law enforcement and financial organization officials. Since May 1995, the team has taken initial trips to Moscow, St. Petersburg, and Novgorod (Russia); Ankara and Istanbul, Turkey; Cairo, Egypt; Bahrain; Abu Dhabi; Dubai; and Riyadh, Saudi Arabia. Further, the Secret Service has been attempting to increase its presence abroad, although it has encountered difficulties in obtaining approval. The Secret Service has over 2,000 agents stationed in the United States, but it has fewer than 20 permanent positions abroad. The Secret Service first requested additional staff in February 1994 for permanent posting abroad beginning in fiscal year 1996. However, due to uncertainties about the funding of the positions and to other priorities within the Treasury Department, as of June 21, 1995, the Secret Service had secured approval for only 6 of 28 requested positions abroad. After our discussions with the Secret Service, the Treasury, and State, on July 21, 1995, the Treasury approved the remainder of the positions and sent them to the State Department for approval. 
As of November 30, 1995, the respective State Department chiefs of mission had approved only 13 of the 28 positions, and only 1 agent had reported to his post abroad. The U.S. government has undertaken special efforts to eradicate the highest-quality counterfeit note—the Superdollar. These efforts include an interagency task force led by the Secret Service, an overseas Secret Service task force, and diplomatic efforts between senior policy level officials of the involved countries. Due to the sensitivity and ongoing nature of this investigation, we were made generally aware of these efforts but were not given specific information. In a February 1994 Secret Service request to the Treasury for funding under the 1994 Crime bill, the Secret Service stated that, for the past 4 years, it had spearheaded a multiagency effort to suppress the most technically sophisticated note detected in the history of that agency. According to the request, this initiative has prompted an unprecedented forensic effort, utilizing the resources of the Secret Service, other government offices, and several national laboratories. The efforts of senior policy level officials in the U.S. government involve ongoing diplomatic contacts concerning the Superdollar with Middle Eastern government officials, according to a State Department official. This official said that, in May 1995, our government asked these foreign governments to provide a show of good faith in improving relations by locating the printing plants and perpetrators involved in producing the Superdollar. He added that these efforts did not specifically implicate these governments in the production of the Superdollar, but that, at a minimum, they were believed to be tolerating this illegal activity within their borders. U.S. 
and Interpol officials we interviewed stated that final resolution of cases similar to that of the Superdollar, should such cases occur, was beyond the purview of law enforcement and would require diplomatic solutions. According to U.S. and Interpol officials, jurisdictional constraints may prevent law enforcement agencies from dealing effectively with cases of foreign-condoned or -sponsored counterfeiting of U.S. currency. In such cases, the Secret Service would only be able to identify and assist in suppressing the distribution of the counterfeit notes. In countries where the United States has no diplomatic relations, U.S. law enforcement has no leverage to help deter counterfeiting. U.S. and Interpol officials agreed that the decision on how to suppress a foreign government-condoned or government-sponsored counterfeiting plant would need to be made at a senior U.S. government level. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or the Subcommittee may have. | GAO discussed U.S.
efforts to combat international counterfeiting of U.S. currency. GAO noted that: (1) U.S. currency is vulnerable to international counterfeiting because it is widely used abroad and lacks updated security features; (2) counterfeiters range from office workers to organized crime and terrorist groups using equipment ranging from simple photocopiers to sophisticated offset presses; (3) the U.S. government is particularly concerned about a high-quality counterfeit note known as the "Superdollar" and rapid advances in photographic and printing devices; (4) U.S. agencies' and foreign governments' views on the extent and significance of counterfeit U.S. notes vary, and U.S. counterfeit-detection activities are limited and inconclusive; and (5) to deter international counterfeiting, the Department of the Treasury is redesigning U.S. currency to incorporate more security features, the Secret Service has gathered additional information on counterfeiting and provided counterfeit-detection training, and the U.S. government is using international and interagency task forces and diplomatic efforts to eradicate the Superdollar. |
When BLM or the Forest Service estimates a parcel’s fair market value, they generally obtain an appraisal that complies with federal appraisal standards. According to the standards, fair market value is defined as the amount for which a property would be sold—for cash or its equivalent— by a willing and knowledgeable seller with no obligation to sell, to a willing and knowledgeable buyer with no obligation to buy. The standards require an appraiser to first identify the property’s “highest and best use,” which is defined as the use that is physically possible, legally permissible, financially feasible, and maximally profitable for the owner. The appraiser must estimate the property’s value using at least one of three approaches: (1) the sales comparison approach, which compares the property with other properties that have been sold; (2) the income approach, which applies a capitalization rate to the property’s potential net income; or (3) the cost approach, which adds the estimated value of the land to the current cost of replacing any improvements (such as buildings). The sales comparison approach is generally considered to be the most reliable when sufficient market data are available; it considers various factors—such as the location, size and other physical characteristics, and uses of the properties—to estimate the extent of comparability between the property being appraised and the comparable properties. On the basis of the prices of the properties that are judged to be the most comparable, the appraiser then estimates the fair market value of the property being appraised. Federal appraisal standards generally address appraisal procedures and documentation rather than outcomes. 
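The income and cost approaches described above reduce to simple formulas. The sketch below illustrates them; all dollar amounts and the capitalization rate are hypothetical, chosen only for illustration, and are not drawn from the report:

```python
# Illustrative (hypothetical) figures for the two formula-based
# appraisal approaches described in the federal appraisal standards.

# Income approach: apply a capitalization rate to potential net income.
net_income = 50_000          # hypothetical annual net income ($)
cap_rate = 0.08              # hypothetical capitalization rate
income_value = net_income / cap_rate        # 625,000.0

# Cost approach: estimated land value plus current cost of
# replacing any improvements (such as buildings).
land_value = 400_000         # hypothetical land value ($)
replacement_cost = 225_000   # hypothetical cost to replace buildings ($)
cost_value = land_value + replacement_cost  # 625,000

print(income_value, cost_value)
```

The sales comparison approach, by contrast, is judgmental rather than formulaic: it adjusts the prices of comparable sold properties for differences in location, size, and use, which is why it is preferred when sufficient market data exist.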
The standards explicitly allow for the application of professional judgment in estimating a property’s fair market value: “The appraiser should not hesitate to acknowledge that appraising is not an exact science and that reasonable men may differ somewhat in arriving at an estimate of the fair market value.” Before either agency uses an appraised value, an agency appraiser must review the appraisal report, assure it complies with federal appraisal standards, and approve it for agency use. Four key statutes authorize BLM to transfer land. Under these statutes, BLM transferred about 79,000 acres for about $3 million during fiscal years 1991 through 2000: about 13,000 acres under the Desert Land Act; about 42,000 acres under the Recreation and Public Purposes Act (R&PPA); about 4,000 acres under the Color-of-Title acts; and about 20,000 acres under the Southern Nevada Public Land Management Act (SNPLMA). The Forest Service did not transfer land during our study period, although it recently received authority to do so under the Education Land Grant Act. Enacted in 1877, the Desert Land Act authorizes BLM to transfer arid western land to applicants who have made efforts to reclaim, irrigate, and cultivate it, for $1.25 per acre. Applicants must first identify such land— limited to 320 acres per application—as suitable for agriculture and incapable of producing crops without irrigation; in addition, the land generally cannot have minerals or timber. Among other requirements, applicants must hold a legal right to the water they plan to use for irrigation and prove that they have expended at least $3 per acre in reclamation, irrigation, and cultivation. 
According to BLM, identifying federal land that could be acquired under this statute is now difficult for several reasons, including the following: most of the arid western land that is suitable for agricultural development is now privately owned, the amount of water available for irrigation is now limited, and the costs of developing irrigation projects are now high—roughly $250,000 for a 320-acre parcel. In addition, the application process is time-consuming for applicants and agency officials, sometimes taking 10 or more years to complete. Under the Desert Land Act, BLM transferred about 13,000 acres from fiscal year 1991 through fiscal year 2000 and received about $15,000. Figure 1 shows the acres transferred under the Desert Land Act, and the amount received, annually for this 10-year period. The number of acres transferred annually ranged from about 300 acres in fiscal year 1999 to about 3,100 acres in fiscal year 1996. R&PPA authorizes BLM to transfer land to state governments, local governments, and nonprofit organizations, if the land will be developed and used for recreational or public purposes, upon application from any of these entities. Prices for this land are set in the act or by the agency at less than fair market value, depending on the type of entity applying and the purpose for which the land will be used, as shown in table 1. Before BLM transfers land to an applicant under R&PPA, the agency is authorized and generally prefers to enter first into a multi-year lease with the applicant. Such leases help the agency to assure that applicants develop their proposed projects as planned and in a timely manner. When BLM transfers land under R&PPA, the agency restricts the deed to require that the parcel continue to be used for the stated purpose and not be transferred to another owner without BLM’s consent. 
If these deed restrictions are violated, BLM generally requires that the owner take corrective action—such as returning the parcel to its stated purpose—or transfer the parcel back to BLM. In cases when the owner wants to continue to use the parcel in a way that violates the deed restrictions, BLM often agrees to take back the parcel and then sell it to the former owner for its current appraised value. To assure that the deed restrictions are met, BLM’s policy is that field offices should visually inspect each transferred parcel at least once every 5 years. For example: BLM field office staff inspected a 160-acre parcel in Idaho that had been transferred to a nonprofit group to develop and use as a trap-shooting and rifle range. In their inspection, staff found an occupied trailer, an abandoned car, miscellaneous garbage, and weeds growing in the area that was to be cleared—all in violation of the deed restriction. BLM staff subsequently directed the group to take specific actions to correct the violations. Field office representatives told us that limited staff resources and higher work priorities preclude them from inspecting all transferred parcels every 5 years. BLM’s automated public lands database shows that field office staff visited only about 40 percent of the 277 parcels with restricted deeds that were transferred in the period from fiscal years 1991 through 2000; monitoring visits were done (or scheduled) at intervals longer than 5 years for some parcels but were not done (or scheduled) at all for other parcels. To address this problem, BLM offices have adopted alternative approaches: for example, one field office hired a summer intern to inspect transferred parcels; another requested additional funds to hire contractors to do these inspections; yet another used a volunteer. 
However, BLM has not assessed the feasibility of other less costly approaches, such as requiring owners of transferred parcels to document their compliance with deed restrictions by submitting periodic reports and/or photographs of their land; field office staff could review these documents and inspect parcels as needed—for example, if the documents were not submitted on time or appeared to show noncompliance. Under R&PPA, BLM transferred about 42,000 acres during fiscal years 1991 through 2000 and received a total of about $2.6 million: about 22,000 acres were transferred to state or local governments for historic monument or recreation purposes (at no cost); about 17,000 acres to state or local governments for other government-controlled purposes that serve the general public (for the greater of $10 per acre or $50 total); and the remaining approximately 3,000 acres to state or local governments or nonprofit organizations for other public purposes (for a percentage of the appraised value). Figure 2 shows the acres transferred under R&PPA, and the amount received, annually for fiscal years 1991 through 2000. The number of acres transferred under R&PPA ranged from about 900 in fiscal year 1993 to about 6,500 in fiscal year 1995. The amount received from R&PPA transfers ranged from about $9,000 in fiscal year 1993 to almost $1.6 million in fiscal year 1999. The significant increase in that year was due primarily to transfers of three parcels in Las Vegas, Nevada, to churches, at 50 percent of their appraised values—one parcel yielded over $250,000 and two parcels yielded more than $500,000 each. BLM is also authorized to transfer land under the Color-of-Title Act and several other laws that the agency collectively refers to as the Color-of-Title acts. Most of the land is transferred under Class I claims made under the Color-of-Title Act. Class I claims are those made by applicants who have valid reasons to believe that they already owned the land. 
Applicants must identify the land (limited to 160 acres per applicant), show that they (or their ancestors or the prior owners) held title to the land for more than 20 years without knowing it was in fact federally owned, and show that they placed valuable improvements on the land or cultivated it. By law, the price of the land is based on the appraised value—net of the value of improvements or development—discounted to reflect an applicant’s equities; if this calculation results in a price below $1.25 per acre, the minimum price is set at $1.25 per acre. The Color-of-Title Act does not define “equities.” However, under BLM’s policy, determining an applicant’s equities may include considering factors such as the longevity of the applicant’s claim, whether the applicant paid fair market value for the land, the origin of the errors that initiated the chain of title, and whether taxes on the land have been paid. BLM has no guidance on how to quantify these factors or use them to set prices. In the absence of such guidance, field offices have developed inconsistent practices that, ironically, can lead to inequities in the prices paid by applicants. Such inconsistencies were noted by the Interior Board of Land Appeals in a 1984 decision that criticized the department for failing to provide or require a specific approach for estimating applicants’ equities. The Board found that a BLM field office had erroneously used the date of the application—rather than the earlier date that the applicant became aware the land was federally owned—to determine the longevity of the claim. The Board pointed out that using the later application date had the effect of increasing the equities for applicants who deferred filing their claims, while penalizing applicants who took immediate steps to resolve their claims. 
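The statutory pricing rule just described can be restated as a short calculation. This is an illustration only: the function name is hypothetical, and the equity discount is a made-up input, since neither the Act nor BLM guidance specifies how an applicant's equities are to be quantified.

```python
def color_of_title_price(acres, appraised_value, improvements_value, equity_discount):
    """Sketch of the Color-of-Title Act pricing rule.

    The price starts from the appraised value net of the value of the
    applicant's improvements or development, is discounted to reflect the
    applicant's equities (equity_discount is a hypothetical fraction from
    0.0 to 1.0), and is floored at the statutory minimum of $1.25 per acre.
    """
    base = appraised_value - improvements_value   # net of improvements
    discounted = base * (1.0 - equity_discount)   # discounted for equities
    statutory_floor = 1.25 * acres                # $1.25-per-acre minimum
    return max(discounted, statutory_floor)

# A 160-acre claim appraised at $20,000 with $8,000 of improvements and a
# hypothetical 50 percent equity discount prices at $6,000, well above the
# $200 statutory floor (160 acres x $1.25).
print(color_of_title_price(160, 20_000, 8_000, 0.50))  # → 6000.0
```

The floor matters in practice: for a heavily discounted low-value claim, the discounted figure can fall below $1.25 per acre, and the statutory minimum then governs.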
Despite this decision, the same field office made the same error when resolving a claim in 1999; field office representatives told us that they were unaware of the Board’s decision on this issue. BLM did not always use appraised value as a basis for determining the price of land to be transferred under the Color-of-Title authority, generally because field office representatives thought that the appraisals would have cost more than the land was worth. In addition, BLM did not consistently apply the Color-of-Title eligibility requirements. For example: BLM transferred 26 acres in Nebraska to a power company, using the parcel’s value as assessed by the county for tax purposes, which was $164 per acre, rather than obtaining an appraisal. The field office representative who decided to use the assessed value told us that he did not know that the company had stated in its application that it had paid about $1,000 per acre for the land in 1977. After deducting the applicant’s equities from the assessed value, BLM determined that the minimum price would apply. BLM transferred 2.2 acres that abutted the rear boundaries of 15 residential and other lots near Ruby Ridge, Idaho, for the minimum price rather than obtaining an appraisal. BLM transferred 1.4 of the 2.2 acres to individuals who were ineligible: three landowners had recently bought second parcels and were aware of the unclear titles, and four individuals had stated in their applications that their parcels had no valuable improvements. BLM field office representatives saw this situation as an opportunity to work cooperatively with the community—the Color-of-Title issue had become locally well known and contentious—and to demonstrate that a federal agency could be a good neighbor. Under the Color-of-Title acts, BLM transferred about 4,000 acres during fiscal years 1991 through 2000 and received a total of about $300,000. 
Figure 3 shows the acres transferred under these acts, and the amount received, annually for fiscal years 1991 through 2000. Enacted in 1998, SNPLMA authorizes BLM to transfer specific parcels of federal land around the McCarran International Airport in Clark County, Nevada, to the county government at no cost. The law also allows the county to sell or lease this land at fair market value. If the county does sell or lease this land, 85 percent of the gross proceeds are deposited into the Treasury for Interior to use for such purposes as acquiring environmentally sensitive land in Nevada; developing or improving parks, trails, and natural areas in the county; developing a multi-species habitat conservation plan in the county; and reimbursing BLM for any administrative costs incurred by its local offices related to sales under this act. In addition, SNPLMA authorizes BLM to make other transfers, such as land to Clark County for a youth activities facility and land to the state or local governments for affordable housing. Under SNPLMA, BLM transferred about 20,000 acres in fiscal years 1999 (the first fiscal year covered by the act) and 2000, receiving no payment for these transfers. However, according to BLM officials, the agency has received about $18 million from Clark County for transferred land that the county subsequently sold or leased. Enacted in December 2000, the Education Land Grant Act authorizes the Forest Service to transfer land upon application—up to 80 acres per application, but no more than reasonably necessary—to public school districts to use for educational purposes under certain circumstances. The statute requires the land to be transferred at a nominal cost, and Forest Service representatives told us that they are still considering how they will determine the cost. In addition, the law requires that transferred land continue to be used for the stated purpose and remain in the applicant’s ownership or else be transferred back to the Forest Service. 
The Forest Service did not transfer any land under this statute during our review period. Both BLM and the Forest Service are authorized to sell land. BLM sold about 56,000 acres in the period extending from fiscal year 1991 through fiscal year 2000, and received about $74 million, under three key statutes: about 55,000 acres under the Federal Land Policy and Management Act (FLPMA), about 600 acres under the Santini-Burton Act, and about 100 acres under SNPLMA. The Federal Land Transaction Facilitation Act, enacted in July 2000, authorizes BLM to use the proceeds when it sells land under FLPMA. The Forest Service, in contrast, sold about 2,000 acres during this same 10-year period, and received about $5 million, under two key statutes: about 800 acres under the Townsite Act and about 1,200 acres under the Small Tracts Act. In addition, the Congress recently authorized the Forest Service to competitively sell specific parcels in certain forests—and to use the proceeds for specific purposes—under several statewide forest improvement acts, but the agency did not sell land under these statutes during our review period. FLPMA authorizes BLM to sell land that the agency has determined through its land-use planning process to be (1) difficult and uneconomic to manage, (2) no longer required for any federal purpose, or (3) better able to serve public objectives if it were not federally owned, assuming other criteria are met as well. Buyers must meet several requirements, and BLM must receive at least fair market value for the land; under BLM’s regulations, fair market value is estimated by appraisals that meet federal appraisal standards and are reviewed and approved by the agency. All sale proceeds are deposited into the Treasury. Under this act, the agency must offer the land for sale under competitive bidding procedures—public auction—unless specific equity or public policy considerations support noncompetitive procedures. 
For example, BLM could decide to give preference to the state or local government in which the parcel is located, a parcel’s current user, an adjoining landowner, or another person. Aside from public auctions, sales are generally requested by potential buyers who apply to buy specific parcels. The agency can respond to such applications by (1) using modified competitive bidding procedures—i.e., offering the parcel for sale under competitive bidding procedures and allowing the applicant to match the highest bid received—or (2) selling directly (noncompetitively) to the applicant. Under FLPMA, BLM sold about 55,000 acres during fiscal years 1991 through 2000 and received about $38 million. Figure 4 shows the acres sold under FLPMA, and the amount received, annually for fiscal years 1991 through 2000. Over this 10-year period, BLM sold about 24,000 acres (roughly 45 percent of the total) in 336 competitive sales. The remaining 31,000 acres (about 55 percent of the total) were sold in 495 noncompetitive sales. Figure 5 shows the number of competitive and noncompetitive sales BLM made under FLPMA annually during this same 10-year period. The Santini-Burton Act authorizes BLM to sell land—up to 700 acres per year—that is located within a defined area of Las Vegas, Nevada, to allow for more orderly community development. When BLM sells this land, it must follow FLPMA requirements and receive at least fair market value. The law reserves 85 percent of the sale proceeds, which are deposited into the Treasury, to be used to repay the Forest Service for acquiring environmentally sensitive land around Lake Tahoe. Under the Santini-Burton Act, BLM sold about 600 acres during fiscal years 1991 through 2000 and received about $27 million; of this acreage, all but 20 acres were sold competitively. About three-quarters of the land was sold in fiscal year 1991 for about $16 million, and there were no reported sales in fiscal years 1994, 1996, 1999, or 2000. 
SNPLMA authorizes BLM to sell additional land—about 27,000 acres—within a defined area of Las Vegas, Nevada. When BLM sells this land, it must follow FLPMA requirements and receive at least fair market value. The law reserves 85 percent of the sale proceeds, which are deposited into the Treasury, for Interior to use for such purposes as acquiring environmentally sensitive land in Nevada; developing or improving parks, trails, and natural areas in the county; developing a multi-species habitat conservation plan in the county; and reimbursing BLM for any administrative costs incurred by its local offices related to sales under this act. Under SNPLMA, BLM sold about 100 acres in fiscal year 2000—all in competitive sales—and received about $10 million. The Federal Land Transaction Facilitation Act, enacted in July 2000, authorizes the secretaries of Agriculture and the Interior to use the proceeds from selling certain BLM land. In selling this land—which must be located outside of the Las Vegas area of Clark County, Nevada—BLM must follow FLPMA requirements and receive at least fair market value. Sale proceeds are deposited into the Treasury and may be used to buy inholdings—nonfederal land or land interests that lie within the boundary of federally designated areas such as national parks or wildlife refuges—and other nonfederal land that is adjacent to such areas and has exceptional resources. At least 80 percent of the proceeds must be used to buy inholdings, and at least 80 percent of the proceeds generated by selling land in a state must be used within that state. The Townsite Act authorizes the Forest Service to sell land in certain western states to local governments for community purposes, upon application. The application must be for no more than 640 acres and the land must lie adjacent to the community that has applied to buy it. 
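The proceeds-earmarking rules described above reduce to simple fractions of sale proceeds. The sketch below restates them; the percentages come from this report, but the function names are hypothetical and the statutory details beyond the percentages are omitted.

```python
def snplma_treasury_deposit(gross_proceeds):
    """85 percent of gross sale or lease proceeds under SNPLMA (and,
    similarly, under the Santini-Burton Act) is deposited into the
    Treasury for the designated uses described in the report."""
    return 0.85 * gross_proceeds

def fltfa_minimum_for_inholdings(sale_proceeds):
    """Under the Federal Land Transaction Facilitation Act, at least
    80 percent of sale proceeds must be used to buy inholdings."""
    return 0.80 * sale_proceeds

# For a hypothetical $10 million in gross proceeds, the 85 percent rule
# reserves $8.5 million for the Treasury; under FLTFA, $8 million of a
# $10 million sale would be the minimum devoted to inholdings.
print(snplma_treasury_deposit(10_000_000))
print(fltfa_minimum_for_inholdings(10_000_000))
```

Note that FLTFA imposes a second, independent 80-percent constraint (proceeds from sales within a state must largely be spent in that state), which this sketch does not model.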
The Forest Service must determine that the sale will serve community objectives—such as expanding existing economic enterprises, public schools, public health facilities, and recreation areas for local citizens—and that these local objectives outweigh public objectives that may be served by retaining federal ownership. In addition, the agency must receive at least fair market value; under its regulations, this value is estimated through appraisals that meet federal appraisal standards. Under the Townsite Act, the Forest Service sold about 800 acres—nine parcels—during fiscal years 1991 through 2000 and received about $3 million. The Small Tracts Act authorizes the Forest Service to sell certain small parcels—if their value does not exceed $150,000—to applicants, if the sale is not practicable under any other authority. The land must also be: (1) interspersed with or adjacent to land that was transferred out of federal ownership under the mining laws and is no larger than 40 acres (termed “mineral survey fractions”); (2) encroached upon by entities who believed in good faith that they owned the land and mistakenly improved it and is no larger than 10 acres (termed “encroachments”); or (3) a road right-of-way that is substantially surrounded by nonfederal land and not needed by the federal government. When the Forest Service sells this land, by law the agency must receive at least equal value; by regulation, equal value is defined as the appraised value and appraisals must meet federal appraisal standards. The Forest Service’s regulations allow the agency to competitively sell mineral survey fractions and road rights-of-way, if an adjoining landowner has not applied to buy them. However, all completed sales under this authority in calendar year 1999 were direct sales to applicants. The agency did not always follow the law’s requirements. 
For example: An individual applied to the Forest Service to buy 0.4 acres in California, after a new land survey revealed that he had mistakenly built a trailer pad and shed on a national forest. The applicant then sold the land to another individual who continued the application process, and the Forest Service sold the land to the second individual. Furthermore, the Forest Service did not appraise the parcel; instead, a staff member who was not an appraiser estimated its value at $275 on the basis of prices of recently sold properties. Under the Small Tracts Act, the Forest Service sold about 1,200 acres during fiscal years 1991 through 2000 and received about $2 million. Figure 6 shows the acres sold under the Small Tracts Act, and the amount received, annually for fiscal years 1991 through 2000. The acreage sold under the Small Tracts Act has declined since fiscal year 1992. According to Forest Service officials, the agency prefers to exchange land under this act rather than sell it, resulting in more frequent exchanges in recent years. These laws authorize the Forest Service to competitively sell specifically identified properties. The identified lands may have improvements (such as buildings) and are, according to Forest Service officials, typically properties that the agency no longer needs. These laws generally allow the Forest Service to accept cash, other land, existing improvements, or improvements constructed to Forest Service specifications as consideration. Cash proceeds are deposited into the Treasury and the Forest Service may use them for specific purposes, which are often identified in the authorizing act. For example, the Texas National Forest Improvement Act of 2000 authorizes the Forest Service to offer nine specific parcels totaling about 38 acres for competitive sale, using the sale proceeds to acquire, construct, or improve administrative facilities for national forests in Texas or to acquire other land or land interests in Texas. 
Similarly, the Arizona National Forest Improvement Act of 2000 authorizes the Forest Service to competitively offer several parcels totaling more than 550 acres and to use the proceeds to acquire, construct, or improve administrative facilities in national forests in Arizona or to acquire other land or land interests in Arizona. When BLM offered land under competitive bidding procedures, the agency often received prices above the appraised values. The Forest Service did not offer land for sale under competitive bidding procedures. Moreover, some land that BLM sold directly to applicants may have been appropriate to sell competitively. When BLM and the Forest Service sold land noncompetitively, the agencies generally used appraised values as sale prices; however, the appraisals sometimes underestimated the parcel’s fair market value because they did not reflect the buyer’s current or planned use of the land. In a few direct sales, BLM accepted less than the appraised value, although it had no authority to accept less than fair market value. BLM offered land for sale competitively—either through public auctions or modified competitive sales—when agency personnel believed that there might be more than one potential buyer. In these competitive sales, the agency used the appraised value as the minimum acceptable bid. From fiscal years 1991 through 2000, BLM sold about 24,000 acres in competitive sales and received $6 million—or 18 percent—more than appraised values. Similarly, in calendar year 1999, BLM sold about 1,900 acres in 22 competitive sales under FLPMA and received about 20 percent more than appraised values. For example: BLM auctioned about 200 acres under SNPLMA in November 1999 and sold about 100 acres for a total of about $10 million—about 21 percent more than the total appraised value. 
The agency did not receive bids on the remaining acres and did not sell them at this auction. In an effort to resolve a trespass situation in California, in which a desert mining camp established in the 1880s had evolved into a small town, BLM sold the residential lots to the current homeowners or mining claimants for their appraised value of about $500 per acre. BLM subsequently solicited competitive bids on 12 lots not previously sold to homeowners and sold 5: 2 for their appraised value, an 8-acre lot for $1,000 per acre, a 1.4-acre lot for $3,300 per acre, and a 0.5-acre lot for $4,000 per acre. BLM received about $9,400—or 60 percent—more than the total appraised value. A city in Nevada applied to buy 40 acres for a highway beautification project. BLM decided to offer the land for competitive sale and allow the city to match the highest bid; the agency received a high bid of $1.2 million—about 18 percent more than the appraised value—which the city then matched. An adjacent landowner applied to buy 80 acres in Oregon that he had been leasing for grazing. BLM decided to offer the land for competitive sale and allow the landowner to match the highest bid; the agency received a high bid of $20,600—about 78 percent more than the appraised value—which the landowner then matched. Of the 28 parcels that BLM sold noncompetitively under FLPMA in calendar year 1999, at least 11 parcels might have been more appropriately offered for sale competitively. These parcels (in whole or in part) had no continuing current authorized use or improvements, and other potential buyers—such as adjacent landowners—may have been interested in them. BLM could have used competitive bidding procedures and sold to the highest bidder or used modified competitive bidding procedures to allow the applicant to match the highest bid; if there were no other bidders, the applicant would have paid the appraised value. 
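The premiums cited in these examples are simple percentage differences between the sale price and the appraised value. A sketch follows; the roughly $11,600 appraisal in the example is back-calculated from the Oregon figures in the text, not a reported number, and the function name is hypothetical.

```python
def premium_over_appraisal(sale_price, appraised_value):
    """Percent by which a competitive sale price exceeded the appraised value."""
    return 100.0 * (sale_price - appraised_value) / appraised_value

# The Oregon grazing parcel above: a $20,600 high bid against an implied
# appraised value of roughly $11,600 works out to about a 78 percent premium.
print(round(premium_over_appraisal(20_600, 11_600)))  # → 78
```

The same calculation applied to BLM's decade of competitive sales (receipts $6 million above appraised values) yields the 18 percent aggregate premium the report cites.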
BLM did not offer these parcels for sale competitively, however, because agency representatives assumed there were no potential buyers other than the applicants. For example: A nonprofit organization applied to buy 40 vacant acres in Colorado to use as a church camp. The parcel did not have public access, because it was surrounded by land owned by the nonprofit organization. However, the appraisal reported that similarly inaccessible parcels had recently been sold and these new owners had subsequently acquired access rights. Furthermore, the parcel was located in a recreational area—near a ski resort, a national park, and other tourist attractions—where property values were rapidly rising. Although BLM sold the parcel noncompetitively for its appraised value of $126,000, the agency planned to offer it to the public had this sale not been completed—an indication that the parcel might have been more appropriately offered for competitive sale. BLM had allowed a 1¼-mile recreational railroad associated with a tourist attraction in Arizona to be partially built on public land; however, the field office later determined that it had improperly done so. To resolve the situation, BLM sold the developer 40 acres—land under the track plus land extending out to the adjacent landowners’ surrounding properties and the highway. In appraising the 40-acre parcel at an average of $3,700 per acre, the appraiser determined the acres fronting the highway to be more valuable because they could be commercially developed. At a minimum, these frontage acres might have been more appropriately offered for competitive sale. Under the Small Tracts Act, the Forest Service’s regulations allow mineral survey fractions and road rights-of-way to be sold competitively unless an adjacent landowner has applied to buy them. 
Of the 27 parcels that the Forest Service sold under the Small Tracts Act in calendar year 1999, 10 were mineral survey fractions or road rights-of-way; 7 of these parcels might have been more appropriately offered for sale competitively if not for the exception to competitive sales in this regulation. These parcels (in whole or in part) had no continuing current authorized use or improvements, and other potential buyers—such as other adjacent landowners—may have been interested in them. For example: A private landowner in Montana discovered that he had mistakenly built his residence—a house and garage—on the forest. Although this was an encroachment situation, the Forest Service treated it as a mineral survey fraction because the encroached area was a small part of an 8-acre mineral survey fraction. The landowner applied to buy all 8 acres or any reasonable portion of the parcel. Other private parties also owned land adjacent to the 8-acre parcel and might have been potential buyers for the unoccupied portion. Instead of selling the applicant only the encroached area, and selling the remaining land competitively, the Forest Service sold all 8 acres to the applicant for the appraised value of $8,600. When BLM and the Forest Service sold land directly (noncompetitively) to applicants—usually the current user of the land or the adjacent landowner who planned to use it for a specific purpose—the agencies generally used the appraised value as the sale price. In several of these sales, the appraisal underestimated the parcel’s fair market value because it did not reflect the buyer’s current or planned use of the land. BLM sold several parcels directly to buyers for the development of various enterprises, including a landfill, a prison facility, and a sod and tree farm. 
The appraisals for some of these parcels did not consider the planned use in determining the parcel’s highest and best use (defined as the use that is physically possible, legally permissible, financially feasible, and maximally profitable) but instead determined the highest and best use to be something else. When the planned use of a parcel is reasonably probable, it should be considered in determining the property’s highest and best use, according to federal appraisal standards. In these appraisals, the buyer’s current use of the parcel or adjacent land diminished the value of the parcel for other uses and reduced the appraised value. In effect, the parcel was appraised as though it could be bought by someone other than the applicant—although BLM had already determined that no other potential buyers existed—and would be used for a purpose other than the current or planned use. For example: BLM had for years leased to a city in Arizona the rights to mine sand and gravel on an 80-acre parcel, next to the city’s landfill. After the city depleted the sand and gravel deposit—excavating about 70 acres to a depth of 35 to 45 feet—it applied to buy the parcel to expand its landfill. Despite this plan, the appraiser determined that the parcel’s highest and best use was for recreation (e.g., a skateboard park) or light industrial purposes (e.g., a construction yard) on the unexcavated acres. The appraiser determined that the 70-acre pit contributed no value for these purposes and appraised the remaining land at $7,500 per acre; the 80-acre parcel sold for its appraised value of $75,000. A city in New Mexico applied to buy a 120-acre parcel to develop as a sod and tree farm, which the city planned to irrigate with reclaimed water from its adjacent sewage treatment plant. 
Despite this plan, the appraiser determined that the parcel’s highest and best use was for cattle grazing, finding that a bad odor from the city’s sewage treatment plant diminished its value for any other use. The review appraiser noted the absence of fencing and demand for such a small piece of grazing land and also noted that the city’s planned use should have been addressed in the appraisal. The reviewer found that the appraisal was not entirely complete by federal appraisal standards and included additional data in his review; with the additional data, he determined that the appraisal could be followed to a reasonable conclusion of value, which was $50 per acre. The parcel sold for its appraised value of $6,000. Both BLM and the Forest Service sold several parcels directly to adjacent homeowners who had mistakenly built part of their residences on federal land. The appraisals for some of these parcels did not consider the actual size of the adjacent residential lot to which the parcel would be added but instead assumed a larger lot size. The Forest Service has a policy to make such an assumption in appraising land to be sold under the encroachment provision of the Small Tracts Act, assuming the average size of the parcels used as comparable sales. BLM has no such policy but sometimes made similar assumptions. This assumption tends to reduce the per-acre appraised value: if other factors are equal, larger parcels tend to have lower per-acre values than smaller parcels. For example: BLM sold 0.3 acres in Oregon to an adjacent homeowner who had mistakenly built a well and pumphouse on public land. In 1998, the appraiser assumed the parcel was part of the homeowner’s 8-acre lot and valued the land at $12,000 per acre (or $4,000 for the parcel). After the homeowner told BLM that he could not afford to pay this price, in 1999 BLM again appraised the parcel. 
This time, the appraiser assumed the parcel was part of a larger (40-acre) parcel and told us he did so to establish a lower appraised value. The second appraisal valued the land at $3,000 per acre (or $1,000 for the parcel)—75 percent less than the 1998 appraised value. The Forest Service sold 0.4 acres in New Mexico to an adjacent homeowner who had mistakenly built part of her residence on the forest. The appraiser assumed the parcel was part of a 130-acre parcel rather than the homeowner’s 66-acre lot. The per-acre sale prices of the comparable properties ranged from $3,100 per acre (for the smallest 20-acre parcel) to $1,050 per acre (for the largest 270-acre parcel). The appraiser valued the hypothetical 130-acre parcel within this range, at $2,000 per acre, and appraised the 0.4-acre parcel at $720. In three direct sales that were completed in calendar year 1999 under FLPMA, BLM accepted prices that were below appraised values. Two of these sales were made to resolve trespasses that posed difficult management situations, such as when a trespasser’s continued use of federal land was likely to become very costly for the agency to otherwise address. While these decisions may have been cost-effective in the long run, FLPMA and its implementing regulations direct BLM to receive at least fair market value—as estimated by appraised value—when it sells land. BLM representatives told us that the field offices probably “stretched” their authority in resolving these trespasses but needed some flexibility to address such situations. They further said that the agency could receive authority to sell a parcel at less than fair market value by obtaining special legislation that applies only to the specific case. For example: Several years ago an individual occupied mining claims and a millsite on public land in the California desert and had also moved onto adjacent public land without authority. 
BLM disputed the legitimacy of his occupancy but was unsuccessful in ending the trespass. To resolve the situation, BLM sold him 40 acres—the acres he had occupied plus land that extended out to the adjacent landowners’ surrounding properties— which had been appraised at $60,000. After he told BLM that he could only afford to pay $24,000—60 percent less than appraised value—BLM accepted his offer. Many years ago BLM transferred 35 acres to an Arizona county under R&PPA to use as a cemetery. BLM received complaints regarding the county’s operation of the cemetery and found that bodies had been buried on 3 adjacent acres of public land without authority and that the county was unwilling to take corrective action. To resolve the situation, BLM agreed to take back the parcel and to sell the county 56 acres: the original 35 acres, the 3 trespassed acres, and another 18 adjacent acres for expansion. The appraiser determined that the 38 acres (with bodies) contributed no value and assessed the 18 acres to be worth $17,000. The county told BLM that it was unwilling to pay $17,000 and offered to take the 38 acres that had no value. In response, BLM noted that all land has some value and instead charged the county a “minimum transaction value” of $2,000 for the 38 acres. When BLM and the Forest Service sell land, they generally seek fair market value, as estimated by an appraisal. When BLM sells land competitively, the agency has the opportunity to test the reliability of such estimates in the open market, capture additional buyers’ motivations if present in that market, and enhance federal revenues by receiving higher prices. As a result, BLM has received about $6 million above the appraised values during the past decade. 
In contrast, when BLM or the Forest Service sells land directly (noncompetitively), they must rely on appraised values to set sale prices; the agencies have no process to test the reliability of appraised values or to seek higher prices if those values are understated. Of 38 direct sales we examined, about half might have been more appropriately offered for competitive sale because there might have been potential buyers other than the applicants, such as adjacent landowners. Furthermore, the appraised value of parcels that are sold directly may underestimate their fair market value—for example, the land’s current or planned use may have diminished its value to other entities while increasing its value to the applicant—and a higher price may be warranted if the agencies are to receive fair market value in noncompetitive sales. In our view, federal revenues could be enhanced if both agencies used competitive sales more frequently and sought higher prices in their direct sales. BLM faces additional challenges in selling and transferring land. FLPMA requires that BLM receive at least fair market value when it sells land, but the agency has sold land for less than its appraised value—which estimates fair market value—in response to specific circumstances despite having no authority to do so. BLM transferred the most land under the authority of R&PPA—42,000 acres—but did not inspect most of these parcels to assure that deed restrictions were met, due to limited resources and higher priorities; the agency has not fully evaluated less costly means of inspecting these parcels. BLM transferred the least land under the Color-of-Title acts—4,000 acres—but did not consistently determine applicants’ eligibility or use appraisals for these parcels and has no guidance on quantifying applicants’ equities; as a result, these acts are inconsistently applied across the nation. 
We recommend that the Chief of the Forest Service, to enhance federal revenues, take the following actions when the agency sells land:
- change regulations implementing the Small Tracts Act to allow competitive sales of mineral survey fractions and road rights-of-way—either through public auctions or modified competitive bidding procedures—even if an adjacent landowner has applied to buy the parcel;
- require that these parcels be sold competitively unless field offices specifically demonstrate why they should be sold noncompetitively; and
- when selling land directly to applicants, require that appraisals consider the parcel’s current or planned use in determining its highest and best use, whether it is to be developed for an enterprise or added to an adjacent landowner’s property.
Similarly, we recommend that the Director of BLM, to enhance federal revenues, take the following actions when the agency sells land:
- require competitive sales unless field offices specifically demonstrate why a parcel should be sold noncompetitively; and
- when selling land directly to applicants, require that appraisals consider the parcel’s current or planned use in determining its highest and best use, whether it is to be developed for an enterprise or added to an adjacent landowner’s property.
Furthermore, we recommend the Director of BLM take the following additional actions:
- when the agency faces specific circumstances it believes warrant selling a parcel for less than its appraised value, obtain special legislation applying to the specific case that authorizes the agency to do so;
- assess the feasibility of less costly methods of monitoring parcels transferred under R&PPA, such as requiring entities that have acquired these parcels to periodically self-report on their compliance with deed restrictions; and
- develop policy and procedures for Color-of-Title applications, to provide consistency in proving applicants’ eligibility, estimating applicants’ equities, and using appraisals.
We provided copies of this report to the Departments of the Interior and Agriculture; however, neither department provided comments. We conducted our review from September 2000 through August 2001 in accordance with generally accepted government auditing standards. Details of our scope and methodology are discussed in appendix I. We are sending copies of this report to the Secretary of the Interior, the Director of the Bureau of Land Management, the Secretary of Agriculture, the Chief of the Forest Service, and interested congressional committees. We will also provide copies to others on request. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix II. To determine the key statutes under which the BLM and the Forest Service transferred or sold federal land during fiscal years 1991 through 2000, we obtained data from BLM’s automated public lands database of nationwide land statistics (referred to as LR2000), from BLM’s annual Public Lands Statistics, and from the Forest Service’s centralized database of nationwide land statistics. As agreed with the requester’s office, we excluded transfers authorized to various states under the terms of their statehood, transfers and sales authorized to resolve Native and Indian land claims, and sales of mineral rights (if they were sold separately from land rights). Based on the preponderance of the remaining reported transfers and sales, we then identified the key statutes; we discussed our selection of these statutes with officials in the agencies’ Washington Offices. To describe the transactions made under these key statutes, we obtained and analyzed data on the acres and dollar values of land transferred and sold annually during fiscal years 1991 through 2000. 
To identify the requirements for transferring and selling land under these key statutes, we reviewed the laws and the associated regulations, policies, and procedures that were established by the agencies. To determine whether the agencies were meeting these requirements, we examined all 186 transactions—107 transfers and 79 sales—that the agencies reported completing in calendar year 1999 under these key statutes, as summarized in table 2. For each of these transactions, we reviewed the complete case file or obtained key documents from the case file, and discussed the documents and our analyses with agency representatives in the cognizant field offices in the following locations: BLM’s Arizona State Office (Phoenix, Arizona), California State Office (Sacramento, California), Colorado State Office (Lakewood, Colorado), Eastern States Office (Springfield, Virginia), Idaho State Office (Boise, Idaho), Montana State Office (Billings, Montana), Nevada State Office (Reno, Nevada), New Mexico State Office (Santa Fe, New Mexico), Oregon State Office (Portland, Oregon), Utah State Office (Salt Lake City, Utah), Wyoming State Office (Cheyenne, Wyoming), and various field offices that are under the administrative jurisdiction of these state offices; and the Forest Service’s Region 1 Office (Missoula, Montana), Region 2 Office (Lakewood, Colorado), Region 3 Office (Albuquerque, New Mexico), Region 5 Office (Vallejo, California), Region 6 Office (Portland, Oregon), Region 8 Office (Atlanta, Georgia), Region 9 Office (Milwaukee, Wisconsin), and various forest offices that are under the administrative jurisdiction of these regional offices. We also reviewed the extent to which BLM complied with its policy to inspect parcels that had been transferred under the Recreation and Public Purposes Act every 5 years.
Using BLM’s public lands database, we determined whether, as of January 2000, inspections for those parcels that had been transferred under this authority during fiscal years 1991 through 2000 had been (1) scheduled and completed as scheduled, (2) scheduled but not completed as scheduled, or (3) not scheduled. To assess whether the agencies received the appraised value when they sold land, we reviewed federal appraisal standards and examined all 61 appraisals that were completed for parcels that were sold in calendar year 1999: 44 for land sold by BLM under FLPMA, and 17 for land sold by the Forest Service. To analyze the difference in prices between competitive and noncompetitive sales completed by BLM during fiscal years 1991 through 2000, we identified the parcels that were sold under each procedure, and identified the appraised value and the sale price, using available information from BLM’s public lands database, the agency’s website, and the Federal Register. We also contracted with Mr. Peter D. Bowes—an independent and certified appraiser in Denver, Colorado, who has over 40 years of experience in appraising properties and has worked with various government entities—to provide his professional advice on our analysis. He did not reappraise the properties discussed in this report nor review the appraisals. We conducted our work from September 2000 through August 2001, in accordance with generally accepted government auditing standards. In addition to those named above, Jay Cherlow, Christine Colburn, Jennifer Duncan, Cynthia Rasmussen, and Amy Webbink made key contributions to this report. 
Since 1781, the federal government has transferred or sold about 1.1 billion acres to nonfederal entities--such as state and local governments, businesses, nonprofit groups, and individual citizens--under various initiatives that promoted general economic development, developed transportation systems, supported public schools, and encouraged settlement of the western frontier. Today, the Bureau of Land Management (BLM) and the Forest Service administer about 70 percent of the 657 million acres that remain in federal ownership. These agencies continue to transfer and sell federal land, but under more limited circumstances. For example, a community might want to develop a public park, a nonprofit group might want land for a shooting range, or a homeowner might want to obtain clear property title after mistakenly building part of his house on federal land. During fiscal years 1991 through 2000, BLM alone was authorized by law to transfer land. BLM transferred about 79,000 acres during this period under four key statutes and received about $3 million. BLM and the Forest Service are both authorized by law to sell land and are directed by law to receive at least fair market value when they do so; BLM has broader authority and has sold much more land, about 56,000 acres, and received about $74 million. In contrast, the Forest Service sold only about 2,000 acres, all noncompetitively, and received about $5 million. When BLM and the Forest Service sold land, they both generally received at least the appraised value. BLM generally offered land for competitive sale when agency personnel believed there was more than one potential buyer for the parcel; in these sales, the agency used appraised values as starting bids--that is, as minimum sale prices--and received prices that were, on average, about 18 percent higher than appraised values. When BLM or the Forest Service sold land noncompetitively, they generally set the sale price at the appraised value.
Some of the parcels the agencies sold noncompetitively might have been more appropriately offered for competitive sale, and in some of the noncompetitive sales, the appraised value underestimated the fair market value because it was not based on the land's current or planned use.
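As a rough illustration of the sale-price comparisons made throughout this report, the premium (or shortfall) relative to appraised value is a simple percentage calculation. The function and dollar figures below are hypothetical examples for clarity, not actual BLM or Forest Service sale data.

```python
# Premium received over (or shortfall under) appraised value:
# (sale price - appraised value) / appraised value.
# All figures below are illustrative, not actual sale records.

def premium_over_appraisal(sale_price, appraised_value):
    """Return the fractional premium of a sale price over the appraised value.

    A positive result means the agency received more than the appraisal;
    a negative result means it received less.
    """
    return (sale_price - appraised_value) / appraised_value

# Example: a parcel appraised at $100,000 that sells at auction for $118,000
p = premium_over_appraisal(118_000, 100_000)
print(f"{p:.0%}")  # prints "18%", matching the average competitive-sale premium cited above

# Example: a hypothetical direct sale at $24,000 against a $60,000 appraisal
print(f"{premium_over_appraisal(24_000, 60_000):.0%}")  # prints "-60%"
```

A competitive sale tests the appraisal in the market, so the premium can only be observed after the auction; in direct sales the premium is fixed at zero by setting the price at the appraised value.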
This section describes (1) electricity generation and consumption in the United States, (2) federal and state actions that have influenced electricity generation and consumption, (3) electricity reliability, and (4) federal and state regulation. The electricity system includes four distinct functions: generation, transmission, distribution, and system operations (see fig. 1). Electricity may be generated at power plants by burning fossil fuels; through nuclear fission; or by harnessing renewable sources such as wind, solar, geothermal energy, or hydropower. Once electricity is generated, it is sent through the electricity grid, which consists of high-voltage, high-capacity transmission systems, to areas where it is transformed to a lower voltage and sent through the local distribution system for use by industrial, commercial, residential, and other consumers. Throughout this process, system operations are managed by a system operator, such as a local utility, that must constantly balance the generation and consumption of electricity. To do so, system operators monitor electricity consumption from a centralized location using computerized systems and send minute-by-minute signals to power plants to adjust their output to match changes in consumption. Various federal and state actions have influenced electricity generation. Regarding federal actions, in April 2015, we found that from fiscal year 2004 through 2013, federal programs aided the development of new electricity-generating capacity through various means, including outlays, loan programs, and tax expenditures. In more recent years, federal actions have been targeted toward renewable sources such as wind and solar, although there has also been federal support for coal, nuclear, and natural gas-fueled generation.
For example, two tax credits—the Production Tax Credit (PTC) and the Investment Tax Credit (ITC)—and a related program that provided payments in lieu of these tax credits supported wind and solar electricity by lowering the costs associated with electricity generation and providing an incentive to those firms engaged in the construction and operation of wind and solar projects. The Department of the Treasury estimated that these two tax credits resulted in almost $12 billion in revenue losses for the federal government from fiscal year 2004 through 2013. In addition, the related payment program provided almost $17 billion in outlays from fiscal year 2004 through 2013. EIA recently estimated that wind, solar, and other renewables accounted for about 72 percent of all electricity-related direct federal financial interventions and subsidies in fiscal year 2013. Regarding state actions, our April 2015 report found that key state supports aided the development of electricity generation projects—particularly renewable ones—in most states from fiscal year 2004 through 2013. For example, we found that as of September 2014, 38 states and the District of Columbia had established renewable portfolio standards or goals. Such policies mandate or set goals that retail service providers obtain a minimum portion of the electricity they sell from renewable sources, creating additional demand for renewables. Retail service providers meet these requirements in various ways, such as by building renewable generating capacity or purchasing renewable generation from other producers through long-term contracts known as power purchase agreements. Federal and state activities have also encouraged energy efficiency, which can reduce the consumption of electricity.
For example, Treasury estimated that energy-efficiency-related federal tax expenditures, such as for household energy efficiency improvements and the purchase of energy efficient equipment, amounted to over $15 billion in forgone revenue for the federal government from fiscal year 2000 through 2013. Specifically, Treasury estimated that forgone revenue associated with the credit for energy efficiency improvements to existing homes amounted to $10.36 billion, the credit for residential energy efficiency property amounted to $3.08 billion, and the exclusion of utility conservation subsidies amounted to $2.04 billion from fiscal year 2000 through 2013. State governments have also played an important role in encouraging energy efficiency. According to the American Council for an Energy-Efficient Economy, as of April 2014, 25 states had fully funded policies in place that establish specific energy savings targets that utilities or nonutility program administrators must meet through customer energy efficiency programs. In March 2014, we found that the federal government has also made efforts to facilitate activities that encourage customers to reduce demand when the cost to generate electricity is high, known as demand-response activities. These efforts have included actions to fund the installation of advanced electricity meters that facilitate these demand-response activities, as well as regulatory efforts to encourage demand-response activities. Because electricity cannot be easily and inexpensively stored, electricity generated must be matched with demand, which varies significantly depending on the time of day and year. To maintain a reliable supply of electricity, system operators take steps to ensure that power plants will be available to generate electricity when needed.
In doing so, system operators typically ensure available capacity exceeds estimated demand so that any unexpected increases in demand or power plant or transmission outages can be accommodated without consumers losing access to electricity. Maintaining a reliable supply of electricity is a complex process requiring the system operator to coordinate three broad types of services, as follows:
- Capacity: Operators procure generating capacity—long-term commitments to have available specific amounts of electricity-generating capacity—to ensure that there will be sufficient electricity to reliably meet expected future electricity needs. Procuring capacity may involve operators of power plants committing that existing or new power plants will be available to generate electricity in the future, if needed.
- Energy: Operators schedule which power plants will generate electricity throughout the day—referred to as energy scheduling—to maintain the balance of electricity generation and consumption.
- Ancillary services: Operators procure several ancillary services to maintain a reliable electricity supply. Ancillary services generally involve resources being available on short notice to increase or decrease their generation or consumption.
These and other services are needed to ensure supply and demand remain in balance so that electricity can be delivered within technical standards—for example, at the right voltage and frequency—to keep the grid stable and to protect equipment that needs to operate at specific voltage and frequency levels. Responsibility for regulating electricity prices is divided between the states and the federal government. Most electricity consumers are served by retail markets that are regulated by the states, generally through state public utility commissions or equivalent organizations.
As the primary regulator of retail markets, state commissions approve many aspects of utility operations, such as the siting and construction of new power plants, as well as the prices consumers pay and how those prices are set. Prior to being sold to retail consumers, electricity may be bought, sold, and traded in wholesale electricity markets by a variety of market participants, including companies that own power plants, as well as utilities and other retail service providers that sell electricity directly to retail consumers. Wholesale electricity markets are overseen by the Federal Energy Regulatory Commission (FERC). During the last 2 decades, some states and the federal government have taken steps to restructure electricity markets with the goal of increasing competition. The electricity industry has historically been characterized by utilities that were integrated and provided the four functions of electricity service—generation, transmission, distribution, and system operations— to all retail consumers in a specified area. In much of the Western, Central, and Southeastern United States, retail electricity delivery continues to operate under this regulatory approach, and these regions are referred to as traditionally regulated regions. In parts of the country where states have taken steps to restructure retail electricity markets, new entities called retail service providers compete with utilities to provide electricity to retail consumers by offering electricity plans with differing prices, terms, and incentives. Beginning in the late 1990s, FERC took a series of steps to restructure wholesale electricity markets, and wholesale electricity prices are now largely determined by the interaction of supply and demand rather than regulation. 
In addition, FERC encouraged the voluntary creation of new entities called Regional Transmission Organizations (RTO) to manage regional networks of electric transmission lines as system operators—functions that had traditionally been carried out by local utilities. In addition to its role in regulating aspects of the electricity market, FERC is also responsible for approving and enforcing standards to ensure the reliability of the bulk power system—generally the generation and transmission systems. FERC designated the North American Electric Reliability Corporation (NERC) to develop and enforce these reliability standards, subject to FERC review. These standards outline general requirements for planning and operating the bulk power system to ensure reliability. For example, one reliability standard requires that system planners plan and develop their systems to meet the demand for electricity even if equipment on the bulk power system, such as a single generating unit or transformer, is damaged or otherwise unable to operate. According to our analysis of SNL data, the mix of energy sources used to generate electricity has generally shifted to include more natural gas, wind, and solar, but less coal and nuclear, from 2001 through 2013, though the extent of these changes varied by region. Growth in electricity consumption has generally slowed, with key differences among different types of consumers and regions. Natural gas, wind, and solar sources provided larger portions of the nation’s electricity mix from 2001 through 2013 in terms of both generating capacity and actual generation, while coal and nuclear sources provided smaller portions, according to our analysis of SNL data (see fig. 2). At the time of our analysis, 2013 was the most recent year with complete data for both generating capacity and generation. The growth or decline in specific energy sources varied over this time period and across U.S. regions. (See app. 
III for additional information on electricity-generating capacity and actual generation by region.) SNL data on power plants under construction and planned for retirement suggest that these recent trends are likely to continue. Generating capacity and actual generation from natural-gas-fueled power plants increased across the nation from 2001 through 2013, with different regions seeing varying levels of growth, according to our analysis of SNL data. Natural-gas-fueled generating capacity increased by about 181,000 MW during this period and accounted for 72 percent of the new generating capacity added from all sources. This increase in gas-fueled capacity resulted from the construction of about 270,000 MW during this period, offset by a smaller amount of retirements. Regarding actual generation, electricity generated from natural-gas-fueled power plants generally increased throughout this period, with a pronounced jump from 2011 through 2012, when generation increased by about 21 percent (see fig. 3). The average utilization of natural-gas-fueled capacity—a measure of the intensity with which capacity was operated—varied over this period, declining from about 30 percent in 2001 to a low of about 20 percent in 2003 before generally increasing to about 27 percent in 2013. Increases in gas-fueled capacity and generation led to natural gas accounting for a larger share of the nation’s electricity mix, increasing from 17 percent of generation in 2001 to 26 percent in 2013. All but one region of the country experienced increases in the amount of electricity generated from natural gas over this period. Specifically, electricity generated from natural gas declined in Alaska and increased in the rest of the United States, ranging from an increase of 5 percent in Texas to almost 200 percent in some regions in the East. In some regions, natural gas became an increasingly significant energy source in the generation mix.
For example, in New England, natural gas increased from 31 percent of the region’s electricity generation in 2001 to 42 percent in 2013. According to EIA, lower natural gas prices, regional environmental initiatives, and other factors have contributed to increases in gas-fueled electricity generation. As the use of natural gas to generate electricity has increased since 2001, the mix of technologies used in gas-fueled power plants has also changed. Specifically, combined-cycle plants, which use a combustion turbine in conjunction with a steam turbine to generate electricity, have become an increasingly common technology for generating electricity—growing from 7 percent of total electricity generation in 2001 to 23 percent in 2013, according to SNL data (increasing from 42 percent of electricity generated from gas in 2001 to 86 percent in 2013). Although more expensive to build initially, such plants are more fuel-efficient than simpler combustion turbine plant designs. This efficiency can make it economically feasible to generate electricity with natural gas for sustained periods. As a result, these plants can be economically operated like traditional baseload generation, such as coal and nuclear plants, which often run continuously for long periods of time. Trends in the utilization of combined-cycle and other gas-fueled power plants differed over this period. Utilization decreased for all gas-fueled capacity in the early 2000s, but while it has increased since 2003 for combined-cycle capacity (from 34 percent in 2003 to almost 44 percent in 2013), utilization has declined somewhat for other gas-fueled technologies (from 12 percent in 2003 to 8 percent in 2013). Generating capacity and actual generation from wind and, to a lesser extent, solar power plants increased from 2001 through 2013, with most of the increase occurring since 2007. (See fig. 4.)
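The average utilization figures cited throughout this section correspond to what the industry calls a capacity factor: actual generation divided by the generation a plant would produce if it ran at full capacity for every hour of the year. A minimal sketch of that calculation, using a hypothetical plant rather than the SNL data underlying the report:

```python
# Capacity factor (average utilization): actual annual generation (MWh)
# divided by the maximum possible generation if the capacity (MW) ran
# every hour of the year. The plant figures below are illustrative.

HOURS_PER_YEAR = 8760

def capacity_factor(generation_mwh, capacity_mw, hours=HOURS_PER_YEAR):
    """Return utilization as a fraction of maximum possible output."""
    return generation_mwh / (capacity_mw * hours)

# Example: a hypothetical 500 MW combined-cycle plant that generated
# 1,900,000 MWh in a year.
cf = capacity_factor(1_900_000, 500)
print(f"{cf:.1%}")  # prints "43.4%", in the range cited for combined-cycle capacity
```

Because wind and solar plants generate only when their resource is available, their capacity factors sit well below those of baseload nuclear plants even when the equipment is fully operational.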
We have previously found that various federal and state actions have contributed to increases in wind and solar power plant capacity, including financial supports and state renewable portfolio standards. These increases led to wind and, to a lesser extent, solar accounting for a larger share of the nation’s energy mix, increasing from just over 0 percent of electricity generation in 2001 to 4 percent in 2013. Regarding wind, generating capacity increased about sixteenfold over this period, with 57,000 MW of capacity added from 2001 through 2013 and wind’s share of total generating capacity increasing from just over 0 percent in 2001 to 5.4 percent in 2013. However, these plants operate less intensively than some other sources because wind power plants only generate electricity when the wind is blowing. As such, wind’s share of the nation’s actual generation increased from just over 0 percent in 2001 to about 4 percent in 2013. Generation from wind increased by over 160 million MWh from 2001 through 2013, the second largest increase in actual generation of all energy sources after natural gas. Most of this increase, 136 million MWh (or 84 percent of the total increase), occurred since 2007. The average utilization of wind power plants fluctuated over this period between 26 and 33 percent. Electricity generated from wind is concentrated in a few states; as shown in table 1, 74 percent of total electricity generated from wind came from 10 states in 2013. In addition, wind can contribute a substantial portion of generation in some areas. For example, in the Upper Midwest region of the country, including states such as Minnesota and Iowa, about 14 percent of the region’s electricity came from wind power plants. In addition, representatives from one utility told us they have had hours when 60 percent of the electricity produced on their system came from wind sources, and their system has experienced longer periods with over 50 percent wind generation.
By contrast, other regions of the country, such as the southeastern United States, produced less than 1 percent of their total electricity from wind in 2013. Regarding solar, generating capacity increased by about 7,000 MW, or about eighteenfold, from 2001 through 2013 at larger power plants with capacities of at least 1 MW. This trend accelerated in 2014 with the addition of over 3,000 MW of solar generating capacity, and total solar generating capacity reached about 10,000 MW. Regarding actual generation, electricity generated at large solar power plants increased about sevenfold—by about 5 million MWh—from 2001 through 2013. The average utilization of solar power plants fluctuated over this period between 16 percent and 25 percent. Despite the growth in solar capacity and generation, large solar power plant generation contributed less than 0.2 percent of total electricity generation nationwide in 2013. More so than wind generation, generation from solar power plants was concentrated in a small number of states. For example, California and Arizona accounted for over half of electricity generation from large solar power plants in 2013. In addition, since 2010, EIA has collected data on solar and other generating capacity that is “net metered”—when consumers can use electricity they generate that is in excess of their consumption at some times to offset consumption at other times. Though these data have limitations, they suggest that distributed net-metered solar capacity has been a large portion of total solar capacity. Generating capacity and actual generation from coal-fueled power plants declined from 2001 through 2013 as plants retired and, in some cases, saw changes in their usage patterns, according to our analysis of SNL data.
Coal-fueled electricity-generating capacity was stable for most of this period but declined over the last couple of years as aging plants retired and little new capacity was added. Specifically, from 2001 through 2013, about 29,500 MW of coal-fueled generating capacity retired, with about 75 percent of those retirements occurring from 2009 through 2013. In our October 2012 and August 2014 reports, we found that a number of factors have contributed to companies retiring coal-fueled power plants, including comparatively low natural-gas prices, the potential need to invest in new equipment to comply with environmental regulations, increasing prices for coal, and low expected growth in demand for electricity. We found that the facilities that power companies have retired or plan to retire are generally older, smaller, and more polluting, and some had not been used extensively. Actual generation from coal declined—in particular since 2008—as natural gas prices fell and made coal-fueled power plants comparatively less competitive (see fig. 5). Generation from coal declined in most regions of the country. Several regions, such as New England, experienced large decreases as they shifted away from coal. As coal-fueled generation has declined, coal-fueled power plants have, in general, been utilized less intensively. The average utilization of coal-fueled capacity fluctuated around 70 percent from 2001 through 2008 and then began a general decline to about 59 percent in 2013. For example, representatives from the system operator ISO New England told us that their region no longer regularly uses its coal-fueled power plants to generate baseload electricity. Instead, these plants are more often used to generate electricity during peak periods or when other resources are not available. Retirements of some coal-fueled power plants and the decrease in usage among others led to coal accounting for a smaller share of the nation’s generating capacity and generation.
ISO New England serves Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont. Generating capacity and actual generation from nuclear power plants both increased from 2001 through 2013, but the share of nuclear in the national electricity mix declined because other sources increased by a larger amount, according to our analysis of SNL data. No new nuclear power plants were built during this period, and four nuclear power plants retired in the last 2 years, accounting for about 4,200 MW of capacity. However, nuclear generating capacity increased by 5 percent from 2001 through 2013 because of capacity increases at some existing plants as owners upgraded equipment or undertook other changes. Regarding actual generation, electricity generated at nuclear power plants increased by 3 percent. The average utilization of nuclear power plants fluctuated around 90 percent throughout this period. Since nuclear plants tend to be larger capacity plants that run continuously for long periods of time, the retirement of a single plant can have significant effects on a regional power system. For example, representatives at ISO New England said that the Vermont Yankee nuclear power plant, which retired in December 2014, had generated about 5 percent of total electricity generation in their region in 2014. Since nuclear generating capacity and generation did not increase as much as gas, wind, and solar, nuclear accounted for a slightly smaller share of the national electricity mix, decreasing from 21 percent of generation in 2001 to 20 percent in 2013. The contributions of other energy sources to the nation’s energy mix have also changed according to our analysis of SNL data, as follows: Hydropower: Generating capacity and actual generation from hydropower plants increased from 2001 through 2013, by 3,600 MW and 68 million MWh respectively. 
Generation from hydropower plants varies from year to year based on a region’s weather, particularly the amount of rain or snow, according to EIA. The western region generates more electricity from hydropower than any other region and accounted for 57 percent (about 39 million MWh) of the increase in generation during this period. The average utilization of hydropower capacity fluctuated between 28 percent and 38 percent throughout this period. While hydropower generating capacity increased in absolute terms through new construction and increases in capacity at existing hydropower plants, its share of capacity declined because hydropower generating capacity did not increase as much as other sources, such as natural gas and wind. Other sources: Generating capacity and actual generation from other sources—including oil, biomass, and geothermal together—declined overall from 2001 through 2013. This decline was primarily driven by declines in oil-fueled power plants, where generation declined by over 80 percent and average utilization declined over the period. Two regions, New England and Florida, accounted for a large portion of the decline in oil-fueled power plant generation. Although oil was a relatively small portion of overall generation in the beginning of the period, its share of generation declined further as oil prices rose in the mid-2000s. Generating capacity and actual generation from biomass, geothermal, and other sources increased overall from 2001 through 2013. These changes had little effect on the overall national electricity generation mix, as these other sources represent a small and stable portion of generation—about 2 percent of the national total in both 2001 and 2013. Our analysis of SNL data on generating capacity currently under construction and companies’ plans to retire generating capacity suggests that these general changes in the electricity generation mix are likely to continue. 
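The comparison of capacity under construction with capacity planned for retirement reduces to a simple net-change calculation per source. A sketch using the nuclear figures reported in this section and an illustrative coal value (the coal retirement number is hypothetical, since no 2015 through 2025 figure is given for coal):

```python
# Net change = capacity under construction minus capacity planned for retirement.
# Nuclear figures and zero coal construction are from the report; the coal
# retirement value is a hypothetical placeholder for illustration only.
under_construction = {"nuclear": 6_000, "coal": 0}        # MW
planned_retirement = {"nuclear": 15_000, "coal": 12_000}  # MW; coal value illustrative

for source in under_construction:
    net = under_construction[source] - planned_retirement[source]
    print(f"{source}: net change {net:+,} MW")
```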
Figure 6 shows the amount of generating capacity under construction, the amount planned for retirement from 2015 through 2025, and the net change (capacity under construction minus planned for retirement), and highlights that natural gas, wind, and solar capacity may continue to increase. There is no coal capacity under construction, and while about 6,000 MW of nuclear capacity is under construction, more nuclear capacity (about 15,000 MW) is planned for retirement than is under construction. These data do not include generating capacity in earlier construction-planning stages or capacity whose retirement has not been formally announced. Continuing a long-term trend, growth in electricity consumption slowed from 2001 through 2014. According to EIA data on annual national electricity retail sales—a proxy for end-use consumption—the rate of growth of electricity consumption has slowed in each decade since the 1950s, from growing almost 9 percent per year in the 1950s, to over 2 percent per year in the 1980s and 1990s. This decreasing growth trend continued in the 2000s, with electricity retail sales growing by over 1 percent per year from 2001 through 2007, and fluctuating but remaining largely flat from that time through 2014. These overall trends mask differences in consumption patterns for different types of consumers, in different regions, and during peak periods of consumption. Regarding consumers, industrial electricity consumption has decreased since 2001, while commercial and residential consumption have increased. Specifically, industrial consumption decreased by 4 percent over the period from 2001 through 2014, and the sector's share of total electricity consumption declined from 29 percent to 26 percent. Meanwhile, residential electricity consumption increased 17 percent, and commercial consumption increased 25 percent over this period. Regarding regional differences, consumption patterns have varied across the country.
For example, consumption declined by almost 5 percent in the Northeast (Mid-Atlantic and New England states) since the recession of 2007 and through 2014, while it increased by over 9 percent in the West South Central states of Texas, Louisiana, Oklahoma, and Arkansas over that same period. (See app. IV for additional information on consumption by consumer type and region.) In contrast to the slowdown in the growth of overall electricity consumption, peak consumption has, in some cases, increased. Peak consumption refers to the level of electricity consumed when the overall system usage is at its highest, such as during hot days when air conditioning usage is high. Changes in peak consumption have, in some instances, differed from changes in total consumption over the course of a year. For example, in New England, while overall consumption has declined, peak consumption has risen, according to EIA. Distributed generation and electricity consumption data: Growth in distributed generation such as rooftop solar may have also contributed to changes shown in EIA's data on retail electricity sales. Households and commercial facilities that generate some of their own electricity displace some electricity sales. Therefore, actual electricity consumption may be higher than suggested by retail electricity sales data. According to EIA, this effect is difficult to measure because data on electricity generated from distributed generation sources are not readily available. Changes in the economy: Changes in electricity consumption are often closely linked to the economy, according to EIA. For example, the economic recession from late 2007 through 2009 was associated with a large drop in electricity consumption in the industrial sector. Since many industrial operations operate more evenly throughout the year, declines in industrial operations could lead to reduced electricity consumption throughout the year.
Efficiency improvements: Overall improvements in the efficiency of technologies powered by electricity—such as household appliances and others—have slowed the growth of electricity consumption, according to EIA. For example, according to EIA, a new refrigerator purchased today uses less than a third as much electricity as one purchased in the late 1970s, despite the larger size of today's refrigerators. Changes in the uses of electricity: Consumer uses of electricity have changed over recent decades, affecting the nature of electricity consumption. For example, the growing use of computers and home entertainment devices has increased the use of electricity. In addition, air conditioning has become more widely used in U.S. households. As a result, a heat wave—often associated with peak levels of electricity consumption—may lead to more electricity consumption during peak periods than in the past. Demand-response activities: Another factor that may have affected consumption trends, particularly peak consumption, is the increasing use of demand-response activities—steps taken to encourage consumers to reduce consumption during periods of high demand when the costs to generate electricity are high. For example, system operators may call on industrial consumers to reduce their electricity usage during periods of high demand in exchange for a payment or other financial incentive. In March 2014, we cited FERC data suggesting that the extent of demand-response activities had increased overall—more than doubling from 2005 to reach about 8.5 percent of potential reduction in peak consumption in 2011. According to literature we reviewed and stakeholders we interviewed, changes in electricity generation and consumption have required system operators to take additional actions to maintain reliability.
Changes in generation and consumption, together with additional actions system operators have taken to maintain reliability, have affected consumer electricity prices to varying extents, though the net effect on prices is unclear. According to several stakeholders we interviewed and literature we reviewed, changes in generation and consumption have led system operators to take additional actions to reliably provide electricity to consumers, as follows: Increased reliance on natural gas: The increased reliance on natural gas to generate electricity in some regions of the country has sometimes required system operators to take additional actions to maintain reliability. Although all fuel-based electricity generation can face fuel supply challenges, natural-gas-fueled power plants face different challenges than sources such as coal, oil, and nuclear. For example, natural gas is not easily stored on site, so the ability of a natural-gas-fueled power plant to generate electricity generally depends on the real-time delivery of natural gas through a network of pipelines. Some regions have recently experienced challenges in maintaining the delivery of natural gas supplies to power plants. For example, in January 2014, a severe cold weather event known as a "polar vortex" affected much of the central and eastern United States, causing significant outages at plants using various fuel sources and leading to higher than normal demand for natural gas for both electricity generation and home heating. According to FERC, there were no widespread electricity outages. However, challenges delivering fuel to natural-gas-fueled power plants posed significant concerns and resulted in outages at some natural-gas-fueled power plants.
System operators took various steps to limit the effect of this event, including relying on power plants that utilize other fuel sources that were more readily available at that time, such as coal and oil, issuing public appeals for conservation, utilizing demand-response resources, and implementing certain emergency procedures. Going forward, several stakeholders raised concerns about the sufficiency of natural gas pipeline capacity in some regions to meet potential greater future needs. However, FERC has reported that actions taken since the 2013–2014 winter—including improved communications between the electricity and natural gas industries and additional cold-weather preparation—led to better operational performance during the 2014–2015 winter, which also presented extremely challenging cold-weather conditions. In addition, a recent Department of Energy (DOE) study suggests that the future needs for interstate natural gas pipelines may be modest relative to the historical level of pipeline capacity additions. Effects of distributed generation on system operations to maintain reliability: The addition of distributed generation such as rooftop solar can present unique challenges that system operators must manage to maintain reliability. Several stakeholders told us that because distributed generation occurs behind a consumer's meter, such as at an individual residence or business, changes in generation are not visible to or controllable by the system operator without the installation of specialized technology. Regarding the lack of visibility, increases in distributed generation would be seen by the system operator as decreases in demand, since the electricity generated is used on-site and displaces electricity that would have been provided through the grid. Because system operators only see the net effect of these changes, it is more difficult for them to understand and predict demand.
Regarding lack of control, if distributed generation results in more electricity than customers can use on site, electricity flows can exceed equipment technical specifications, which could require equipment upgrades. Additionally, if there is more distributed generation than can be used by all customers, the imbalance of supply and demand could put the stability of the grid at risk. Accommodating increased distributed generation may therefore require system operators to, among other things, use models to predict distributed generation patterns or install advanced controls to make distributed generation visible to and controllable by the utility in order to maintain electric reliability. Power plant retirements: The retirement of existing power plants can also require system operators to take additional actions to maintain reliability. According to ISO New England's system plan, preserving the reliable operation of the system will become increasingly challenging as a result of expected retirements, and the region is in a precarious position for the next several winters as retirements continue and actions to address retirements—such as investments in the addition of new transmission and power plants—are years away from completion. Changes in electricity consumption: Changes in electricity consumption may require system operators to take additional actions to maintain reliability in both the long and short term. Over the long term, system operators need to ensure they have sufficient generating and transmission capacity to meet forecasted consumer electricity needs. This means that a system operator may need to continually add more transmission or generation capacity when peak demand is rising, even if average consumption is stable or declining. In the short term, system operators may need to take actions to increase or decrease the use of power plants and demand-response resources to address deviations between forecasted and actual consumption.
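The visibility problem posed by behind-the-meter generation can be sketched numerically: a system operator meters only net load, so distributed generation masks true consumption. All values below are hypothetical:

```python
# A system operator meters only net load: gross consumption minus
# behind-the-meter (distributed) generation. All values are hypothetical MW.
gross_demand = [950, 1000, 1100, 1050]   # actual consumer usage by hour
rooftop_solar = [0, 150, 300, 100]       # behind-the-meter output by hour

net_load = [d - s for d, s in zip(gross_demand, rooftop_solar)]
print(net_load)  # [950, 850, 800, 950]

# The operator sees the 1100 MW peak hour as an 800 MW trough: without
# separate data on distributed output, gross demand cannot be recovered.
```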
According to NERC, the electricity industry faces several challenges in forecasting electricity consumption, because conservation programs, distributed generation, and other changes in electricity consumption have increased the uncertainty of traditional forecasting methods used in long-term and short-term planning. The degree to which system operators have had to take additional actions to maintain reliability in response to changes in generation and consumption varies regionally based on the extent of these changes and other characteristics. For example, the extent to which system operators manage the grid in response to wind and solar growth will depend on factors such as the relative amount of generation from wind and solar power plants compared to traditional power plants, the size of a region's grid and how interconnected it is with neighboring grids, and other factors. In this regard, representatives of Midcontinent Independent System Operator said they have been able to reliably accommodate larger amounts of wind generation without major operational challenges or the need for significant additional ancillary services because the large size of their grid and its extensive connections to neighboring grids provide a broad base of power plants that system operators can use to balance variations in the output of wind power plants. According to literature we reviewed and representatives of the largest utility in Hawaii, while that state has been able to reliably integrate high levels of wind and solar, its isolated island grids mean it has no neighboring grids to turn to for balancing variations in the output of wind and solar electricity generation. Therefore, system operators there have fewer backup resources to turn to in the event of an unexpected change in wind and solar output than system operators managing larger, more integrated grids.
Changes in generation and consumption, together with associated actions system operators have taken to maintain reliability, have influenced consumer electricity prices in complex, interrelated, and sometimes contradictory ways, and the net effect of these changes on consumer prices is unclear, based on our review of literature and discussions with stakeholders. National average real consumer electricity prices were nearly 11 percent higher in 2014 than 2001, but prices over this period fell in 5 years, rose in 6 years, and were relatively stable in 2 years (see fig. 7). Prices and trends vary by consumer type and region. (App. V provides additional information on prices by consumer type and region.) Several stakeholders we interviewed and literature we reviewed highlighted several ways changes in generation and consumption, together with associated actions system operators have taken to maintain reliability, have influenced electricity prices. In many cases, these changes in generation and consumption affect prices at the wholesale level. The extent to which and how quickly such wholesale price changes flow through to retail consumer prices depends on a region's regulatory structure, individual retail contracts, consumer type, and other factors. A complete assessment of these factors and their net effect was outside the scope of this report. Nevertheless, literature and stakeholders highlighted the following ways changes have influenced prices: Wholesale electricity prices and natural gas prices have tended to move in tandem. Increases in gas-fueled generation have influenced electricity prices, and average annual prices of natural gas and wholesale electricity—electricity for resale—at key hubs have generally moved in tandem since 2002, the earliest year for which data are available. (Fig. 8 shows real annual average natural gas prices and electricity prices at a key wholesale gas hub and a key electricity hub.)
Specifically, natural gas prices more than doubled from 2002 to a peak in 2005, declined somewhat, and peaked again in 2008. According to EIA, these increases in prices were initially due to increasing demand for natural gas and hurricanes that disrupted Gulf Coast natural gas production, among other factors. Natural gas prices dropped in 2009 and have remained low since—the result of lower demand due to the economic recession and increasing natural gas production from development of shale gas resources, among other factors. These changing natural gas prices generally contributed first to higher and then lower wholesale electricity prices since 2002. Additionally, as discussed previously, pipeline constraints and competing demands have affected the delivery of natural gas in some regions. This situation has influenced natural gas and wholesale electricity prices during the winter months. For example, during January 2014, the month a polar vortex occurred, monthly natural gas and wholesale electricity prices in New England—a region heavily dependent on natural gas for generating electricity—reached their highest levels, according to available historical data. Prices moderated the following winter, with January 2015 wholesale electricity prices in New England around 60 percent lower than prices the previous January. More generally, FERC reported that wholesale electricity prices were more moderate in January and February 2015 compared to January and February 2014, helped by more stable and less volatile natural gas prices. Negative wholesale electricity prices: In some instances, wholesale electricity markets experience negative prices—that is, power plant owners paying consumers to take their electricity. For example, owners of certain power plants are sometimes unwilling or unable to reduce their generation even if there is little or no demand for the electricity they generate.
This can be the case for owners of wind plants, which may receive $23 per MWh of electricity generated from the federal Production Tax Credit, sometimes making it economically beneficial for these wind plants to pay consumers to take their electricity so they can continue to receive the credit. It can also be the case for power plants that are costly to shut down and restart, such as nuclear plants. Owners of these power plants may be willing to accept negative prices for a short time in order to avoid the cost of shutting the plant down. Our analysis of available hourly data at electricity hubs within U.S. regional transmission organizations indicates that negative prices occurred on average 0.7 percent of the time from 2005 through 2014. Specific trends in instances of negative prices varied by electricity hub, and the annual percent of negative prices varied across the hubs, ranging from 0 percent to 9.8 percent over that time period. In most cases, any payment consumers might receive as a result of these negative prices is more than offset by the cost of purchasing electricity in other hours. However, negative prices could affect the profitability of individual power plants in areas where negative prices occur. Increased generation from wind and solar sources has influenced prices in two main ways. First, because wind and solar power plants have low operating costs, their generation is generally expected to contribute to lower prices, though this effect varies regionally and over time based on, among other things, what alternative power plants exist in a region, the cost of those alternatives, and the amount of federal and state financial support for wind and solar development. For example, according to a DOE study published in 2014, the average cost of procuring electricity from wind power plants was lower than the cost of purchasing electricity through the wholesale markets in 2005—a time of high natural gas and wholesale electricity prices.
Conversely, in 2009, after the price of natural gas and wholesale electricity had dropped, the average cost of procuring electricity from wind power plants was higher than the cost of purchasing electricity through the wholesale markets. Some of the costs of wind and solar projects are paid for by taxpayers, which can offset the prices that some retail consumers may have otherwise had to pay for electricity generated from wind and solar. According to this DOE study, prices for procuring wind have been lower as a result of federal and, in some cases, state tax incentives. Second, as with the addition of other new power plants, the effect of new wind and solar sources on consumer prices also depends on the relative costs of any transmission and ancillary services system operators determine are needed to reliably integrate wind and solar sources into the grid. To the extent that additional ancillary services and transmission upgrades are needed, these costs may be passed on to consumers, contributing to higher electricity prices. For example, Texas recently completed a significant transmission project primarily designed to move electricity generated by wind power plants in remote parts of the state to population centers, such as Dallas and Austin. The project has cost close to $7 billion, which will be recovered from Texas electricity consumers through retail electricity prices. Traditional power plants also face grid integration costs. Taken together, the addition of wind and solar sources could have contributed to higher or lower consumer electricity prices at different times and in different regions. Financial viability of baseload power plants: Lower utilization and lower electricity prices have affected the financial viability of some power plants that have traditionally operated as baseload plants in restructured regions, according to several stakeholders we interviewed and literature we reviewed.
In some instances, baseload plants have been utilized less often in recent years as natural gas-fueled plants have become more cost competitive and the levels of wind and solar generation have increased. Additionally, lower annual wholesale electricity prices starting in 2009 have reduced the revenue power plants earn when they are operating. According to several stakeholders and literature, these factors have sometimes made it difficult for baseload power plants to recover their costs and earn a profit. These difficulties can be exacerbated if additional investment is needed to continue to operate the power plant, for example, the installation of pollution controls to comply with environmental regulations. Some baseload coal and nuclear plants have retired in recent years, with these factors reportedly influencing their owners' decisions. For example, Entergy retired its 604 MW Vermont Yankee nuclear plant in 2014, which company financial filings attributed to sustained low natural gas and wholesale electricity prices and high power plant costs, among other factors. According to several stakeholders and literature, if plant utilization and wholesale prices remain low, owners could choose to retire more unprofitable plants in the future, which could raise reliability and price concerns. The effect of retirements on prices may vary: The effect of power plant retirements on prices depends on the cost of the retiring power plant compared to the costs of existing power plants and power plants built to replace retiring power plants, among other things. If retiring plants are less expensive than existing and replacement power plants, their retirement would generally be expected to raise prices.
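The expected price effect of retiring a low-cost plant can be illustrated with a simplified merit-order model, in which plants are dispatched cheapest first and the last plant needed to meet demand sets the wholesale price. The plants, costs, and demand level below are hypothetical:

```python
# Simplified merit-order pricing: plants are dispatched cheapest-first and the
# marginal (last-dispatched) plant sets the price. All figures are hypothetical.
def clearing_price(plants, demand_mw):
    """Return the marginal cost of the last plant needed to meet demand."""
    supplied = 0
    for cost, capacity in sorted(plants):            # cheapest first
        supplied += capacity
        if supplied >= demand_mw:
            return cost
    raise ValueError("insufficient capacity")

fleet = [(20, 500), (35, 400), (60, 300)]            # ($/MWh, MW)
print(clearing_price(fleet, 700))                    # 35: mid-cost plant is marginal

# Retire the cheap 500 MW plant: pricier units must now serve the same demand.
print(clearing_price([(35, 400), (60, 300)], 700))   # 60
```

In this sketch, retiring the cheapest plant forces a higher-cost unit to become the marginal, price-setting plant, consistent with the general expectation that retiring low-cost capacity tends to raise prices.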
For example, according to EIA, after the initial shutdown of San Onofre Nuclear Generation Station in 2012—a large nuclear power plant in Southern California that produced low-cost electricity—prices in Southern California increased in 2012 and 2013, a change that EIA said is likely attributable in part to the need for more expensive generation in that region to fill the shortage from San Onofre's closure. Alternatively, if retiring power plants are replaced by power plants with similar or lower costs, prices could remain unchanged or decline in some hours. The relative cost of retiring and new power plants depends on the specific circumstances of the retiring and potential replacement plants, and may change over time with changing fuel prices and other market factors. Lower electricity consumption could reduce prices: Lower consumption of electricity—whether in all hours or, particularly, at peak times—can lower the price of electricity in wholesale markets, a decline that may translate into lower prices for retail consumers. Electricity consumption could decline in a given hour, for example, because of demand-response activities in which consumers reduce their electricity consumption in response to prices or other incentives. Electricity consumption could also decline over a longer time period—for example, because of a slowdown in economic growth or increased adoption of energy-efficient technologies. These declines in consumption could lower prices in some or all hours by reducing use of the highest cost plants. According to PJM Interconnection, demand-response activities served as an alternative to generating additional electricity during a heat wave in 2012, which lowered prices. We provided drafts of this product to DOE and FERC for review and comment. The agencies provided technical comments on early or final drafts, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Chairman of FERC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This report examines changes in electricity markets. Our objectives were to describe what is known about (1) how electricity generation and consumption have changed since 2001, and (2) the implications of these changes on efforts to maintain reliability, and on electricity prices. To conduct this work, we analyzed data on electricity generation and consumption; reviewed literature, including studies by federal agencies, electricity grid operators, and consultants; and summarized the results of interviews with a nonprobability sample of 21 stakeholders. To describe changes in electricity generation, we primarily used data from SNL Financial (SNL), current as of April 3, 2015. We generally present data on changes from 2001 through 2013 because 2013 is the most recent year for which complete data are available, though in some instances we present more recent data. We obtained SNL data on power plants with capacities of at least 1 megawatt that are connected to the grid and intend to sell electricity to retail customers or retail service providers. 
We used the SNL-identified primary energy source for the most recent year for each generating unit at a given power plant and used data for each generating unit for our calculations, where available. We used these generating unit level data to calculate total generating capacity and percentage of total generating capacity for each year from 2001 through 2014 (the most recent year with complete data). We calculated similar totals and percentages for actual generation for each year from 2001 through 2013 (the most recent year with complete data). However, some power plants provide generation data at the more detailed generating unit level, while others only provide data for the entire plant. Where available, we used the generating unit data for our actual generation calculations, and this unit data accounted for 71 percent of total generation in 2013. When generating unit data were not available, we identified the total actual generation for the year at a given plant and divided it among the units based on the share of total generating capacity for each generating unit. These plant level data accounted for the remaining 29 percent of actual generation in 2013. This approach implicitly assumes that all units at a given plant are used with the same intensity to generate electricity, an assumption that may not be appropriate on average. To examine changes in the intensity with which power plants are operated, or their utilization, we analyzed annual capacity factor data—the ratio of actual generation to the maximum potential to generate electricity. To describe changes in electricity consumption and electricity prices, we examined EIA data on retail sales of electricity to consumers. Retail electricity prices can be difficult to determine, according to EIA, as they depend on a customer's rate structure, which can differ from utility to utility. EIA does not directly collect data on retail electricity rates.
However, using data collected on revenues and electricity sold, EIA calculates average retail revenue per kilowatt hour as a proxy for retail electricity prices. To determine the frequency that negative prices occurred in markets of regional transmission organizations, we analyzed price data from hubs at each of the seven regional transmission organizations. The number of hubs and starting-time periods for the data varied with each regional transmission organization. We obtained hourly wholesale electricity prices from SNL for each regional transmission organization and calculated the number and percentage of occurrences of negative prices in each. We took several steps to assess the reliability of SNL and EIA data. We reviewed relevant documentation, interviewed EIA and SNL representatives, and compared some data elements to those available from other sources. We determined the data were sufficiently reliable for the purposes of this report. To identify the implications of changes, we reviewed literature and interviewed stakeholders. We identified literature by conducting a literature search and obtaining suggestions from the stakeholders we interviewed. Specifically, we searched sources including Proquest Environmental Science Professional, PolicyFile, Web of Science, and the websites of system operators and federal agencies from December 2014 through March 2015. Stakeholders included power plant owners, grid operators, a state regulator, non-governmental organizations, and federal agencies. We identified stakeholders through our research and analysis of changes in generation and consumption, using our past work, and by considering the suggestions of other stakeholders. We selected stakeholders to represent different perspectives and experiences regarding changes in the industry, and to maintain balance with respect to sources of electricity and stakeholders' roles in the market.
Because this was a nonprobability sample, the views of stakeholders we selected are not generalizable to all potential stakeholders, but they illustrate a range of views. Throughout the report, we use the indefinite quantifier "several" when three or more stakeholder and literature sources combined supported a particular idea or statement. Identifying and examining federal agency actions to address the challenges identified was beyond the scope of this review. We conducted this performance audit from November 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Figure 9 shows the territories of eight regional reliability entities that set and enforce reliability standards for the electricity industry and four sub-regions for the Western Electricity Coordinating Council. Table 2 provides generating capacity and annual generation by source in these regions as well as Alaska and Hawaii for select years. Regional Transmission Organizations (RTO) manage regional networks of electric transmission lines as system operators, including operating organized markets for buying and selling electricity and other needed services to operate the grid, such as ancillary services. Figure 10 shows the RTOs in the United States, and table 3 provides generating capacity and actual generation by source for each RTO and generating capacity and actual generation by source outside of RTO regions. Table 4 provides annual generating capacity and generation by regulatory status and source for select years. Table 5 provides generating capacity additions and retirements by source.
Table 6 below shows retail electricity sales—a proxy for electricity consumption—by consumer type, and table 7 shows retail electricity sales by region. Table 8 shows average retail revenue per kilowatt hour—a proxy for electricity prices—by consumer type, and table 9 shows average retail revenue per kilowatt hour by region. In addition to the individual named above, Jon Ludwigson (Assistant Director), Eric Charles, Philip Farah, Quindi Franco, Cindy Gilbert, Paige Gilbreath, Michael Kendix, Armetha Liles, Alison O'Neill, MaryLynn Sergent, Maria Stattel, and Barbara Timmerman made key contributions to this report. Electricity Generation Projects: Additional Data Could Improve Understanding of the Effectiveness of Tax Expenditures. GAO-15-302. Washington, D.C.: April 28, 2015. Energy Policy: Information on Federal and Other Factors Influencing U.S. Energy Production and Consumption from 2000 through 2013. GAO-14-836. Washington, D.C.: September 30, 2014. EPA Regulations and Electricity: Update on Agencies' Monitoring Efforts and Coal-Fueled Generating Unit Retirements. GAO-14-672. Washington, D.C.: August 15, 2014. Electricity Markets: Demand-Response Activities Have Increased, but FERC Could Improve Data Collection and Reporting Efforts. GAO-14-73. Washington, D.C.: March 27, 2014. Wind Energy: Additional Actions Could Help Ensure Effective Use of Federal Financial Support. GAO-13-136. Washington, D.C.: March 11, 2013. Electricity: Significant Changes Are Expected in Coal-Fueled Generation, but Coal is Likely to Remain a Key Fuel Source. GAO-13-72. Washington, D.C.: October 29, 2012. Solar Energy: Federal Initiatives Overlap but Take Measures to Avoid Duplication. GAO-12-843. Washington, D.C.: August 30, 2012. EPA Regulations and Electricity: Better Monitoring by Agencies Could Strengthen Efforts to Address Potential Challenges. GAO-12-635. Washington, D.C.: July 17, 2012. 
Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011. Electricity Restructuring: FERC Could Take Additional Steps to Analyze Regional Transmission Organizations' Benefits and Performance. GAO-08-987. Washington, D.C.: September 22, 2008.

Electricity in the United States has traditionally been generated largely from coal, natural gas, nuclear, and hydropower energy sources. More recently, various federal and state policies, tax incentives, and research and development efforts have supported the use of renewable energy sources such as wind, solar, and geothermal. In addition, consumption of electricity has been affected by federal efforts to improve energy efficiency, changes in the economy, and other factors. GAO was asked to provide information on changes in the electricity industry. This report examines what is known about (1) how electricity generation and consumption have changed since 2001 and (2) the implications of these changes on efforts to maintain reliability, and on electricity prices. GAO analyzed data on electricity generation, consumption, and prices and reviewed literature. GAO also interviewed 21 stakeholders, including government officials and industry representatives, selected to represent different perspectives and experiences regarding changes in the industry. GAO is not making recommendations in this report. The Department of Energy and Federal Energy Regulatory Commission reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate. The mix of energy sources for electricity generation has changed, and the growth in electricity consumption has slowed. As shown in the figure below, from 2001 through 2013, natural gas, wind, and solar became larger portions of the nation's electricity generation, and the share of coal has declined. These changes have varied by region. 
For example, the majority of wind and solar electricity generation is concentrated in a few states—in 2013, California and Arizona accounted for over half of electricity generated at solar power plants. Regarding consumption, national retail sales of electricity grew by over 1 percent per year from 2001 through 2007 and remained largely flat from that time through 2014. The literature GAO reviewed and stakeholders GAO interviewed identified the following implications of these changes: Maintaining Reliability : System operators, such as utility companies, have taken additional actions to reliably provide electricity to consumers. For example, some regions have experienced challenges in maintaining the delivery of natural gas supplies to power plants. In particular, severe cold weather in the central and eastern U.S. in 2014 led to higher than normal demand for gas for home heating and to generate electricity. Challenges delivering fuel to natural-gas-fueled power plants resulted in outages at some plants. System operators took various steps to limit the effect of this event, including relying on power plants that utilize other fuel sources that were more readily available at the time, such as coal and oil-fueled power plants, and implementing certain emergency procedures. Prices : Increased gas-fueled generation has influenced electricity prices, with wholesale electricity prices and gas prices generally fluctuating in tandem over the past decade. The effect of the increased use of wind and solar sources on consumer electricity prices depends on specific circumstances. Among other things, it depends on the relative cost of wind and solar compared with other sources, as well as the amount of federal and state financial support for wind and solar development that can offset some of the amount that consumers might otherwise pay. 
Taken together, the addition of wind and solar sources could have contributed to higher or lower consumer electricity prices at different times and in different regions.
The Department of Defense (DOD) spends more than $700 million each year to move military servicemembers’ and civilian employees’ household goods and personal effects. It pays servicemembers and civilian employees an additional $50 million or more each year in claims for shipment loss and damage. DOD shares liability with carriers for this loss and damage. Both government and carrier costs are significantly affected by the cost of claims. Servicemembers with loss or damage to their household goods and personal effects may file claims against the government for the amount of loss. The military services’ Judge Advocates General have primary responsibility for operating claims offices, adjudicating claims, and for authorizing payment to servicemembers. Payment to members is generally based on the full depreciated value of the damaged or lost items or the cost of repairs, whichever is less. The maximum amount allowed per shipment is $40,000. Claims offices then attempt recovery from the carrier to the extent of the carrier’s liability. From 1967 to 1987, carriers handling military household goods shipments were liable for loss and damage at the rate of $0.60 per pound per article for both domestic and overseas shipments. For example, if a carrier lost or damaged a 70-pound television worth $400, it was liable for the depreciated value or for repairs, whichever was less—up to a maximum of $42 (70 pounds times $0.60). In mid-1987, the Military Traffic Management Command (MTMC)—DOD’s traffic manager—increased carrier liability for DOD domestic household goods shipments. Under the new system, the carrier is liable for the full depreciated value of damaged or lost articles up to a maximum amount (valuation) per shipment based on the shipment weight multiplied by $1.25 per pound. For example, if a shipment weighs 4,000 pounds, the carrier is liable for a maximum of $5,000 (4,000 pounds times $1.25). 
If only one article in the shipment is lost, and its depreciated value is established at $5,000, the carrier is liable for this amount. In the case of the $400 television, the carrier would be liable for the full depreciated value ($400) or the cost of repairs, whichever is less, and for all other lost or damaged articles in the shipment until the total amount of loss and damage reached $5,000. Carrier liability under this system is generally increased. MTMC increased carrier liability in 1987 based on the results of an Air Force test—Project REVAL. Project REVAL reported that the average amount of household goods claim paid to the servicemember would be reduced by 34 percent on shipments moved at the $1.25 liability rate (purchased for a separate charge of $0.50 per $100 valuation). The Air Force concluded that (1) the increased liability gave the carriers incentive to reduce shipment damage and (2) the combination of reduced average claim amounts and added liability compensation would reduce claims costs for both the government and the carriers. Other major factors in MTMC’s decision to increase carrier liability included (1) the high frequency and cost of damage and loss to military servicemembers’ household goods, (2) the inadequacy of the former liability rate in covering a reasonable share of the liability for losses, (3) the need to provide increased carrier incentive for reducing claims, and (4) increases in government costs associated with military servicemembers’ household goods claims. When DOD increased carrier liability on DOD domestic shipments to the $1.25 rate in 1987, MTMC began paying carriers a separate charge (in addition to transportation charges) for the additional liability. MTMC set this separate charge at $0.64 per $100 of shipment valuation, plus 10 percent of temporary storage charges. The increased liability system adopted by MTMC was similar to that available for commercial shipments in 1987. 
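The two liability schemes described above can be sketched as follows, using the report's own example figures (a 70-pound, $400 television and a 4,000-pound shipment); the function names are ours, not DOD's:

```python
# Illustrative sketch of the two carrier-liability schemes described above.
# Dollar figures follow the report's examples; function names are ours.

def liability_per_article(weight_lb, depreciated_value, rate=0.60):
    """Old scheme (1967-1987): lesser of the article's depreciated value
    (or repair cost) and weight x $0.60 per pound, applied article by article."""
    return min(depreciated_value, weight_lb * rate)

def shipment_valuation(shipment_weight_lb, rate=1.25):
    """New scheme (1987): carrier liable for the full depreciated value of
    each lost or damaged article, up to shipment weight x $1.25 overall."""
    return shipment_weight_lb * rate

# 70-pound television worth $400 under the old $0.60 rate:
print(liability_per_article(70, 400))   # capped at $42 (70 x $0.60)

# Maximum liability for a 4,000-pound shipment under the $1.25 rate:
print(shipment_valuation(4000))         # $5,000 (4,000 x $1.25)
```

Under the new scheme the $5,000 cap applies to the shipment as a whole, so the same $400 television would be reimbursed at its full depreciated value (or repair cost, if less) as long as total shipment loss stayed under the cap.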
Carrier transportation rates then automatically provided for carrier liability of $0.60 per pound per article. Increased liability could be purchased for a separate charge. At the time, carrier liability up to $1.25 times the shipment weight was available for $0.50 for every $100 of shipment valuation (shipment weight multiplied by $1.25 divided by $100 multiplied by $0.50). Rates for additional liability on commercial shipments are approved by the Interstate Commerce Commission. However, DOD household goods shipments are governed generally by provisions in DOD rate solicitations, and may differ from commercial practices. The carrier industry objected to moving DOD household goods at the commercial $1.25 rate primarily because military servicemember claims for loss and/or damage are settled by the military services. In commercial practice, the carrier usually settles such claims directly with the owner. The carrier industry generally believed that military claims settlement was too generous and resulted in excessive claims costs to the carrier. At one time, DOD allowed carriers to settle claims directly with the servicemember. This practice was changed, according to DOD, because carrier resolution of claims was found to be unacceptable. MTMC agreed to pay a separate charge of $0.64 per $100 valuation plus 10 percent of temporary storage charges instead of the commercial separate charge of $0.50 per $100 valuation plus 10 percent of temporary storage charges because the military services wanted to retain claims settlement authority for DOD household goods shipments. At the time, carrier industry associations contended that the separate charge for this level of carrier liability should have been $1.13 per $100 valuation, plus 10 percent of temporary storage charges. 
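The separate-charge arithmetic described above works out as follows; a sketch using the rates quoted in the report (the function name and the example storage charge are ours):

```python
# Illustrative sketch of the separate charge for increased liability:
# a rate per $100 of shipment valuation (shipment weight x $1.25),
# plus 10 percent of temporary storage charges.

def separate_charge(shipment_weight_lb, storage_charges=0.0,
                    rate_per_100=0.64, valuation_rate=1.25):
    valuation = shipment_weight_lb * valuation_rate
    return valuation / 100 * rate_per_100 + 0.10 * storage_charges

# 4,000-pound shipment, no temporary storage:
print(separate_charge(4000))                     # MTMC's $0.64 rate: $32.00
print(separate_charge(4000, rate_per_100=0.50))  # commercial $0.50 rate: $25.00
```

The $1.13 rate the carrier associations contended for would, on the same shipment, have produced a charge of $56.50 plus the storage component.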
Shortly before MTMC increased carrier liability on DOD domestic shipments in 1987, the Chairman, House Committee on Armed Services, asked us to review MTMC’s proposed changes and to determine a fair and adequate rate to compensate carriers for the increased liability. We subsequently reported that such a rate could not then be determined because (1) at the time of our review it was too early to determine the impact that increased liability would have on carrier performance and (2) determining a fair and adequate compensation level required a policy judgment about the appropriate performance level to be expected from carriers. We also reported that the $0.64 rate proposed by MTMC would compensate only the better-performing carriers if carriers performed as they did in fiscal year 1985, the most recent year for which adequate claims data was available. We estimated that at the 1985 performance level, approximately $3 million to $4 million in government costs would be transferred to the carriers under the increased liability, and that this should provide increased incentive for carriers to improve their performance. Carriers with liability costs greater than added revenues could (1) improve performance so less damage and loss occurred, (2) increase transportation rates, or (3) absorb the loss. We consequently supported MTMC’s 1987 policies for increasing carrier liability on domestic household goods shipments and concluded that the rate of carrier compensation for the increased liability established by MTMC in 1987 should remain unchanged until carrier performance data or additional cost information indicated that changes were needed. In March 1993, MTMC proposed that carrier liability also be increased from the $0.60 per pound per article rate to the $1.25 rate for international household goods shipments. However, MTMC’s proposal did not include a provision for any separate compensation to carriers for the increased liability. 
The carrier industry objected to the proposed changes, stating that international shipments were vastly different in nature from domestic shipments, and that no determination had been made of whether MTMC’s 1987 increase in domestic shipment liability had achieved its objectives of reducing the number and amount of damage claims and reducing government costs. During October 1993, MTMC increased carrier liability on international household goods shipments on an interim basis to $1.80 per pound per article, pending the completion of our review. At the request of the former Chairman, Subcommittee on Readiness, House Committee on Armed Services, we evaluated DOD household goods shipment programs to (1) determine the impact of the 1987 increase in carrier liability on domestic shipments and (2) suggest the level and type of carrier liability DOD should adopt for international shipments. During this review, we interviewed officials and reviewed documents associated with programs for the movement of household goods at MTMC, the Department of State, the offices of the Army, Navy, and Air Force Judge Advocates General, and Headquarters, U.S. Marine Corps. We also interviewed and obtained documents from carrier association officials and representatives of selected carriers. To facilitate our analysis, we obtained computerized records on almost 2.5 million DOD household goods shipments moved during fiscal years 1986 through 1991. We obtained computerized shipment and claims data from MTMC on all DOD domestic household goods shipments (MTMC shipment codes 1 and 2) and most international shipments (MTMC shipment codes 4, 7, 8, and J) initiated during fiscal years 1986 through 1991. We analyzed shipment and claims data for each of these codes. However, unless otherwise indicated, the data presented in this report for domestic shipments refers to uncontainerized van shipments (code 1) and to containerized international shipments (code 4). 
These two major types of shipments comprise the vast majority of DOD household goods shipments by both number and weight. MTMC data was not available for shipments occurring prior to fiscal year 1986. Also, we did not evaluate data for fiscal years after 1991 because considering the 2-year statute of limitations for servicemembers to file household goods claims against the government, inadequate time has passed to obtain sufficient claims data for analysis on these shipments. To verify the accuracy of claims data in the MTMC shipment records, we obtained from the offices of the service Judge Advocates General all computerized claims payment and recovery data available as of August 1993 for fiscal years 1986 through 1991. Only the Air Force could provide complete claims data for all years requested. The Army and the Marine Corps data was complete only for fiscal years 1988 through 1991 because claims data records for these services were not computerized prior to fiscal year 1988. We did not obtain data from the Navy because this service has not computerized its claims records. We did not attempt to manually review claims payment and recovery records because of the time and resources such analyses would require. We then did a computer matching of MTMC shipment data with the available military service claims data by Government Bill of Lading number. We associated all recorded claims data with the shipments involved, regardless of when the claims were filed, adjudicated, paid, and recovered. We used this method rather than rely on summarized military service claims payment and recovery records because the services summarize this information according to the fiscal year in which payment and recovery occurred. Service claims payment and recovery on a shipment often occurs in a different fiscal year than the one in which the shipment was moved. 
We evaluated carrier performance and claims costs by computer sorting the available shipment and claims data by carrier identification codes. In some cases, we also sorted carrier data by a specific traffic route. Carrier industry representatives (the American Movers Conference and the Household Goods Forwarders’ Association of America, as well as selected carriers) and DOD reviewed and concurred with our methodology for analyzing household goods shipment and claims data before we did this analysis. MTMC and each of the military services providing shipment and claims data concurred with the accuracy of the results of our data analysis. To perform this analysis, we combined data using different computer languages and formats into a single, common database. We provided our computer programs and analysis results to MTMC at the conclusion of our review because this information has many potential applications for the improved management of household goods activities, particularly those associated with evaluating individual carrier performance and military claims office adjudication and recovery efforts. To adjust our cost data for the effect of inflation, we used the Consumer Price Index to convert actual dollars to constant fiscal year 1993 dollars. We conducted our review from May 1993 to November 1994 in accordance with generally accepted government auditing standards. DOD claims costs declined after DOD increased carrier liability on domestic household goods shipments in 1987. Our analysis of DOD shipment and claims data for fiscal years 1987 through 1991 showed that DOD saved about $18.9 million in claims costs during this period. DOD would have saved an additional $3.2 million if all the military services had pursued claims recovery from carriers as effectively as the Air Force. Carrier performance on domestic shipments also improved. 
Although the claims frequency rate remained unchanged at about 20 percent of all shipments, the average amount of claim DOD paid to servicemembers declined under the increased liability from over $800 in fiscal year 1986 to $728 in fiscal year 1991. This represents an overall reduction of about 9 percent. The carrier industry generally opposed increased carrier liability, citing concerns that higher military service recovery levels would result in almost all claims being paid by the carriers. The industry questioned whether increased liability would reduce overall government household goods program costs. Our analysis showed that these concerns did not materialize. DOD claims costs for domestic household goods shipments declined after maximum carrier liability on these shipments was increased in 1987 from $0.60 per pound per article to the $1.25 valuation rate. For example, our analysis of Air Force computerized household goods shipment and claims data showed that the Air Force reduced annual domestic shipment claims costs during fiscal years 1988 through 1991 by 20 to 27 percent compared to the fiscal year 1986 level. This resulted in savings on Air Force shipments totaling about $7 million for the period. We could not determine overall DOD savings with the same accuracy as we could for the Air Force because Army and Marine Corps claims records were not computerized until 1988, and Navy claims records had not been computerized at the time of our review. However, our review of the available data showed that claims costs for the other services also declined. We estimate that increased carrier liability resulted in overall DOD savings totaling about $18.9 million during fiscal years 1987 through 1991. We analyzed claims costs for 363,776 Air Force domestic household goods shipments moved during fiscal years 1986 through 1991. The Air Force paid servicemember claims for loss and/or damage on 75,198, or about 21 percent, of these shipments. 
Annual Air Force claims costs after implementing the $1.25 rate (fiscal years 1988 through 1991) ranged from 20.3 percent to 27 percent less (an average of almost 24 percent less) than what they were in fiscal year 1986. Table 2.1 compares claims costs at the $1.25 rate after fiscal year 1987 with those at the $0.60 per pound per article rate in fiscal year 1986. The decrease for fiscal year 1987 is much less than for the other fiscal years because the $1.25 rate was implemented in mid-year. Claims costs are expressed in terms of claims cost per hundredweight to minimize the skewing effect of yearly fluctuations in shipment numbers, claims, and weights. We estimate that these claims cost reductions resulted in total savings of $7 million on Air Force domestic household goods shipments during fiscal years 1987 through 1991, or an average savings of about $1.6 million per year for fiscal years 1988 through 1991. We calculated the amount of savings by multiplying the hundredweight cost differences from fiscal year 1986 levels in table 2.1 by total shipment hundredweight for each fiscal year, as shown by table 2.2. Figure 2.1 illustrates the amount saved by comparing claims costs for fiscal years 1986 through 1991 with those that would have occurred in these fiscal years if claims cost per hundredweight levels had remained the same as occurred under the $0.60 rate in fiscal year 1986. For example, the figure shows that fiscal year 1989 claims costs at the $0.60 rate would have been $7.6 million in constant fiscal year 1993 dollars. However, at the $1.25 rate, these costs were actually $5.5 million during fiscal year 1989, resulting in constant dollar savings of over $2 million during that fiscal year. Air Force overall claims costs for these shipments declined beginning in 1987 even though DOD paid carriers a separate charge in addition to transportation charges for the increased liability. 
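The savings calculation described above, restated as arithmetic: for each fiscal year, multiply the difference between the fiscal year 1986 claims cost per hundredweight and that year's actual cost per hundredweight by the year's total shipment hundredweight. A sketch with hypothetical stand-in figures (not the actual values from tables 2.1 and 2.2):

```python
# Illustrative sketch of the per-hundredweight savings calculation.
# All figures below are hypothetical placeholders, not the report's data.
baseline_cost_per_cwt = 6.00   # hypothetical FY 1986 level, constant dollars

yearly = {  # hypothetical actual cost per cwt and total hundredweight
    1988: {"cost_per_cwt": 4.60, "total_cwt": 1_100_000},
    1989: {"cost_per_cwt": 4.40, "total_cwt": 1_250_000},
}

savings = {
    year: (baseline_cost_per_cwt - d["cost_per_cwt"]) * d["total_cwt"]
    for year, d in yearly.items()
}

print(savings[1988])   # savings versus the FY 1986 cost level
print(savings[1989])
```

Expressing costs per hundredweight before taking the difference is what minimizes the skewing effect of year-to-year changes in shipment counts and weights.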
In other words, the Air Force recovered more from carriers than the separate charge paid them for the increased liability. Table 2.3 illustrates the impact of increased recovery on Air Force constant dollar claims costs and shows (1) how overall Air Force costs for domestic shipments declined from about $8.7 million in fiscal year 1986 to between $4 million and $5 million in fiscal years 1990 and 1991, (2) how much the Air Force paid carriers for the increased liability, and (3) how the increased liability adjusted the percentage of overall claims costs paid by the Air Force and household goods carriers. Total Air Force claims costs declined by more than the 24-percent average reduction attributable to increased carrier liability because the total number of shipments, and consequently claims, also declined during this period. Based on the complete Air Force data, and Army and Marine Corps data that was available, we estimate that DOD would have saved $22 million during fiscal years 1987 through 1991 as the result of increased carrier liability on domestic household goods shipments if all the military services had performed as effectively as the Air Force. As previously mentioned, we could not determine the impact of increased carrier liability on DOD’s overall costs as accurately as we could for the Air Force because claims data for the other services was less complete. However, both DOD and carrier industry officials told us, and MTMC shipment data confirmed, that the physical characteristics of household goods shipments vary little between the military services. Air Force shipments averaged almost 32 percent of total DOD domestic household goods shipments by weight during fiscal years 1986 through 1991. We therefore estimated that if all the services had performed at the Air Force level, then total DOD savings at the $1.25 rate would have been slightly over $22 million (known Air Force savings of $7,037,001 divided by 0.3193). 
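The extrapolation in the parenthetical above is straightforward: if Air Force shipments were about 31.93 percent of DOD domestic household goods weight, then scaling the known Air Force savings by that share estimates DOD-wide savings at Air Force performance levels. As a sketch:

```python
# The report's extrapolation: known Air Force savings divided by the
# Air Force share of total DOD domestic household goods shipment weight.
air_force_savings = 7_037_001        # FY 1987-1991, from the report
air_force_weight_share = 0.3193      # about 31.93 percent of DOD weight

dod_savings_estimate = air_force_savings / air_force_weight_share
print(round(dod_savings_estimate))   # slightly over $22 million
```

This assumes the other services' shipments would have produced savings in proportion to weight, which the report supports by noting that shipment characteristics vary little between the services.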
The amount of savings that can be realized from increased carrier liability depends on how effectively DOD recovers claims costs from carriers. Project REVAL estimated that under the increased liability, DOD recovery from carriers would average 78 percent of the amount of claims paid to servicemembers. We found, however, that DOD did not realize its full savings potential of $22 million during fiscal years 1987 through 1991 because the other military services were not as effective as the Air Force in recovering from carriers. Only the Air Force met REVAL expectations. Our analysis of Army and Marine Corps data for fiscal years 1988 through 1991 showed that these services did not meet this recovery standard, which brought the overall average DOD recovery rate down to about 65 percent of the amount of claims paid during this period. Table 2.4 illustrates these variations in military recovery effectiveness in actual dollars. To determine actual savings, we adjusted the $22 million downward to reflect the differences between the Air Force recovery rate and those actually achieved by the other services. We estimated that actual DOD savings attributable to increased carrier liability for domestic shipments during fiscal years 1987 through 1991 was $18.9 million, or almost $3.2 million less than it would have been if all the services had recovered as effectively as the Air Force. The impact of variances in military service recovery effectiveness is discussed further in chapter 4. In commenting on our 1988 report, carrier industry officials objected to DOD’s implementation of the increased liability program in part because they believed the DOD recovery rate would increase to as much as 95 percent. At this rate, almost all claims costs would be passed to carriers. As shown in table 2.4, this did not occur. Instead, because of varying service recovery effectiveness, actual carrier claims costs were lower than predicted by REVAL, and far lower than carrier estimates. 
Increased carrier liability does transfer a greater portion of claims costs to carriers, but DOD still pays more than half of household goods claims costs. For example, under the $0.60 rate in fiscal year 1986, the Air Force recovered from carriers about 30 percent of the amount of claims paid to servicemembers. Under the $1.25 rate after fiscal year 1987, Air Force recovery from carriers on domestic shipments increased to an average of almost 78 percent of the amount of claims paid, but carriers also received payments for the additional liability through separate charges. Consequently, under increased liability during fiscal years 1988 through 1991, the carriers actually paid a maximum of 46 percent of Air Force claims costs (see table 2.3). Carrier industry representatives also told us they believed that even if DOD claims costs declined under the $1.25 rate, overall DOD costs might still have increased over the levels experienced under the $0.60 per pound per article rate if carriers had increased their transportation rates to compensate for the increased liability. However, we found that DOD household goods program net costs for domestic shipments (transportation costs plus claims costs less recoveries) also declined after the $1.25 rate was adopted in 1987. Table 2.5 illustrates how program costs declined from the level experienced before increased carrier liability was implemented in 1987. Declining program costs cannot be attributed solely to increased carrier liability. Transportation rates are influenced by many factors other than claims costs, such as insurance, competition, and individual carrier costs related to personnel, equipment, and facilities. Our analysis of DOD claims data by individual carrier confirmed that many carriers, especially those with high rates of loss and damage, were encountering claims costs higher than the compensatory revenues they received for the increased liability. 
These carriers could have compensated by raising their transportation rates. However, carrier industry officials told us that carriers had instead chosen to absorb these costs. They said the carrier industry was overbuilt, and that carriers in general were reluctant to increase transportation rates for fear of losing DOD business to other carriers with unchanged or lower rates. Both intense carrier competition and increased carrier liability therefore appear to have contributed to lowered DOD net program costs. We could not determine to what extent lowered net program cost was due to reduced claims costs versus other factors. These other factors vary between carriers and are difficult to measure. It is clear, however, that net domestic program costs declined after DOD implemented increased carrier liability, and that reduced claims costs contributed to this decline. One of DOD’s objectives in increasing carrier liability was to increase carrier incentive to prevent loss and damage to household goods. We found that while the percentage of domestic household goods shipments incurring servicemember claims changed very little under increased carrier liability, the average amount of claim paid declined. Our analysis of Air Force shipment and claims data showed that claims were paid on 20.7 percent of this service’s domestic shipments under the $0.60 rate in fiscal year 1986. After the $1.25 rate was implemented in 1987, the Air Force claims frequency rate showed little change, ranging from 19.3 to 22.7 percent between fiscal years 1987 and 1991. The combined Army, Air Force, and Marine Corps claims frequency rate was similar, ranging from 18.3 percent to 21.8 percent during fiscal years 1988 through 1991. However, the average amount of claim paid the servicemember declined under increased liability. 
Expressed in constant fiscal year 1993 dollars in order to adjust for the effects of inflation, the average amount of claim paid by the Air Force dropped from $821 in fiscal year 1986 to $637 by fiscal year 1991, and similar trends appear to have occurred for the Army and Marine Corps claims. Table 2.6 illustrates the declines in average amount of claim paid for the services we reviewed. Increased liability appears to have provided carriers with increased incentive to improve performance. Carrier industry officials cited a variety of actions they had recently taken to reduce their claims costs. These included holding drivers more responsible for any damage, improving packing and inventory techniques and materials, and providing training and offering incentives designed to improve performance and reduce shipment damage and loss. Although such improvements do not appear to have had an appreciable impact on claims frequency, they are likely to have been a significant factor in reducing the extent of the damage occurring on shipments with claims. This in turn has contributed to reductions in claims costs to both carriers and DOD. MTMC should now eliminate the separate charge paid carriers for the increased liability on domestic shipments. Carriers have had 7 years of claims cost experience under increased liability, and should therefore now be able to compensate for the loss of the separate charge by adjusting their transportation rates. Because none of the military services recovered more than an average of 80 percent of the amount of claims paid in any of the fiscal years we reviewed (see table 2.4), DOD would still absorb at least 20 percent of household goods claims costs. DOD should bear some responsibility for claims costs since DOD, rather than carriers, settles servicemember claims. The expectations for increased liability set by DOD have in part been achieved. 
DOD domestic household goods claims costs have declined, carrier performance is somewhat improved, and overall program costs are down. However, claims costs have not declined as much as expected because of varying military service effectiveness in recovering these costs from carriers. We believe the increased carrier liability at the $1.25 rate was fair and equitable to both DOD and the carrier industry for the period we reviewed. Under the $0.60 per pound per article rate, DOD bore more than 70 percent of claims costs, and carriers had little incentive to improve their performance. Under the increased liability, DOD still paid more than half the cost of servicemember claims for shipment loss and damage while reducing overall government costs and encouraging improved carrier performance. Carriers also received financial compensation for additional costs incurred as a result of increased liability. Carriers have now gained experience with increased liability claims costs, and should be able to build these costs into their transportation rates. Therefore, MTMC should eliminate the separate charge paid carriers for the increased liability on domestic shipments. We recommend that the Secretary of Defense direct the Commander of MTMC to eliminate the separate charge now paid to carriers to compensate them for increased risk on domestic shipments. DOD concurred with our findings and recommendation. DOD’s comments indicated that by March 31, 1995, the Office of the Secretary of Defense will direct the Commander, MTMC, to eliminate the separate charge now paid carriers to compensate them for increased risk on domestic shipments. This change is scheduled to take effect on domestic shipments beginning November 1, 1995. In commenting on this report, the American Movers Conference (AMC) and the Household Goods Forwarders Association of America, Inc., disagreed with our findings and recommendation. 
They said that the inflation index we used—the Consumer Price Index—overstated the actual amount of inflation and resulted in an overstatement of the amount of savings accruing to DOD as the result of increased carrier liability on domestic household goods shipments after fiscal year 1987. The AMC further noted that since there was no decrease in the frequency of household goods claims on domestic shipments after 1987, the primary impact of the increase in carrier liability was to transfer the cost of these claims from DOD to the household goods industry. We used the Consumer Price Index to adjust for inflation and enable dollar comparisons over fiscal years 1986 through 1991 for two primary reasons. First, in order to avoid just such methodology disputes, during the design phase of this assignment, we sought and obtained carrier industry review and concurrence with our analysis methodology, including the use of the Consumer Price Index as the appropriate index for such comparisons. Carrier industry officials suggested changing this index only after seeing the results of our analysis. After reviewing the alternate index proposed by the AMC, we are not convinced that AMC’s index provides a more accurate estimate than the index we used. The AMC maintained that the Consumer Price Index should not be used because it contains many components that have no direct bearing on claims costs, and instead proposed a combination of Consumer Price Index components that it claimed were more directly related to claims costs. However, while the overall Consumer Price Index does not match the specific makeup of household goods claims, neither does the index proposed by the AMC. It still excludes certain items and costs frequently found in household goods claims such as bicycles, music equipment, and photographic equipment. Also, the weighted values used by AMC’s index are based on the pattern of consumer expenditures rather than claims. 
It is therefore unclear whether or to what degree AMC’s index, or any similar index, might be more appropriate for tracking household goods claims costs. Furthermore, the overall Consumer Price Index is readily available in published form and is widely accepted as the appropriate standard for establishing constant dollar comparisons, although we acknowledge the existence of controversy over whether this index overstates inflation. AMC acknowledged in its comments that even using its index, increased carrier liability resulted in DOD claims costs reductions of 5.2 percent instead of the 9 percent we reported. Regardless of which index is used, increased carrier liability still resulted in reduced DOD claims costs.

AMC’s comments provided numerous additional reasons and data analyses to support further disagreement with the results of our analysis. The AMC cited analyses from our previous reports as the source of some of this data. In fact, the preponderance of this data came from AMC’s comments on our prior report, not from work performed by us. Also, this data was based on MTMC data shown to be inaccurate by our current analysis. Furthermore, we disagree with various technical aspects of the methodology AMC used in reaching many of its conclusions.

The AMC also suggested that fiscal year 1991 data be removed from our analysis because certain claims data for this fiscal year differed from similar data for other fiscal years. AMC’s comments cited lower claims cost recovery ratios for the Air Force and the Marines in fiscal year 1991 than in any of the prior fiscal years we evaluated, and suggested that some claims data might not have been included for fiscal year 1991 due to late claims filing times.
We believe these fluctuations are within normally expected ranges and do not warrant exclusion of the fiscal year 1991 data. For example, while Air Force officials told us that Operation Desert Storm affected claims personnel priorities, they also told us that claims personnel shortages and conflicting priorities were generally likely to affect their ability to consistently maintain an 80-percent recovery rate. All the services confirmed that our analysis accurately reflected their shipment and claims data for the period reviewed. We also previously investigated the drop in Marine recovery effectiveness from 63 percent in fiscal year 1990 to 46 percent in fiscal year 1991 that AMC cited in its comments. We found that due to a Marine Corps claims processing backlog, some Marine data had not been included in our initial analysis, and we modified our report accordingly. However, our review of the missing data revealed that it was little different from other Marine claims data and was of insufficient volume to affect our analysis results.

We agree with AMC’s comment that increased carrier liability has transferred a greater portion of DOD household goods claims costs to the carrier industry. Even after carrier liability was increased for domestic shipments in 1987, DOD paid the majority of these costs: the percentage of claims costs actually paid by carriers ranged from only 29 percent to a high of 46 percent annually. Removing the compensatory payment as we recommended would transfer more of these costs, but not more than 80 percent, to the carrier industry. We believe the carrier industry, not DOD, should be responsible for damage and loss occurring while the shipments are under the control of carriers. Furthermore, increased carrier liability provides carriers with increased incentive to find new ways to prevent or reduce shipment damage and loss.
Poorly performing carriers would probably be forced to increase their transportation rates, thus becoming less competitive for DOD business. In summary, we believe DOD should reasonably expect carriers to deliver shipments in the same condition as when they were submitted for shipment. Costs associated with any damage should be borne by the party causing the damage. Carriers should include costs for loss or damage inherent in moving household goods in their transportation rate bids, just as they include other costs such as packing, unpacking, linehaul, and insurance. Also, we want to make it clear that MTMC does not establish a ceiling on carrier transportation rate bids as implied in AMC’s comments. It does establish a standardized baseline rate against which carriers are expected to bid. Carriers can and do bid both above and below this baseline rate. The only restraint on rate increases is competition among the carriers themselves.

At the $0.60 per pound per article carrier liability rate, DOD absorbed a disproportionate share of the claims costs resulting from loss and damage to international household goods shipments, and carriers had only limited incentive to improve their performance. Our evaluation of DOD shipment and claims data indicates that adoption of the $1.25 valuation rate for international shipments would be an effective way to lower program costs and reduce the level of loss and damage to servicemembers’ household goods. However, adoption of DOD’s proposal to implement the $1.25 liability rate without any type of compensatory payment or premium might cause a major disruption in the carrier industry. Implementation of the $1.25 rate would therefore need to be accompanied by a compensatory payment for a limited period. This would give carriers an opportunity to gain experience under the higher claims liability, enabling them to include claims cost increases in future transportation rates.
DOD officials told us that their proposal to increase carrier liability to the $1.25 rate for international shipments was made for the same reasons it was implemented domestically (see ch. 1). They said that reducing damage to household goods shipments was important because it affected servicemember morale, quality of life, and retention rates. In addition, they said that loss and damage, and consequently, the average amount of claim, was greater for international shipments than for domestic shipments. They cited instances of careless dockside handling of shipments, said that shipment pilferage and theft was a substantial problem in several overseas regions, and stated that the $0.60 per pound per article carrier liability rate in effect since 1967 provided little incentive for carriers to correct these problems or otherwise improve their performance. According to these officials, standardization of carrier liability would also simplify claims adjudication and recovery procedures. The primary problem with continuing carrier liability on a per pound per article basis is that it limits carrier liability on the basis of an item’s weight rather than its value. DOD officials expressed concern about the costly impact of paying servicemember claims according to an item’s depreciated value or repair cost, while recovering claims costs from carriers on the basis of item weight. For example, under this liability system DOD is unable to recover reasonable repair or replacement costs for low-weight, easily damaged items such as stereos, televisions, compact disks, and other high-value items that are also frequently the targets of shipment pilferage. We believe that implementing the $1.25 rate on international shipments will improve carrier performance and reduce program costs. 
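The weight-versus-value problem described above can be illustrated with a short sketch. The item weight and replacement cost below are hypothetical, chosen to mirror the low-weight, high-value items (stereos, televisions) cited in the text.

```python
def per_article_liability(rate_per_lb, item_weight_lb, item_loss_value):
    """Carrier liability for one lost or damaged item under a
    per-pound-per-article rate: capped at rate times item weight,
    never more than the actual loss value."""
    return min(rate_per_lb * item_weight_lb, item_loss_value)

# Hypothetical 10-lb stereo receiver with a $400 replacement cost:
cap_060 = per_article_liability(0.60, 10, 400)   # $6.00 recoverable at $0.60
cap_180 = per_article_liability(1.80, 10, 400)   # $18.00 at the interim $1.80 rate
```

Either way, DOD recovers only a small fraction of the $400 it would pay the servicemember; this shortfall on lightweight, high-value items is what the valuation-based $1.25 rate is meant to address.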
Our evaluation of DOD domestic shipment and claims data for household goods moved during fiscal years 1986 through 1991 showed that after implementation of the $1.25 rate, carrier performance improved and DOD’s overall program and claims costs for these shipments declined (see ch. 2). These patterns contrast with those for international shipments during the same period. At the $0.60 per pound per article rate, international shipments experienced a gradual increase in damage and loss frequency, and incurred relatively high and generally increasing claims costs. Our analysis of DOD claims data for international shipments revealed that both the frequency of loss and/or damage to international shipments and the average amount claimed increased during fiscal years 1988 through 1991. Of the 150,345 overseas containerized household goods shipments moved by the Air Force, the Army, and the Marine Corps at the $0.60 rate in fiscal year 1988, loss and damage claims were filed on 30,657 (20.4 percent). The claims frequency rate then increased to 22.4 percent, 23.5 percent, and 23.7 percent, respectively, during fiscal years 1989 to 1991. While this increase is a relatively moderate 3.3 percentage points for the period, it differs from the domestic claims frequency rate in that it is consistently increasing. The average amount of claim paid for these shipments also increased overall during this period. After adjusting for inflation (converting to constant fiscal year 1993 dollars), the average amount of claim paid per hundredweight (per 100 pounds shipped) for these shipments was $6.22 in fiscal year 1988, and $6.39, $6.65, and $6.26, respectively, during fiscal years 1989 to 1991. At the $0.60 rate, DOD claims cost recovery from carriers has been limited on both domestic and international shipments. 
For example, the Air Force paid servicemembers over $9.4 million for claims on fiscal year 1986 domestic shipments, and recovered (at the $0.60 rate) over $2.8 million (29.9 percent) from carriers. Air Force recovery on fiscal year 1986 containerized international shipments at the same $0.60 rate was substantially less: about $1.8 million of the $7.2 million paid for claims, or 24.9 percent. Air Force recovery at the $0.60 rate for unaccompanied baggage shipments, which comprise several additional types of international household goods shipments, was lower still: $367,555 of the $1,760,212 paid for claims, or 20.9 percent. Recovery activities for the other military services were less effective than those of the Air Force for all types of shipments during the period we reviewed. Table 3.1 shows transportation and claims costs for Air Force, Army, and Marine Corps containerized international shipments at the $0.60 rate during fiscal years 1988 through 1991. As shown by table 3.1, on average only about 15 percent to 21 percent of the amount of claims paid was recovered at the $0.60 rate. Unaccompanied baggage recovery averaged only 14.7 to 17.6 percent of the claims paid during this period. We could not determine recovery rates for the Navy because its claims data is not computerized. However, MTMC officials told us that they believed Navy recovery performance was unlikely to be substantially different from the average of the other military services.

The $0.60 rate usually results in the government bearing more than 80 percent of the costs associated with claims for shipment loss and damage on international shipments. We believe this level of carrier liability is too low to provide the necessary financial incentive to improve carrier performance.
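The recovery percentages above are simple ratios of the amount recovered from carriers to the amount paid to servicemembers. As a sketch, using the unaccompanied baggage figures from the text:

```python
def recovery_rate_pct(recovered_from_carriers, paid_to_members):
    """Percentage of claims dollars paid to servicemembers that DOD
    subsequently recovered from carriers."""
    return recovered_from_carriers / paid_to_members * 100

# Air Force FY1986 unaccompanied baggage at the $0.60 rate (figures from the text):
ub_rate = recovery_rate_pct(367_555, 1_760_212)   # about 20.9 percent
# The government's share of these claims costs is the remainder:
govt_share = 100 - ub_rate                        # about 79.1 percent
```

The complement of the recovery rate is the share of claims costs the government absorbs, which is how the "more than 80 percent" figure for international shipments follows from recovery rates below 20 percent.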
During work on a prior report, carrier industry officials told us that before implementation of the $1.25 rate for domestic shipments, carrier liability at the $0.60 rate was so limited that claims recovery attempts were often not contested, and some carriers did not even have claims departments. During our current review, industry officials told us that increased liability levels and other factors have forced carriers to pay much more attention to both avoidance of shipment damage and loss and claims adjudication. Changing carrier liability to the $1.25 rate as proposed by MTMC should reduce both DOD claims costs and overall program costs. Recoveries from carriers would likely increase in a fashion similar to that experienced after the adoption of this rate for domestic shipments in 1987. However, MTMC’s proposal to increase carrier liability in this fashion without any type of compensatory payment or premium could unfairly shift increased claims and other costs to carriers and could cause substantial industry disruption. Implementation of this increased liability rate would therefore need to be accompanied by a compensatory payment to carriers. The amount of increased carrier costs and subsequent government savings would vary depending on the compensatory rate used and assumptions regarding the effectiveness of military service recovery activities. Adoption of the $1.25 rate for DOD international shipments would probably cause claims costs for these shipments to decline in much the same fashion as did domestic shipments. DOD claims officials told us that claims adjudication for international shipments is essentially the same as that for domestic shipments, except for the carrier liability rate. For example, both involve the same types of household goods, the same claims adjudication and payment process for the servicemember, and the same recovery process from carriers. 
We found the average amount of claim paid is higher for international shipments, but military service recovery activities are less effective on international shipments than they are on domestic shipments. The main difference occurs in the carrier liability rate, or determining how much of the amount paid is to be recovered from the carrier. Any increase in carrier liability would reduce DOD claims costs because the overall amount of DOD recoveries from carriers would then increase. However, the carrier industry maintains that low liability rates, such as the $0.60 rate, might result in lower net program cost to the government because low liability rates would allow carriers to charge lower transportation rates, which would more than offset the high DOD claims payment costs associated with this rate. Carrier industry representatives said that increased carrier liability might not reduce net program costs because carriers would be forced to increase transportation rates to cover their increased liability costs. However, transportation rates did not increase enough to prevent a net decline in program costs after implementation of the $1.25 liability rate for domestic shipments. As discussed in chapter 2, our analysis of household goods shipment and claims data showed that both DOD claims costs and net program costs declined after the implementation of the $1.25 rate for domestic shipments in 1987, resulting in savings totaling about $18.9 million during fiscal years 1987 to 1991. DOD and carrier industry officials told us that domestic shipment transportation rates have not increased substantially since 1987 because the carrier industry is overbuilt and competition for DOD business is fierce. They said this is partially due to recent reductions in both the size of the U.S. military and the number of personnel stationed overseas. 
Some carrier industry officials told us, and our analysis also indicates, that domestic carriers absorbed a portion of the additional costs associated with increased liability rather than becoming less competitive for DOD business through increased transportation rates. We could not determine what might happen to international transportation rates under the $1.25 liability rate. Many factors other than liability could have an impact on these rates. For example, carrier industry officials told us that carrier transportation charges did not increase with the implementation of the temporary $1.80 per pound per article rate, due largely to declining steamship transportation rates. We believe it appropriate for DOD to realize financial benefits occurring as the result of intense carrier competition as long as carriers have the opportunity to adjust the rates they charge DOD for transporting household goods shipments. Implementation of the $1.25 rate for international shipments could result in carrier industry disruption if it is not accompanied by additional payments to carriers in compensation for the increased liability. MTMC did not make provision for a compensatory rate when it proposed the $1.25 rate for international shipments. Most of the carriers we interviewed told us they would have difficulty adjusting their international shipment transportation rates to cover the cost of their increased liability. Many of them perform only DOD international shipments, and therefore have no commercial or domestic experience using the $1.25 rate. They said that overestimation of the costs they might experience at the $1.25 rate would cause them to set transportation rates too high, making them noncompetitive for DOD business. Conversely, underestimation would result in transportation rates insufficient to cover claims costs. Either could lead to financial losses or bankruptcy. 
They also cited other uncertainties, such as how such a change in liability might affect insurance and other costs. We believe it is a normal business practice for the carrier industry to estimate its costs and determine its transportation charges to provide whatever service is needed by DOD. However, we believe DOD should compensate carriers in exchange for their added risk. Carrier industry and DOD officials told us the financial status of many carriers is weak due, in part, to military reductions in force and intense competition for existing business. Compensatory payments would provide a financial buffer during the period when carriers were adjusting to the new liability rate, thus reducing the potential for carrier bankruptcies and subsequent stranding of en route shipments. Adequate claims data to evaluate the impact of increased liability on international shipments should be available within 2 to 3 years from the implementation date. By then carriers will have had adequate claims experience under the new rate to accurately estimate their claims and other costs associated with the increased liability, and should be fully capable of adjusting their transportation rates as needed. Also, MTMC could then evaluate the impact of the increased liability and determine whether to continue compensatory payments to carriers. We could not, with certainty, determine a fair and adequate separate charge to compensate carriers for their increased liability, for two reasons. First, because carrier performance levels vary, establishing a single separate charge that is fair and adequate for overseas carriers requires policy judgments about the appropriate performance level to be expected from carriers. Second, the impact of this proposed increase on carrier performance, and consequently on the number and amount of claims submitted by servicemembers, cannot be accurately predicted. 
However, we did develop an expected impact of increased liability on international shipments, based on the available shipment and claims data and certain aspects of increased liability’s impact on domestic shipments. We evaluated carrier performance data for all international household goods shipments moved by the top 50 carriers by total weight shipped during fiscal years 1989, 1990, and 1991. These carriers moved 75.2 percent of all containerized international shipments moved by DOD in fiscal year 1989, 76 percent of those moved in 1990, and 71.9 percent of those moved in 1991. We found that the average level of loss and damage to these shipments varied according to carrier. For example, the percentage of these shipments incurring claims during these 3 fiscal years ranged from slightly under 9 percent for the best-performing carrier to over 30 percent for the worst, with the average varying between 22 percent and 24 percent for each fiscal year. The average amount of claim paid by DOD to the military servicemember also varied widely by carrier, ranging from $580 for the best-performing carrier to more than $1,200 for the worst, with averages ranging from $779 to $898. Such variations in carrier performance contributed to our difficulty in determining a separate charge that would be fair and adequate for all carriers. A high separate charge would result in significant revenue increases for the better-performing carriers, while revenue for a low performer would be inadequate to cover costs associated with increased liability. To determine an appropriate separate charge for the $1.25 rate, an evaluation must first be made of the rate’s expected impact on the amount DOD would recover from carriers. We believe that application of the $1.25 rate to international shipments would have a similar impact on the percentage of the amount of claims paid recovered from carriers as it did for domestic shipments. 
DOD claims officials told us that no differences exist between domestic and international shipments with regard to the procedures used for determining the amount of the claim to be paid to the servicemember—only the method for calculating the carrier’s liability is different. Our analysis of Air Force, Army, and Marine Corps claims data showed that application of the $1.25 rate to domestic carrier liability caused DOD recovery from these carriers to increase from less than 30 percent of the amount of claim paid to an average of about 65 percent. Among the services, after implementing the $1.25 rate, only the Air Force achieved and maintained the expected recovery level of 78 percent of the amount of claims paid. We assumed a recovery effectiveness rate of 69 percent of the amount of claim paid for developing our compensatory rate estimates for overseas shipments. We chose this recovery rate rather than the 78 percent used by Project REVAL and demonstrated by the Air Force on domestic shipments because (1) it was the highest combined single-year recovery rate achieved by the services on domestic shipments under increased carrier liability during fiscal years 1988 through 1991 (see table 2.4) and (2) military service recovery for overseas shipments is less effective than for domestic shipments. We determined that, on average, an appropriate compensatory rate could range from $1.50 to $2.04 per $100 of shipment valuation, depending on the criteria used. For example, our computerized analysis of claims data showed that a compensatory rate of $1.50 would result in carriers paying 37 percent of total claims costs and DOD 63 percent for a government savings of $5.7 million per year if carriers performed like they did during fiscal years 1989 through 1991. 
However, at this rate, almost no carriers would have sufficient compensatory payments to cover their claims costs, and consequently would have to raise their transportation charges, improve their performance, or absorb the loss. At $1.69, about 5 of the 50 carriers we reviewed would have sufficient revenues to cover their claims costs, carriers would pay 32 percent, and DOD 68 percent of claims costs, with DOD savings of $4.4 million per year. At $2.04, at least 28 percent of the carriers we reviewed would have sufficient revenues to cover claims costs, and DOD savings would average almost $2 million per year. Table 3.2 shows the impact of these compensatory rates on DOD costs for international household goods shipments. Carriers whose claims would not be fully covered by the separate charge would have to improve their performance, absorb the loss, or cover their claims costs through higher transportation rates. Carriers with continued poor performance would probably be forced to increase their transportation rates, thus becoming less competitive in obtaining contracts for the movement of DOD household goods shipments. Carrier selection for DOD business would then be more closely aligned with the cost and quality of the service rendered. These calculations were not adjusted to give consideration to other carrier costs that could change as the result of the increased liability (such as insurance premiums and administrative costs). These costs vary by carrier and are difficult to substantiate and measure. DOD did not implement the $1.25 rate for international shipments in October 1993 as proposed. Instead, it increased carrier liability on these shipments on an interim basis to $1.80 per pound per article, pending the completion of our review. We could not evaluate the impact of the $1.80 rate because insufficient time has passed to accumulate adequate shipment and claims data for such an analysis. 
The maximum effect of this increase would be to triple recoveries from carriers since the rate itself was tripled (3 X $0.60 = $1.80). However, carrier industry officials told us they expected this rate would result in recoveries being increased by a factor of 2 to 2.5 times current levels rather than tripling them. This would occur largely because the replacement or repair costs of some heavier, relatively low-cost items would be more than $0.60 times the item weight, but less than $1.80 times the item weight. The $1.80 does represent a substantial increase in carrier liability. If this rate does cause recoveries to increase by a factor of 2 to 2.5, then the amounts recovered from carriers would increase from a high of about 24 percent of the amount claimed on Air Force international shipments during fiscal years 1988 to 1991 at the $0.60 rate, to a maximum of about 48 to 60 percent under the $1.80 rate. By contrast, under increased liability at the $1.25 rate (with a compensatory payment to carriers of $1.69 per $100 shipment valuation) we estimate carriers would be responsible for about 32 percent of shipment loss and damage costs during the 3-year introductory period if the military services improve overall recovery effectiveness to an average of 69 percent of the amount of claim paid. Removing the compensatory payment after 3 years would result in carriers then being responsible for about 69 percent of shipment loss and damage costs. Whether the $1.80 rate will reduce overall government costs depends on whether and to what degree carriers might increase their transportation rates to obtain additional revenue with which to pay increased claims costs. According to DOD and industry officials, transportation rates bid by the carriers did not increase with the implementation of $1.80 per pound per article liability. However, carrier representatives told us this was due to major decreases in the steamship transportation rates paid by carriers. 
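The carrier-share and recovery-multiple estimates above follow from straightforward arithmetic on recoveries and compensatory payments. A minimal sketch of the model structure follows; the dollar inputs are hypothetical placeholders, not the actual figures from the service claims databases GAO analyzed.

```python
def carrier_net_share(claims_paid, recovery_effectiveness,
                      comp_rate_per_100, total_valuation):
    """Net share of claims costs borne by carriers: DOD recoveries at the
    assumed effectiveness rate, offset by the compensatory payments DOD
    makes per $100 of declared shipment valuation."""
    recovered = claims_paid * recovery_effectiveness
    comp_payments = comp_rate_per_100 * total_valuation / 100
    return (recovered - comp_payments) / claims_paid

# With the compensatory payment removed (rate of zero), the carrier share
# equals the recovery effectiveness itself, matching the text's 69 percent.
# Dollar inputs here ($30M claims, $1B valuation) are hypothetical.
share_no_comp = carrier_net_share(30e6, 0.69, 0.0, 1e9)   # ~0.69

# The $1.80 interim rate triples the old $0.60 cap (3 x $0.60 = $1.80), but
# industry officials expected recoveries to rise only 2 to 2.5 times:
base_recovery = 24                                       # percent, at $0.60
projected = [base_recovery * m for m in (2.0, 2.5)]      # 48 to 60 percent
```

A positive compensatory rate lowers the carriers' net share below the recovery effectiveness, which is why the text's estimate falls from 69 percent to about 32 percent when the $1.69 payment is included.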
Carrier industry officials are generally opposed to the $1.25 rate proposed by DOD. They believe this rate would be inappropriate for international shipments because (1) no determination has been made that the $1.25 liability rate actually reduces program costs, (2) the international and domestic programs are so different as to prevent meaningful comparison, and (3) changing carrier liability to the $1.25 rate would result in severe industry disruption. First, carrier industry officials have acknowledged that increasing carrier liability would reduce DOD claims costs. But they questioned whether this would result in a reduction in overall program costs. They said that limiting carrier liability allowed carriers to keep transportation rates low, and that these lower rates might well offset any savings in claims costs. Overall government costs thus might be lower at $0.60 per pound per article than with a higher liability rate. However, our analysis showed that overall government costs on domestic shipments were lower under increased liability (see ch. 2). Furthermore, as noted in chapter 1, carrier industry officials told us that raising carrier liability for DOD household goods shipments was unfair because military servicemember claims for lost or damaged household goods are settled by the military services. In commercial practice, the carrier usually settles such claims directly with the owner. The carrier industry believes that military claims settlement is too generous, and results in excessive claims payments. Carrier industry officials told us that liability based on per pound per article tended to protect carriers from the high costs associated with military claims settlement, and that increased carrier liability would simply allow DOD to pass these payments on to carriers. Under increased liability DOD still pays more than half of all claims costs. 
Therefore, we believe carriers are compensated for any additional claims costs resulting from military claims settlement. Furthermore, our analysis of military service claims data showed that, on average, military claims offices authorize payment for about 66 percent of the amount claimed by servicemembers. Although this may be more than would be allowed under carrier settlement, we do not believe it results in excessive claims costs for carriers. Second, carrier industry officials told us that the risks associated with international household goods shipments are vastly different from those for domestic shipments. They said international shipments are usually in transit for much longer periods of time than domestic shipments, are handled by more parties, and are subjected to more loading, unloading, and other movement in transit (such as ship roll) than domestic shipments. They also said that other factors, such as limited control over shipping lines and destination agents, foreign laws and customs, and varying currency exchange rates, all cause international carriers to have much less direct control over shipments for which they are liable. We agree that risks and costs are generally higher for overseas shipments, but these costs vary between carriers and routes. Also, compensatory payments for international shipments could be set higher than those for domestic shipments ($1.50 to $2.04 for international shipments compared to $0.64 plus 10 percent of storage in transit costs for domestic shipments). In any event, carriers continue to have the option to adjust their transportation rates to compensate for such costs. Third, carrier industry officials told us that many overseas carriers would be unable to develop accurate claims cost estimates under the $1.25 rate. Because carrier liability for overseas DOD shipments has been computed on a per pound per article basis, many carriers have had no claims experience with the $1.25 rate. 
This is particularly the case for carriers we interviewed that handle only DOD international shipments. Overestimation of their claims costs under the new rate might cause carriers to raise their transportation rates too much and consequently lose government traffic to competing carriers. On the other hand, underestimation could result in inadequate revenues to cover costs. Carrier industry officials also told us that the carrier industry was overbuilt and financially stressed, that the number of DOD overseas shipments was declining, and that making major changes now in the way carrier liability is computed for international shipments could lead to many carrier bankruptcies, which would in turn disrupt both the industry and DOD operations. They said that any increase in carrier liability for these shipments should be kept on a pound-per-article basis, and that DOD should collect and review claims at the current temporary carrier liability rate of $1.80 per pound per article before making any changes. We believe the payment of a compensatory rate for at least 3 years would avoid industry disruption and allow carriers adequate time to obtain sufficient claims experience under increased liability to enable adjustment of their transportation rates. After 3 years, MTMC and the military services should also have sufficient claims data to determine what level of carrier liability is desired and whether the compensatory rate should be adjusted or terminated. The maximum carrier liability rate of $0.60 per pound per article for international household goods shipments is too low. At this rate, carriers have limited incentive to improve performance, and the government bears a disproportionate percentage of household goods claims costs. The $1.25 rate would more fairly allocate claims costs between DOD and the carriers. However, industry disruption may occur unless this rate is accompanied by a temporary compensatory payment. 
We recommend that the Secretary of Defense direct the Commander of MTMC to increase carrier liability to the $1.25 rate on international household goods shipments after providing notice to carriers through the Federal Register. However, we also recommend that this rate be accompanied by a compensatory payment for 3 years, or until sufficient claims data is available to permit carriers to file transportation rates that will adequately compensate them for the increased risk they would assume. DOD concurred with our findings and recommendation. Its comments indicated that the Secretary of Defense will direct MTMC to increase carrier liability on international household goods shipments made on or after October 1, 1995. MTMC subsequently notified carriers through the Federal Register, dated February 16, 1995, that as of October 1, 1995, it intended to increase carrier liability on international shipments to the $1.25 rate with a compensatory rate of $1.28 per $100 of shipment valuation. Both the AMC and the Household Goods Forwarders Association of America, Inc. (HHGFAA), disagreed with our findings and recommendation. In commenting on this report, the AMC said that carrier liability should not be increased to the $1.25 rate on international shipments because nothing was achieved by increasing carrier liability on domestic shipments except that liability for shipment loss and damage was transferred from DOD to the carrier. The AMC said that if the $1.25 rate is implemented for international shipments, MTMC should pay a valuation charge (compensatory rate) of $2.31 per $100 of shipment valuation. It further said that if MTMC is unwilling to pay this level of compensation, then carrier liability should be returned to the $0.60 per pound per article rate. We believe the $2.31 compensatory rate proposed by the AMC is too high and would provide little incentive for carriers to reduce shipment damage and loss. 
This rate would cause carriers to pay higher claims costs initially, but would also result in DOD reimbursing them for the added cost. The overall financial impact on both DOD and the carrier industry would thus remain unchanged. The better performing carriers would realize windfall profits, the average carrier would break even, and only the worst performing carriers would have incentive to improve. We believe the compensatory rate should be designed to fully compensate only the better performing carriers. Other carriers would have to improve their performance, increase their transportation rates, or absorb the loss. Changing carrier liability is pointless unless it has a significant monetary impact on both DOD and the carriers. Woven throughout carrier industry comments is the theme that increasing carrier liability actually does little more than transfer claims costs to the moving industry. This is exactly what DOD has attempted to do. DOD has historically borne a disproportionately large share of claims costs. Increasing carrier liability would transfer a greater portion of the costs associated with damaged and lost household goods to the industry responsible for the problem. Even under increased liability, DOD would still be paying at least 20 percent of claims costs. The carrier industry further stated that carriers should be allowed to settle claims for loss and damage directly with servicemembers. They noted that it is common commercial practice for the carrier to settle claims directly with the shipper, and that DOD claims settlement is too generous. DOD officials told us that carriers had been allowed to settle claims directly with servicemembers in the past, but that this practice had been discontinued because DOD believed many settlements had been unfair. Carriers currently can and often do offer servicemembers cash for losses and damage at the time of shipment delivery in an attempt to avoid the DOD claims settlement and recovery process. 
Furthermore, adopting commercial claims settlement practices would be much more appropriate if commercial practices were also applied in selecting carriers for DOD shipments. The current process for awarding DOD business to carriers is regulated in such a way as to place more emphasis on low transportation rates and on spreading DOD business over a large number of carriers than on awarding more shipments to those carriers providing the best service and value for the cost. In any event, carriers can fully compensate for any increased costs associated with DOD claims settlement practices by increasing their transportation rates. The HHGFAA objected to any increase in carrier liability for international shipments. It said carrier liability for these shipments should not be increased, primarily because we had not evaluated the impact of (1) the October 1993 increase in carrier liability from $0.60 per pound per article to $1.80 per pound per article, (2) MTMC's Total Quality Assurance Program, or (3) the High Risk Protection Program implemented by the carrier industry. The HHGFAA also said that domestic and international shipments are so different that experience with the $1.25 rate on domestic shipments should not be used as a basis for applying this liability rate to international shipments. The HHGFAA suggested leaving the liability rate at $1.80 per pound per article until such time as we could perform a statistical evaluation of this rate's impact on claims costs. MTMC increased the carrier liability rate on international shipments from $0.60 per pound per article to $1.80 per pound per article as an interim measure pending the outcome of our study. It was intended only to give temporary relief to DOD, which had been bearing a disproportionate share of claims costs for years. 
As discussed in this chapter, we could not review the impact of the $1.80 per pound per article rate because inadequate time has passed to accumulate shipment and claims data to make a meaningful analysis. We did generally discuss the potential impact of the $1.80 rate. However, we are opposed to the retention of carrier liability based on a per pound per article rate because it results in carrier liability being based on a lost or damaged article’s weight rather than its value. As stated earlier, this has the costly impact of DOD paying servicemember claims on the basis of an item’s value or repair cost, while recovering from carriers on the basis of item weight. Carrier liability for high-value, low-weight items is greatly limited on the very items that tend to be easily damaged or are often the target of shipment pilferage. During this review we did do some audit work regarding MTMC’s Total Quality Assurance Program and the carrier industry’s High Risk Protection Program. In this chapter we acknowledged that these programs had potential for affecting claims costs. However, we could not determine whether or to what degree they actually impact these costs because both were implemented only recently and there has been inadequate time to accumulate the claims data needed for such an evaluation. Furthermore, the work we performed revealed that MTMC’s Total Quality Assurance Program is being affected by several implementation problems, and that its effectiveness and appropriateness as a tool for assuring quality moves is presently unclear. We agree with the HHGFAA that risks and costs are generally higher for international shipments than for domestic shipments. However, the carrier, not DOD, is still responsible for loss and damage occurring while household goods are under its control, including handling by destination agents or other subcontractors used by the carrier. The carriers should build the cost of such risks into their transportation rates. 
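The high-value, low-weight problem noted above can be illustrated with a hypothetical item. The camera, its weight, and the shipment figures below are invented for illustration, and we assume for the value-based case that the $1.25 rate sets a shipment-level valuation of $1.25 times net shipment weight, with individual items recoverable at value up to that ceiling; the actual valuation mechanics may differ in detail.

```python
# Hypothetical comparison: a 5-lb camera with a $400 replacement cost.
# Under per-pound-per-article liability, recovery is capped by weight;
# under a valuation-based approach (assumed here to be $1.25 times net
# shipment weight, with items recoverable at value up to that ceiling),
# recovery reflects the item's value. All figures are illustrative.

claim_amount = 400.00  # replacement cost of the camera
item_weight = 5        # pounds

# Weight-based caps at the two per-pound-per-article rates:
cap_at_060 = 0.60 * item_weight  # $3.00
cap_at_180 = 1.80 * item_weight  # $9.00

# Value-based recovery under the assumed $1.25-rate mechanics:
shipment_weight = 8000                      # pounds, whole shipment
valuation_ceiling = 1.25 * shipment_weight  # $10,000
value_based = min(claim_amount, valuation_ceiling)  # $400.00

print(cap_at_060, cap_at_180, value_based)
```

Even at the tripled $1.80 rate, the weight-based recovery on such an item covers only a few dollars of a $400 loss, which is the pattern the report identifies for easily damaged or pilfered items.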
Furthermore, while differences may exist with regard to the amount of carrier risk associated with domestic as opposed to international shipments, the process for adjudicating claims for loss and damage is the same. The issue is what liability rate should be applied. If DOD decides that it should assume more liability for loss and damage and the carriers less, then it should do so by lowering the $1.25 rate to $1.10 or some other level rather than retaining a liability system based on the weight of the items shipped instead of their value. Increasing carrier liability should result in reduced DOD costs and improved carrier performance. However, several problems affecting DOD’s household goods programs need to be addressed for increased liability to achieve its intended effectiveness. These include the lack of shipment and claims data necessary for managing the household goods program, variances in cost recovery effectiveness among the military services, questionable performance bond and insurance collection procedures, and an unnecessarily long statutory period for filing household goods claims. MTMC needs accurate household goods shipment and claims cost data to meet its responsibilities for overall household goods program management, determine cost effectiveness, and make program changes as needed. However, MTMC’s household goods program database has major problems that prevent DOD officials from obtaining adequate information to effectively manage many aspects of this program. MTMC officials do not have adequate information with which to evaluate individual carrier performance. MTMC obtains periodic reports from the military service Judge Advocates General that include data on the number and amount of claims paid for loss and damage to household goods shipments, and stores this information in computerized data banks. 
We compared computerized household goods claims data we obtained directly from the military service Judge Advocates General with that stored by MTMC for shipments moved during fiscal years 1986 through 1991. We found that MTMC claims data has major omissions. For example, the MTMC database was always missing at least 28 percent of the claims paid and claims recovered data on Air Force shipments made between fiscal years 1986 and 1991, and at least 40 percent of similar data for the Army between fiscal years 1988 and 1991. MTMC officials told us that most of the similar data from the Navy and the Marine Corps had not been submitted to MTMC in fiscal years 1990 and 1991. Officials from MTMC’s Traffic Management Analysis Division told us they considered MTMC’s household goods claims data so unreliable as to prevent meaningful analysis. We also found that MTMC does not track some costs essential to evaluating increased liability effectiveness. For example, in exchange for the increased liability on domestic household goods shipments, MTMC has since 1987 paid carriers a separate charge of $0.64 per $100 shipment valuation plus 10 percent of certain storage in transit charges. We found the MTMC database does not capture what costs are paid as a result of the storage in transit calculation, and therefore, MTMC could not determine its total costs for the increased liability. We had to review actual shipment records stored at the General Services Administration to determine these costs. We also found numerous other technical problems with portions of MTMC’s household goods database; these problems greatly limit MTMC’s oversight of the program’s performance characteristics and cost. After military service claims offices adjudicate and pay servicemember claims for loss and damage on household goods shipments, the military services attempt to recover these costs from carriers up to the extent of the carrier’s liability. 
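The cost element MTMC's database does not capture can be seen from the structure of the separate charge described above: $0.64 per $100 of shipment valuation plus 10 percent of certain storage in transit (SIT) charges. The shipment valuation and SIT amounts below are hypothetical.

```python
# Separate charge paid to carriers in exchange for increased domestic
# liability: $0.64 per $100 of shipment valuation plus 10 percent of
# certain storage in transit (SIT) charges. Example figures are
# hypothetical, not actual shipment data.

def increased_liability_charge(shipment_valuation, sit_charges):
    valuation_charge = 0.64 * (shipment_valuation / 100.0)
    sit_surcharge = 0.10 * sit_charges  # the portion MTMC's database omits
    return valuation_charge + sit_surcharge

# Example: a shipment valued at $10,000 with $300 in covered SIT charges.
print(increased_liability_charge(10_000, 300))  # 94.0
```

Because only the valuation component is tracked, the SIT-derived portion of each such charge had to be reconstructed from shipment records, which is why total increased-liability costs could not be determined from the database alone.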
Chapter 2 describes how military service recovery effectiveness varied under increased carrier liability on domestic shipments, with only the Air Force attaining the Project REVAL recovery goal of 78 percent of the amount of claims paid. As a result, DOD savings were about $3.2 million less than if all the military services had performed recovery as effectively as the Air Force. We found that recovery effectiveness varied between the services under other types of liability and shipments as well. Many of the carriers we visited told us that Air Force claims recovery was highly effective, and attributed this to its use of well-trained and knowledgeable personnel. They said the effectiveness of recovery activities performed by the other services was mixed. Our review of household goods shipment claims data confirmed that the Air Force generally asserted and recovered a higher percentage of the amount of claims paid than did the other military services, regardless of the type of carrier liability. DOD officials told us that the nature of household goods shipments varied little between the military services and that recovery effectiveness should also be very similar. However, military claims officials told us that problems such as personnel shortages, poor coordination between claims offices, claims backlogs, specific office performance problems, and lost or misplaced payments from carriers had affected some services in the past and that these had a negative impact on their recovery effectiveness. We believe these problems may continue to affect military service recovery activities. For example, one carrier told us that a recent review of its bank records revealed that 34 checks totaling $6,820 sent to DOD as the result of recovery actions between 1990 and 1993 had not been cashed. The same carrier also identified 13 more payments to DOD during 1994 totaling $1,895 that were still outstanding 2 to 4 months after check issuance. 
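The service-by-service comparison described above amounts to measuring each service's recoveries against the 78-percent Project REVAL goal. The dollar figures below are hypothetical, not the services' actual claims totals.

```python
# Hypothetical recovery figures (in $ millions) illustrating the
# comparison against the 78-percent Project REVAL recovery goal.
# These are not the military services' actual claims totals.

GOAL = 0.78

claims_paid = {"Air Force": 10.0, "Army": 12.0, "Navy/Marine Corps": 8.0}
recovered = {"Air Force": 7.9, "Army": 7.2, "Navy/Marine Corps": 4.8}

for service, paid in claims_paid.items():
    rate = recovered[service] / paid
    # Additional recovery forgone relative to the goal:
    shortfall = max(0.0, GOAL * paid - recovered[service])
    print(f"{service}: {rate:.0%} of claims paid recovered, "
          f"${shortfall:.2f}M below goal")
```

Summing each service's shortfall against the goal is the kind of calculation behind the $3.2 million figure cited above.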
Effective recovery of claims costs by the military services is essential for increased carrier liability to fully meet its goals of reducing claims costs and increasing carrier incentive for preventing shipment loss and damage. This is particularly the case since DOD is paying carriers an additional separate charge in exchange for the increased liability on domestic shipments, and may do so on future overseas shipments. DOD therefore needs to place increased emphasis on recovery activities in order to achieve and maintain levels closer to those demonstrated by the Air Force. MTMC requires carriers to purchase cargo insurance before giving them approval to move DOD domestic household goods shipments, and both cargo insurance and performance bonds are required for approval to move DOD international shipments. DOD thus protects itself from losses and costs that might occur if a carrier goes bankrupt and does not complete a move as contracted, or completes the move and receives payment, but leaves claims for damage unresolved. Increased carrier liability and other facets of MTMC’s household goods programs are increasing the level of government funds at risk. However, past government actions to recover the cost of losses associated with carrier bankruptcies have often been inadequate. To ensure that the savings potential of increased liability is fully realized, we believe DOD needs to (1) place increased emphasis on bond and insurance collection from carriers and (2) review carrier bonding and insurance requirements. In chapter 2, we described how carriers are subject to potentially greater DOD claims costs under increased liability. DOD pays carrier transportation charges after shipment delivery. Most recoveries are made within about 2 years of shipment delivery, but military claims offices sometimes incur claims backlogs. 
By statute, servicemembers have 2 years in which to file claims against the government, and DOD has 6 years from shipment delivery to initiate recovery from carriers. More government funds are at risk under increased liability because (1) more is potentially recoverable and (2) carriers are also paid a separate charge for the increased liability shortly after shipment delivery. Although at least 61 carriers approved to move DOD shipments have declared bankruptcy or ceased to exist since 1980, government actions to recover costs incurred as a result of these bankruptcies and terminations have so far been inadequate. According to MTMC officials, the government sought reimbursement under only one performance bond—collecting $17,215 of the $36,014 owed by a bankrupt carrier in late 1993. MTMC officials told us that bond collections had never been effective, primarily because MTMC and the General Services Administration, which jointly shared collection responsibility, never established workable collection procedures. We could not determine the extent of funds lost. MTMC officials told us efforts were underway to improve bond collections and that MTMC would be solely responsible for its own bond collections in the future. Two additional bond collection attempts had been initiated, but neither had been completed, by the time we concluded our review in November 1994. In addition to the increased liability, other factors arising from the highly competitive nature of the household goods carrier industry are increasing DOD's financial risk, particularly on international shipments. First, both MTMC and industry officials told us that there are too many carriers competing for a decreasing amount of DOD household goods movement business. As of 1993, there were 1,227 domestic and 147 overseas carriers approved by MTMC for moving DOD household goods shipments. Declining levels of DOD shipments are increasing carrier competition and forcing many carriers into a weak financial condition. 
Carrier industry officials told us that since claims may not be addressed until several years after a shipment is completed, many carriers do not set aside sufficient funding to cover claims, instead expecting to cover these costs out of their cash flow from new shipments. Declining shipment levels increase the likelihood of some carriers being forced into bankruptcy. Furthermore, we noted that many overseas carriers rely on winning DOD shipment contracts since they have no commercial household goods shipment business. Second, both carrier industry and MTMC officials acknowledged a growing tendency for some carriers to adopt a business strategy of going out of business. Some carriers have bid unusually low rates to win DOD business, received payment for moving a number of shipments, and then declared bankruptcy, leaving a large unpaid claims liability. Some of these carriers then reenter the business under a new carrier name, and apply for new MTMC carrier approval. Many of the carrier industry officials we interviewed told us they believed MTMC carrier approval requirements were too lax. MTMC officials acknowledged they rely heavily on bonding and insurance companies to evaluate the financial suitability of carriers before approving them for DOD shipments. Both MTMC and carrier industry officials told us that some disreputable carriers were taking advantage of weak MTMC approval and collection processes to employ business strategies of going out of business. They said that the low rates bid by such companies were making it difficult for reputable carriers to stay in business. This problem is exacerbated by MTMC’s provision of an incentive to the low-bidding carrier of as much as 30 percent to 50 percent of the traffic on international routes. This incentive is designed to reduce DOD transportation costs through increased carrier competition, and to reward the carrier bidding the lowest rate. 
However, DOD must ensure that adequate bonding and insurance levels and collection procedures are in place to cover shipment and liability costs in the event of carrier bankruptcy. Otherwise, the government is vulnerable to significant financial losses. For example, one carrier underbid all others on many routes for several years. At one point, this carrier was moving more than a fourth of all DOD overseas household goods shipments. According to DOD officials, this carrier then went bankrupt, leaving about $7 million in outstanding claims liabilities. DOD and insurance companies are presently involved in legal action regarding this matter, and DOD has not yet recovered any of these funds. Under the provisions of 31 U.S.C. 3721, federal employees have 2 years to file claims for loss and damage to personal property, including household goods. Prior to 1952, the statutory period was 1 year. The period was extended to 2 years to achieve consistency with other claims statutes. The 2-year period for filing household goods claims appears needlessly long. As discussed in our 1989 report on DOD household goods claims payment and recovery activities, the 2-year period contributes to claims management and adjudication problems, prevents carriers from making timely adjustments to their transportation rates, and causes increased government costs. Making timely adjustments to transportation rates will be even more important to carriers under increased carrier liability. Nearly all the carriers we visited said the statute needed to be shortened to a year or less. They told us that by contrast, claims on commercial shipments must be filed within 9 months of shipment delivery. DOD claims officials generally acknowledged that claims requiring more than 1 year to file usually involved servicemember procrastination. 
We analyzed Army and Air Force claims data for fiscal years 1988 through 1991 to determine the average amount of time required between shipment delivery and the filing of claims. We found that in each fiscal year, more than 60 percent of all claims were filed within 6 months of shipment delivery, and over 80 percent within 1 year of shipment delivery. For example, table 4.1 shows the amount of time in months between shipment delivery and claims filing for combined domestic and international household goods claims for the Air Force, Army, and Marine Corps in fiscal year 1991. DOD officials told us that the 2-year statute of limitations encourages some servicemembers to take longer than necessary to file their claims. This tends to increase the already long gaps between the time household goods shipments occur and the time claims data for evaluating costs and carrier performance is available. Claims processing and recovery by the military services often takes an additional 5 months or longer. Both DOD claims officials and carriers told us that long delays in filing household goods claims can result in claims settlement or recovery problems. Unnecessary delays in filing claims also exacerbate carriers' problems in obtaining the claims recovery cost information they need to adjust their rates in a timely fashion. MTMC requires household goods carriers to bid transportation rates for contracts to transport DOD household goods shipments 6 months prior to the beginning of the 6-month period these rates will be in effect. Increased carrier liability is resulting in increased carrier costs and consequently a greater need for timely adjustment of rates. As discussed in our 1989 report, delays in filing household goods claims increase government costs. Late-filed claims are generally more difficult to process and consequently increase administrative costs. 
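The filing-time pattern summarized above is essentially a cumulative distribution over table 4.1's time buckets. A sketch of the calculation, using invented monthly counts rather than the table's actual figures, looks like this:

```python
# Cumulative share of claims filed by a given number of months after
# shipment delivery. The bucket counts are invented for illustration
# and are not the actual table 4.1 figures.

claims_filed_by_bucket = {3: 420, 6: 210, 9: 150, 12: 90, 18: 80, 24: 50}
# key: upper bound, in months since delivery, of each filing-time bucket

total_claims = sum(claims_filed_by_bucket.values())  # 1,000 claims

def share_filed_within(months):
    filed = sum(n for m, n in claims_filed_by_bucket.items() if m <= months)
    return filed / total_claims

print(f"{share_filed_within(6):.0%} within 6 months")  # 63%
print(f"{share_filed_within(12):.0%} within 1 year")   # 87%
```

Under a pattern like this, shortening the filing period to 1 year would affect only the small tail of claims filed in the second year, most of which DOD officials attributed to procrastination.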
DOD also cannot conduct recovery activities and reuse the funds thus obtained until servicemember claims are filed and processed. The availability of these funds and the amount of interest cost to the government thus depend largely on the amount of time required for servicemembers to file their claims. We therefore believe that this statute—insofar as it pertains to household goods claims—should now be changed to allow a maximum of 1 year for filing household goods claims. A draft of the proposed statutory changes is included in appendix IV. Increased carrier liability for loss and damage on household goods shipments increases the amount of money recoverable from carriers and consequently increases the importance of DOD activities and procedures designed to facilitate recoveries from carriers. DOD needs to address problems regarding household goods shipment claims data, reduce variances in military service recovery effectiveness, and review carrier bond and insurance levels and collection procedures in order to fully realize the savings potential offered by increased carrier liability. Increases in the amount of money recoverable from carriers also make timely recovery of these funds even more essential so as to reduce government costs and enable carriers to adjust transportation rates on a more timely basis. We recommend that the Secretary of Defense take the following actions:

Direct the Commander of MTMC to (1) correct inaccuracies in the MTMC household goods program database regarding claims payments and recoveries and (2) develop the procedures required to determine overall household goods program costs.

Direct the military services to periodically report complete household goods claims and recovery data to MTMC.

Direct the Secretaries of the Army and the Navy to increase the emphasis placed on household goods claims recovery so as to increase these military services' recovery effectiveness to approximately the level demonstrated by the Air Force. 
Direct the Commander of MTMC to review household goods program carrier bonding and insurance requirements and collection procedures to ensure that these are adequate to protect government interests under increased carrier liability. We continue to believe that shortening the statute of limitations for filing claims for loss and damage to household goods shipments would facilitate claims adjudication, enable more timely carrier adjustments to transportation rates, and reduce government costs without imposing undue hardship on military servicemembers or civilian employees. Therefore, we again recommend that the statute—insofar as it pertains to household goods claims—be changed to limit the time allowable for filing claims to 1 year after the claim accrues. DOD concurred with our findings and recommendations to them. Subsequent to our fieldwork, MTMC began working with the military services to improve the completeness of its claims database. The Office of the Secretary of Defense will direct the Commander, MTMC, to ensure that all required program data is included in its database, and to review household goods program carrier bonding and insurance requirements and collection procedures. MTMC also began a DOD Personal Property reengineering process designed to develop a program that is simpler to administer, reduces the workload on transportation officers, and provides the servicemember a full-service commercial-quality move. All the issues discussed in our report will be addressed by this effort, which MTMC expects to complete by September 30, 1995. DOD’s comments also stated that the Office of the Secretary of Defense will direct the military services to ensure that all required claims data is provided to MTMC and will address the need for the services to emphasize claims recovery actions. DOD did not concur with our recommendation that the Congress consider shortening the statute of limitations for filing household goods claims for loss and damage to 1 year. 
DOD supported this proposal when it was originally recommended in our 1989 report. However, it now believes this statute should not be shortened (1) so as to maintain consistency with other claims statutes with a 2-year statute of limitations and (2) because it believes some servicemembers on long operational deployments or overseas assignments might have difficulty filing claims within a 1-year period, thus negatively affecting quality of life issues that DOD is working to enhance. Although a 1-year statute for filing household goods claims would create an inconsistency with the 2-year period allowed for other types of claims, we believe several unique factors affecting DOD household goods claims settlement warrant the exception. First, the period allowed for filing claims on DOD shipments is much longer than the 9-month maximum allowed for commercial shipments. Second, DOD currently requires servicemembers to report any damage to shipments within 75 days of delivery. We believe that servicemembers should reasonably be able to complete the process for filing a claim in the remaining 9-1/2 months of the 1-year statutory period. Third, we believe that since increased liability will increase carrier claims costs and thus affect the transportation rates bid by carriers, fairness dictates that claims resolution be performed as quickly as practical. Regarding claims filing difficulties caused by long deployments and overseas assignments, we believe that military regulations implementing the law permit DOD to provide relief in those rare instances when the servicemember cannot reasonably file a claim in a timely manner. As shown by table 4.1, about 85 percent of DOD claims are presently filed within 1 year of shipment delivery, and DOD officials generally acknowledged that most claims requiring more than 1 year to file involved servicemember procrastination. 
Both the AMC and the HHGFAA concurred that the statute of limitations for filing household goods claims should be shortened. However, the HHGFAA suggested shortening this statute to 9 months instead of 1 year so as to be consistent with industry practices for filing claims on commercial shipments. We believe a period of 1 year for filing claims is more reasonable, considering the operational deployments and overseas assignments cited in DOD’s comments. The HHGFAA stated that it disagreed with our proposal that performance bonds and cargo insurance for DOD household goods shipments be increased. It also said that performance bonds do not cover the payment of loss and damage claims, only those costs incurred by DOD for the onward movement of shipments stranded as the result of carrier bankruptcy. Our report did not specifically recommend that cargo insurance and performance bonding levels be increased. It did recommend that MTMC review carrier bonding and insurance requirements to enable the recovery of any losses caused by carrier bankruptcies. We believe MTMC should review both the types and levels of carrier bonding and insurance requirements because of the increased government risk associated with increased carrier liability, the “going out of business” strategies employed by some carriers, and questions regarding the adequacy of carrier capitalization. We did not make more specific recommendations in this area because MTMC acknowledged these problems and now has actions underway designed to identify and implement the specific changes needed.

Pursuant to a congressional request, GAO reviewed changes proposed by the Military Traffic Management Command (MTMC) regarding carrier liability for loss and damage on Department of Defense (DOD) domestic shipments.
GAO found that: (1) carrier performance has improved since DOD increased carrier liability on domestic household goods shipments; (2) although DOD claims costs declined by an estimated $18.9 million between fiscal years 1987 and 1991, only the Air Force achieved the expected level of cost recovery from carriers; (3) DOD needs to increase carrier liability on DOD international shipments so that DOD can recover the cost of damages and improve carrier performance; (4) industry officials believe that changes in carrier liability on international shipments could cause major industry disruptions unless carriers are compensated in exchange for the increased liability; and (5) MTMC does not have adequate claims information to assess individual carrier performance or the costs associated with increased carrier liability.
A working capital fund relies on sales revenue rather than direct appropriations to finance its continuing operations. A working capital fund is intended to (1) generate sufficient revenue to cover the full costs of its operations and (2) operate on a break-even basis over time—that is, not make a profit nor incur a loss. Customers use appropriated funds, primarily Operations and Maintenance appropriations, to finance orders placed with the working capital fund. DOD estimates that in fiscal year 2001, the Defense Working Capital Fund—which consists of the Army, Navy, Air Force, Defense-wide, and Defense Commissary Agency working capital funds—will have revenue of about $74.3 billion. The Defense Working Capital Fund finances the operations of two fundamentally different types of support organizations: stock fund activities, which provide spare parts and other items to military units and other customers, and industrial activities, which provide depot maintenance, research and development, and other services to their customers. Because carryover is associated only with industrial operations, this report discusses the results of our review on Defense’s Working Capital Fund industrial operations. Carryover is the dollar value of work that has been ordered and funded (obligated) by customers but not yet completed by working capital fund activities at the end of the fiscal year. Carryover consists of both the unfinished portion of work started but not yet completed, as well as requested work that has not yet commenced. To manage carryover, DOD converts the dollar amount of carryover to months. This is done to put the magnitude of the carryover in proper perspective. For example, if an activity group performs $100 million of work in a year and had $100 million in carryover at year-end, it would have 12 months of carryover. 
However, if another activity group performs $400 million of work in a year and had $100 million in carryover at year-end, this group would have 3 months of carryover. The congressional defense committees and DOD have acknowledged that some carryover is necessary at fiscal year-end if working capital funds are to operate in an efficient and effective manner. For example, if customers do not receive new appropriations at the beginning of the fiscal year, carryover is necessary to ensure that the working capital fund activities have enough work to ensure a smooth transition between fiscal years. Too little carryover could result in some personnel not having work to perform at the beginning of the fiscal year. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing that work until well into the next fiscal year or subsequent years. By minimizing the amount of carryover, DOD can use its resources in the most effective manner and minimize the “banking” of funds for work and programs to be performed in subsequent years. DOD has a 3-month carryover standard for all but one working capital fund activity group, but Office of the Under Secretary of Defense (Comptroller) and military service officials could not provide, and we could not identify, any analytical basis for this standard. We did not determine how much carryover individual activity groups would need in order to ensure a smooth flow of work at the end of the fiscal year. However, because the activity groups perform different types of work and have different business practices, the use of the same carryover standard for all activity groups is likely not appropriate. Military service officials and activity group managers also questioned the use of a uniform standard. 
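The conversion of a dollar carryover balance into months of carryover, illustrated by the two examples above, is a simple ratio of carryover to average monthly revenue. A minimal sketch (the function name is illustrative; DOD's actual calculation involves adjustments discussed later in this report):

```python
def months_of_carryover(carryover_dollars, annual_revenue_dollars):
    """Convert a year-end carryover balance into months of carryover.

    Months = carryover / (annual revenue / 12), i.e., the number of
    months of work, at the group's average monthly rate of output,
    that the year-end backlog represents.
    """
    average_monthly_revenue = annual_revenue_dollars / 12
    return carryover_dollars / average_monthly_revenue

# The two examples from the text:
group_a = months_of_carryover(100e6, 100e6)  # about 12 months
group_b = months_of_carryover(100e6, 400e6)  # about 3 months
```

The ratio puts carryover in perspective: the same $100 million backlog represents a much smaller workload interruption risk for a group that performs $400 million of work per year than for one that performs $100 million.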
For example, because the Army’s ordnance activity group is involved in the manufacture and assembly of munitions and weapon systems and requires a long lead time to obtain material, Army officials believe that group’s carryover standard should be more than 3 months. Similarly, much of the work that customers request from Navy research and development activities is actually accomplished by contractors. Consequently, Navy research and development activity group managers believe they should be able to subtract work that is to be accomplished by contractors from their reported carryover balances or, if they must include this work in their totals, to have a longer carryover period. A 1987 DOD carryover study also raised questions about the use of a uniform carryover standard. This study defined the optimum level of carryover as “the minimum amount of work needed in order to ensure that there is no interruption of the average work cycle.” As part of its 1987 carryover study, DOD asked the military departments to provide information on their working capital fund activity groups. Specifically, for each activity group they were to provide (1) information on the types of services provided and (2) data on the average time between commencement and completion of projects. Data developed for Army, Air Force, and DOD-wide activity groups showed that (1) the minimal carryover level varied significantly from one activity group to another and (2) in some instances the minimal carryover level was considerably less than 3 months. However, the study noted that its analysis did not consider either administrative or material lead times and acknowledged that both of these factors could have a significant impact on carryover requirements. When we discussed the 3-month carryover standard with officials of the Office of the Under Secretary of Defense (Comptroller), they acknowledged that they do not have an analytical basis for it. 
They informed us that the 3-month standard (1) was based on management judgment and that 3 months (one-fourth of the fiscal year) should be enough time to ensure a smooth flow of work during the transition from one fiscal year to the next, (2) had been in effect for many years, and (3) was reviewed during a 1996 DOD carryover study when DOD representatives visited various working capital fund activities to solicit the opinions of managers regarding the carryover standard and reviewed data substantiating those opinions. They also said that only in unusual situations should an activity group need more than 3 months of carryover. Finally, they questioned the benefit of performing an analysis for each activity group since it would require time and effort and would need to be updated periodically. However, without a sound analytical basis for carryover standards, we believe questions will continue to be raised about how much carryover is needed. The military services have not consistently implemented DOD’s guidance for determining whether an activity group has exceeded the 3-month carryover standard. One contributing factor for the inconsistency is that DOD’s guidance is vague concerning how certain items should be treated and/or calculated. Specifically, DOD’s guidance is not clear regarding what is to be included or not included in the contractual obligation and the revenue dollar amounts used in the formula for determining the number of carryover months. As a result, year-end carryover data provided to decisionmakers who review and use this data for budgeting—the Office of the Under Secretary of Defense (Comptroller) and congressional defense committees—are misleading and not comparable across the three services. 
For example, our analysis of the fiscal year 2001 budget estimates showed that policy changes that affected the use of certain adjustments to the calculations had (1) no impact on the Air Force’s reported year-end carryover because the Air Force did not make any adjustments, (2) reduced the Army’s reported year-end carryover by less than 1 month, and (3) reduced the Navy’s reported year-end carryover balance for some activity groups by 2 to 4 months. Further details on the methods used by the services to calculate carryover can be found in appendix II. Prior to 1996, if working capital fund activity groups’ budgets projected more than a 3-month level of carryover, their customers’ budgets could be, and sometimes were, reduced by the Office of the Secretary of Defense and/or congressional defense committees. However, in 1996, the Under Secretary of Defense (Comptroller) directed a joint Defense review of carryover because the military services had expressed concerns about (1) the methodology used to compute months of carryover and (2) the reductions that were being made to customer budgets to help ensure that activity groups did not exceed the 3-month carryover standard. Based on the work of the joint study group, DOD decided to retain the 3-month carryover standard for all working capital fund activity groups except Air Force contract depot maintenance. For Air Force contract depot maintenance, it set a 4.5-month carryover standard because of the additional administrative functions associated with awarding contracts. Furthermore, based on the joint study group’s work and concerns expressed by the Navy, DOD also approved several policy changes that had the effect of increasing the carryover standard for all working capital fund activities. 
Specifically, under the policy implemented after the 1996 study, certain categories of orders, such as those from non-DOD customers, and contractual obligations, such as Army arsenals’ contracts with private sector firms for the fabrication of tool kits, can be excluded from the carryover balance that is used to determine whether the carryover standard has been exceeded. These policy changes were documented in an August 2, 1996, DOD decision paper that provided the following formula for calculating the number of months of carryover (see figure 1). The impact of DOD’s 1996 decision to exclude contract obligations and certain categories of orders from reported carryover varied significantly among the services. For example, our analysis of the military services’ fiscal year 2001 budget estimates showed that this change (1) had no effect on the Air Force depot maintenance activity group’s reported year-end carryover balance because the Air Force did not make any adjustments, (2) resulted in a $70.1 million reduction in the Army depot maintenance and ordnance activity groups’ reported year-end carryover, and (3) as illustrated in table 1, allowed the Navy to reduce its depot maintenance and research and development activity groups’ reported year-end carryover by about $1.9 billion. Our work showed that these differences were due primarily to the fact that the military services have treated contract obligations differently when calculating carryover. This problem, in turn, is due to the fact that DOD has not provided clear guidance on whether (1) the revenue used in the carryover formula should be reduced when adjustments are made for contract obligations and (2) material requisitions submitted to DOD supply activities should be considered contract obligations. Because the Army and Navy are reducing the amount of carryover but not the amount of revenue, the number of months of carryover they are reporting is understated. 
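Based on the text's description of the August 1996 decision paper (figure 1 itself is not reproduced here), the post-1996 calculation can be sketched as follows; the parameter names are illustrative, and `excluded_orders` stands in for the categories the policy allows to be subtracted, such as orders from non-DOD customers and inter/intra fund orders:

```python
def adjusted_months_of_carryover(total_carryover, excluded_orders,
                                 contract_obligations, revenue):
    """Sketch of the post-1996 months-of-carryover calculation.

    Certain categories of orders (e.g., from non-DOD customers or from
    other working capital fund activities) and contractual obligations
    are subtracted from carryover before converting dollars to months.
    """
    adjusted_carryover = total_carryover - excluded_orders - contract_obligations
    return adjusted_carryover / (revenue / 12)

# Hypothetical figures: $500M carryover, $100M excluded orders,
# $100M contract obligations, $1.2B annual revenue.
reported_months = adjusted_months_of_carryover(500e6, 100e6, 100e6, 1200e6)  # about 3 months
```

As the report goes on to show, the guidance leaves open exactly which obligations belong in `contract_obligations` and whether `revenue` should be adjusted correspondingly, which is the root of the inconsistencies among the services.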
We found differences in the way the military services make adjustments for contractual services. DOD’s formula for calculating months of carryover is based on the ratio of adjusted orders carried over to revenue. The formula specifies that carryover should be reduced by the amount of contractual obligations. However, the policy does not address whether downward adjustments for the revenue associated with these contractual services should also be made. Unless this is done, the number of months will be understated. The Army and Navy reduced their carryover balances by the amount of contractual obligations, but they did not reduce the revenue associated with these contractual services. On the other hand, the Air Force depot maintenance activity group in effect did reduce the revenue associated with contractual obligations because (1) it segregates its contract operations’ carryover and revenue from its in-house operations’ carryover and revenue and (2) DOD has established separate carryover standards for the Air Force in-house and contract depot maintenance operations. The Air Force depot maintenance activity group’s approach ensures that data on in-house operations is not distorted by data on contract operations. On the other hand, the Army and Navy’s approach allows activity groups to reduce their reported months of carryover by simply increasing the amount of work contracted out. Our work showed that the months of carryover reported by the Army and Navy activity groups would more accurately reflect the actual backlog of DOD in-house work if adjustments for contractual obligations affected both contract carryover and contract revenue. In discussing this matter with officials from the Office of the Under Secretary of Defense (Comptroller), they stated that we had a valid point and indicated that DOD would need to review its carryover policy to determine whether it needs to be revised. 
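A worked example with hypothetical dollar figures illustrates the understatement described above. Subtracting contract carryover from the numerator while leaving contract revenue in the denominator (the Army and Navy approach) lowers the reported months; the Air Force's segregated approach measures in-house carryover against in-house revenue only:

```python
def months(carryover, revenue):
    """Months of carryover: carryover divided by average monthly revenue."""
    return carryover / (revenue / 12)

# Hypothetical figures, for arithmetic only:
in_house_carryover = 100e6   # backlog of work performed in-house
contract_carryover = 50e6    # backlog of work sent to contractors
in_house_revenue = 400e6
contract_revenue = 200e6

# Army/Navy method described in the text: subtract contract obligations
# from carryover, but leave total revenue (in-house + contract) unchanged.
blended = months(in_house_carryover, in_house_revenue + contract_revenue)   # 2.0 months

# Air Force depot maintenance method: segregate contract operations, so
# in-house carryover is measured against in-house revenue only.
segregated = months(in_house_carryover, in_house_revenue)                   # about 3 months
```

The same in-house backlog reports a full month lower under the blended method, and an activity group could lower its reported months further simply by contracting out more work.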
Similarly, we found that differences in the way the military services treat outstanding material requisitions have a significant effect on the dollar value of carryover that is reported. Specifically, our analysis showed that Navy activity groups and some Army activities consider material requisitions to be contract obligations and that they, therefore, subtract the dollar value of outstanding requisitions from their carryover balances. However, the Air Force depot maintenance activity group, which had about $448 million of material on requisition as of September 30, 2000, did not make any such adjustments. Office of the Under Secretary of Defense (Comptroller) officials informed us that outstanding material requisitions were not intended to be included as contractual obligations for carryover purposes. In fact, they told us that when the policy to allow carryover to be adjusted for contract obligations was established in 1996, the intent was that only contracts with private industry would be included as contract obligations when calculating the number of months of carryover. The inconsistencies in the military services’ implementation of DOD’s 1996 guidance affected the actions that congressional decisionmakers took on fiscal year 2001 budget estimates. For example, the Air Force’s fiscal year 2001 budget showed that the unadjusted months of year-end carryover for in-house depot maintenance operations was 3.3 months. Because the 3-month carryover standard was exceeded, the Congress reduced the Air Force’s Operation and Maintenance appropriation by $52.2 million. However, our analysis showed that the Air Force’s estimate would have been less than DOD’s 3-month standard if it had subtracted the dollar value of outstanding material requisitions from its carryover estimates—as the Navy does.
Because the Navy adjusted its year-end carryover estimates for both contract obligations and certain types of orders, its reported year-end carryover balances were less than the 3-month standard. As a result, no action was taken on the Navy’s budget. DOD policy requires each individual working capital fund activity to record as carryover any unfilled work orders the activity has accepted. Some of these orders are received from other working capital fund activities. For example, a Navy working capital fund activity (activity 1) may perform part of the work a customer has ordered and “subcontract” part of the work out to another working capital fund activity (activity 2). In this situation, both activities—the activity originally accepting the customer order (activity 1) and the activity receiving part of the work to be performed (activity 2)—record the unfilled order as carryover. In order to eliminate any double counting of carryover, DOD’s policy allows an activity, as shown in figure 1, to adjust or reduce its carryover for orders received from other working capital fund activities (inter/intra fund orders). However, Navy working capital fund activities and some Army activities categorized orders they sent to other working capital fund activities as contract obligations and used these obligations to reduce reported year-end carryover. As a result, not only did the Navy and Army eliminate the double counting of such orders, they eliminated all these orders from their calculations to determine the number of months of carryover and, thereby, did not follow DOD guidance on calculating carryover for inter/intra fund orders. Further complicating the congressional budget review of carryover is that some activity groups have underestimated their budgeted year-end carryover year after year, thereby providing decisionmakers misleading carryover information and resulting in more funding being provided than was intended.
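The intended double-counting adjustment for inter/intra fund orders can be illustrated with hypothetical figures:

```python
# Hypothetical order flow: activity 1 accepts a $10M customer order and
# "subcontracts" $4M of the work to activity 2, another working capital
# fund activity.
activity_1_carryover = 10e6   # records the full unfilled customer order
activity_2_carryover = 4e6    # records the unfilled interfund order it received

# Summing both balances double-counts the $4M of subcontracted work.
naive_total = activity_1_carryover + activity_2_carryover   # $14M

# DOD policy: the receiving activity adjusts its carryover for orders
# received from other working capital fund activities, so the $4M is
# counted only once.
interfund_adjustment = activity_2_carryover
adjusted_total = naive_total - interfund_adjustment          # $10M, the true customer backlog
```

Only $10 million of customer work is actually outstanding; the adjustment exists solely to remove the internal transfer, not to shrink the backlog itself, which is why treating sent orders as contract obligations (as the Navy and some Army activities did) removes real workload from the calculation.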
As previously discussed, the 3-month standard has never been validated and the services do not use the same method for calculating carryover. Therefore, the number of months of budgeted and actual carryover that the services have reported are not comparable. Nevertheless, each year, the services’ budget submissions include information on budgeted and actual year-end carryover for each activity group. Decisionmakers in the service headquarters, Office of the Under Secretary of Defense (Comptroller), and congressional defense committees use this information to determine whether the activity groups have too much carryover. If the groups do, the decisionmakers may reduce the customer budgets that finance new orders. Actual reported year-end carryover levels for the Army and Air Force depot maintenance activity groups and the Army ordnance activity group exceeded DOD’s carryover standard many times during fiscal years 1996 through 2000. Further, our analysis showed that in many of these instances, the budget estimate for year-end carryover was less than the DOD standard. If carryover estimates for the Army’s activity groups and the Air Force’s contract depot maintenance operations had been more accurate, the service headquarters, the Office of the Under Secretary of Defense (Comptroller), and/or the congressional defense committees might have taken action to reduce customer funding for new orders as has been done in the past. Table 2 shows that the actual reported year-end carryover for Army’s depot maintenance and ordnance activity groups exceeded the 3-month carryover standard consistently from fiscal year 1996 through fiscal year 2000. Table 2 also shows that the Army’s budget consistently underestimated the amount of actual year-end carryover for each year from fiscal year 1998 through fiscal year 2000 for the two activity groups. 
Since the Army’s budgeted year-end carryover exceeded the 3-month standard for fiscal year 2001, the Department of Defense Appropriations Act, 2001, reduced the Army’s fiscal year 2001 Operation and Maintenance appropriation by $40.5 million. Concerning the Army depot maintenance activity group, Army officials provided us several reasons to explain why the reported actual year-end carryover exceeded the 3-month carryover standard and budget projections. For fiscal year 1998, Army officials could not explain why the actual fiscal year-end carryover for the depot maintenance activity group was above the 3-month standard and budget projection. They stated that the detailed data needed to determine the reasons had not been retained. For fiscal year 1999, Army officials stated that the depot maintenance activity group (1) received an inordinate number of new orders at year-end and (2) was unable to adjust its production schedules to mitigate the effect of the late receipt of new orders. For fiscal year 2000, Army officials stated that there were four reasons that the actual reported year-end carryover balance exceeded the standard and budget projection: (1) some depots could not obtain the parts needed in a timely manner, so that less work was performed than planned; (2) some depots did not accurately estimate the time and resources needed to complete jobs; (3) emergency situations, such as unplanned orders to perform safety-of-flight work, delayed work on orders already accepted by the depots; and (4) the composition and size of the workload changed from the budget projections due to changes in customer funding and priorities. Concerning the Army ordnance activity group, which also exceeded the 3-month carryover standard, Army officials informed us that the group’s primary focus is on manufacturing and that the 3-month standard should not apply.
They stated that a longer carryover time frame is needed to accommodate the length of the manufacturing process and the long lead time involved in buying certain types of material. Table 3 shows that several times since fiscal year 1996 the Air Force’s actual reported carryover for (1) in-house depot maintenance operations exceeded the 3-month standard and (2) contract depot maintenance operations exceeded the 4.5-month standard. Table 3 also shows that the Air Force’s budget for contract depot maintenance underestimated the amount of actual year-end carryover for fiscal years 1997, 1999, and 2000. As stated previously, because the budgeted year-end carryover exceeded the carryover standard for fiscal year 2001, the Department of Defense Appropriations Act, 2001, reduced the Air Force fiscal year 2001 Operation and Maintenance appropriation by $52.2 million. Air Force officials informed us that developing accurate carryover budgets and executing those budgets during the late 1990s was difficult because the depot maintenance activity group underwent significant downsizing. Specifically, the activity group (1) reduced maintenance personnel by more than one-third as it closed three repair centers and (2) realigned 40 percent of its in-house workload. In developing budgets for those years, the activity group’s productivity estimates were optimistic; as a result, the group accomplished less work than budgeted and was unable to stay within the carryover standard. In addition to the productivity problem, the activity group could not always obtain the material it needed in a timely manner. As a result, it could not complete work as scheduled and the amount of carryover increased. In developing its fiscal year 2002 budget request, the Air Force determined that the initial year-end carryover budget estimate for its contract depot maintenance operations exceeded the 4.5-month carryover standard by $92.5 million.
To help ensure that the actual carryover would not be over the 4.5-month standard at the end of fiscal year 2002, Air Force officials reduced the activity group’s customers’ budget request by $92.5 million. Thus, in theory, customers should order less work from the activity group in fiscal year 2002, resulting in less carryover than initially budgeted. Our analysis showed that customer order levels would have been about $2.9 billion less than the amount budgeted if a 30-day carryover policy had been in effect during the fiscal year 2001 budget review process. Further, as previously discussed, the amount of carryover needed to ensure a smooth flow of work during the transition from one fiscal year to the next varies significantly from one activity group to the next. Military service officials and working capital fund managers stated that a 30-day carryover policy would have a potentially adverse effect on the operations of most working capital fund activities. However, because (1) DOD has not performed the analysis necessary to validate its existing 3-month carryover standard and (2) the actual impact would depend on a number of unknown factors—such as the amount and type of work requested by customers and the timing of the requests—it is difficult, if not impossible, to predict the operational impact of reducing the carryover standard. If DOD were to reduce its carryover standard to less than 3 months, a corresponding reduction would occur in both the amount of carryover allowed and the level of customer orders accepted. As noted in the previous paragraph, our analysis showed that customer order levels would have been about $2.9 billion less than the amount actually budgeted if a 30-day carryover policy had been in effect during the fiscal year 2001 budget review process. If the standard had been reduced to 60 days or 75 days, projected customer order levels would have been about $1.6 billion or $1.0 billion less, respectively, than the amount budgeted.
The amount of carryover exceeding 90 days was about $700 million. Although they have no analytical data to support their views, working capital fund managers at the headquarters level believe a 30-day carryover policy would have the potential of significantly impairing their operations. Working capital fund officials at the activities we visited indicated that a 30-day policy would (1) restrict their ability to accept orders during the fourth quarter of the fiscal year as they act to ensure that actual carryover levels do not exceed the 30-day standard, (2) complicate the tasks of planning and scheduling work, and (3) create “pockets of inefficiency” where direct-labor employees are without work and must, therefore, charge their time to overhead. They also indicated that these problems, in turn, would adversely affect their ability to provide timely support to their customers, increase the unit cost of the work that is accomplished, and cause operating losses. Our work showed that, because the amount of carryover needed to ensure a smooth flow of work varies significantly from one activity group to the next, the effect of a 30-day carryover standard on a group’s efficiency and effectiveness would likewise vary significantly. For example, in its August 1996 decision paper, which addresses the carryover standard, DOD points out that the Air Force’s contract depot maintenance operations could not operate with a 30-day standard because the average administrative time associated with awarding a contract is more than 30 days. Conversely, Navy records indicate that the Naval Research Laboratory’s actual reported carryover during fiscal years 1996 through 2000 averaged about 0.9 months, and laboratory officials indicated that these low carryover levels have not had an adverse impact on their operations. Finally, our work indicates that the impact of a 30-day policy depends largely on what action DOD ultimately takes to ensure consistent carryover reporting. 
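The scenario analysis above can be sketched as a simple excess-over-standard calculation. The activity-group figures below are hypothetical; GAO's reported impacts ($2.9 billion at 30 days, and so on) were derived from actual fiscal year 2001 budget data, not from this simplification, which also treats a month as 30 days:

```python
def excess_carryover(groups, standard_months):
    """Total budgeted carryover above a given standard, summed across
    activity groups; each group is a (carryover, annual_revenue) pair.

    Customer order levels would need to fall by roughly this amount for
    the groups to meet the standard (a simplification of GAO's analysis).
    """
    total = 0.0
    for carryover, revenue in groups:
        allowed = standard_months * (revenue / 12)   # carryover permitted under the standard
        total += max(0.0, carryover - allowed)
    return total

# Hypothetical activity groups as (carryover, annual revenue), in dollars:
groups = [(400e6, 1200e6), (250e6, 600e6), (90e6, 1080e6)]
impact_by_standard = {days: excess_carryover(groups, days / 30)
                      for days in (30, 60, 75, 90)}
```

Because the excess is computed group by group, groups already below the standard (the third group here) contribute nothing, which is why tightening the standard hits some activity groups much harder than others.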
For example, at the end of fiscal year 2000, the Air Force depot maintenance activity group reported actual year-end carryover levels of 4.8 months for contract operations and 2.8 months for its in-house operations. However, if it had used the Navy’s carryover reporting policies and procedures, the activity group would have reported an overall carryover level of about 1.6 months. Conversely, although Navy activity groups frequently reported actual year-end carryover balances of less than 2 months during fiscal years 1996 through 2000, their managers indicated that even a 3-month standard would not be enough if they implemented DOD’s carryover formula in the same manner as the Air Force. Decisionmakers do not have the information they need to make informed decisions on fiscal year-end carryover balances because (1) there is no analytical basis for the 3-month carryover standard, (2) the services use different methods to calculate the carryover balances, and (3) some activity groups consistently underestimate their budgeted carryover when developing their budgets. Until these weaknesses are resolved, concerns will continue to be raised about whether an activity group has too much or not enough carryover. These concerns will affect not only the working capital fund activity groups’ operations but also customer operations because they finance the orders placed with the working capital fund activities. We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to determine the appropriate carryover standard for the depot maintenance, ordnance, and research and development activity groups because these groups account for about 90 percent of the dollar amount of carryover. The carryover standard should be based on the type of work performed by the activity group and its business practices, such as whether it performs the work in- house or contracts it out. 
As part of this effort, DOD needs to have a sound analytical basis for determining the appropriate level of carryover. We also recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to clarify the carryover policy to obtain consistency in calculating the amount of carryover for use in determining whether the activity groups have exceeded the carryover standard. Specifically, in calculating the number of months of carryover, the policy needs to clarify (1) the type of obligations to be included in the contractual obligation category, such as contracts with private industry and outstanding material requisitions, and (2) that the revenue used must be adjusted for certain purposes, such as revenue earned for work performed by contractors. All internal and external reporting of carryover should be done using the same methodology. We further recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to ensure that the military services calculate carryover consistently during the budget review process so that the carryover figures are comparable; direct the Under Secretary of Defense (Comptroller) and the Acting Secretaries of the military services to enforce the current policy that specifies that one activity should report carryover on interfund and intrafund orders; and direct the Acting Secretaries of the military services to use more realistic carryover figures in developing their budgets by considering historical actual carryover data. In its comments on a draft of this report, DOD agreed with our five recommendations and stated that it will take actions in the near future to clarify the policies and formula to properly ascertain a uniform approach in examining the backlog of funded work in its Financial Management Regulations. In addition, DOD said it will revalidate the appropriate carryover standards that should be applied to the depot maintenance, ordnance, and research and development activity groups. We are sending copies of this report to the Honorable Donald H.
Rumsfeld, Secretary of Defense, and the Acting Secretaries of the Army, Navy, and Air Force. We will also make copies available to others upon request. Please contact Greg Pugnetti at (703) 695-6922 if you or your staff have any questions concerning this report. The GAO contact and staff acknowledgments for this report are listed in appendix IV. To determine the reasons and the basis for DOD’s 3-month carryover policy, we met and discussed the policy with officials from the Office of the Under Secretary of Defense (Comptroller), Army, Navy, and Air Force. We also requested and reviewed documentation and/or analysis that supported the rationale for the 3-month carryover standard. In addition, we obtained and analyzed DOD studies, including the 1996 carryover study and budget documents that discussed DOD’s carryover policy and the need for a 3-month period. We did not determine how much carryover individual activity groups would need in order to ensure a smooth flow of work at the end of the fiscal year. To determine if the services were calculating carryover in a consistent manner and, if not, the reasons for any differences, we obtained and analyzed the services’ calculations for the (1) fiscal year 1996 through fiscal year 2000 reported year-end actual carryover balances and (2) fiscal year 1996 through fiscal year 2001 budgeted year-end carryover balances. We met with officials from the Army, Navy, and Air Force to discuss the methodology they used to calculate carryover. We (1) obtained explanations of why the services made adjustments in calculating the dollar amount of carryover balances as well as the number of months of carryover and (2) determined the impact of those adjustments on the carryover figures. 
To determine if the military services’ budgeted and reported actual carryover amounts exceeded the 3-month standard at fiscal year-end, we obtained and analyzed (1) budgeted year-end carryover data for fiscal year 1996 through fiscal year 2001 and (2) reported actual year-end carryover data for fiscal year 1996 through fiscal year 2000. When the budgeted and/or actual carryover data exceeded the 3-month standard, we met with responsible budgeting and/or accounting officials to ascertain why. To determine whether applying the carryover authority to not more than a 30-day quantity of work would be sufficient to ensure uninterrupted operations at the working capital fund activities early in a fiscal year and what the impact on these activities would be if the carryover policy were reduced from 3 months to 30 days, we calculated what the potential financial impact on customer orders would have been if a 30-, 60-, 75-, or 90-day carryover standard had been in effect for fiscal year 2001. We also met with (1) headquarters officials from the Office of the Under Secretary of Defense (Comptroller), Army, Navy, and Air Force and (2) Army, Navy, and Air Force officials at individual working capital fund activity groups and activities to obtain their views on what the impact on their operation would be if the carryover policy were reduced from 3 months to 30 days. However, because (1) DOD has not performed the analysis necessary to validate its existing 3-month carryover standard and (2) the actual impact would depend on a number of unknown factors, such as the amount and type of work requested by customers and the timing of the requests, it is difficult, if not impossible, to predict the operational impact of reducing the carryover levels. 
In performing our work, we obtained carryover information on the following Defense Working Capital Fund activity groups: (1) Air Force depot maintenance (in-house and contract), (2) Army depot maintenance, (3) Army ordnance, (4) Naval aviation depots, (5) Naval shipyards, and (6) Naval research and development. The Naval research and development activity group consists of the following five subgroups: Naval Air Warfare Center, Naval Surface Warfare Center, Naval Undersea Warfare Center, Naval Research Laboratory, and the Space and Naval Warfare Systems Center. We performed our review at the following locations:
- Office of the Under Secretary of Defense (Comptroller), Washington, D.C.
- Army Headquarters, Washington, D.C.
- Army Materiel Command, Alexandria, Virginia
- Army Communications-Electronics Command, Fort Monmouth, New Jersey
- Corpus Christi Army Depot, Corpus Christi, Texas
- Tobyhanna Army Depot, Tobyhanna, Pennsylvania
The reported actual year-end carryover information used in this report was produced from DOD’s systems, which have long been reported to generate unreliable data. We did not independently verify this information. The Defense Inspector General has cited system deficiencies and internal control weaknesses as major obstacles to the presentation of financial statements that would fairly present the Defense Working Capital Fund financial position for fiscal years 1993 through 2000. Our review was performed from September 2000 through April 2001 in accordance with U.S. generally accepted government auditing standards. However, we did not validate the accuracy of the accounting and budget information, all of which was provided by the Army, Navy, and Air Force. We requested comments on a draft of this report from the Secretary of Defense or his designee. We have reprinted the comments in appendix III of this report. 
DOD’s carryover guidance does not address how certain items should be treated and/or calculated and, as a result, contributes to the military services’ inconsistent implementation of DOD’s formula for determining the number of months of carryover. This appendix discusses the different methods the services used to determine compliance with DOD’s 3-month carryover standard. Prior to the fiscal year 2002 budget, the Air Force did not make any adjustments to its figures when determining the number of months of carryover and whether the Air Force had exceeded the 3-month standard. An Air Force official said they did not implement the 1996 carryover guidance sooner because the deductions would have had little or no impact on the number of months of carryover. Beginning with the fiscal year 2002 budget, the Air Force official informed us that they were making the adjustments so that the Air Force would be in compliance with DOD’s 1996 carryover policy. In making the adjustments for the fiscal year 2002 budget, the Air Force reduced its year-end carryover figure by the amount associated with certain types of orders, such as orders from foreign countries and non-DOD sources. However, unlike the Navy and Army, as discussed below, the Air Force (1) did not make adjustments for contractual obligations such as outstanding requisitions for material and (2) reduced the revenue figure used in the calculation by the amount of revenue related to those certain types of orders excluded from the carryover figure. An Air Force official told us that they adjusted the revenue figure so that the Air Force would be consistent in making the adjustments. That is, they reduced both the numerator (the carryover figure) and the denominator (the revenue figure) of the equation. The Navy has been making the allowable adjustments to its year-end carryover figures since 1996. 
The Navy has been reducing orders carried over into the next fiscal year for (1) carryover associated with certain types of orders, such as orders from foreign countries and non-DOD sources and (2) any contractual obligations incurred against those orders, which includes contracts with private industry, outstanding material requisitions with DOD supply activities, and orders placed with other working capital fund activities. However, unlike the Air Force, the Navy did not reduce or make any adjustments to the revenue figure used in the calculation. Because it did not adjust the revenue figure, the Navy’s method resulted in a lower monthly carryover figure than did the method used by the Air Force. Navy officials informed us that they used total revenue in their calculation because total revenue represented the full operating capability of a given activity to accomplish a full year’s level of workload. Further, the Navy’s reason for not removing contract-related revenue from the denominator of the calculation was that the numerator of the calculation included carryover (funds) related to work for which contracts would eventually be awarded but which had not yet been awarded at fiscal year-end. The Army has also been making the allowable adjustments to its carryover figures since 1996. That is, the Army has been reducing orders carried over into the next fiscal year for (1) carryover associated with certain types of orders, such as orders from foreign countries and non-DOD sources and (2) any contractual obligations incurred against those orders, which include contracts with private industry, outstanding material requisitions with DOD supply activities, and orders placed with other working capital fund activities. Like the Navy, the Army also did not reduce or make any adjustments to the revenue figure used in the calculation. 
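The practical effect of these differing adjustments can be sketched with hypothetical numbers (none drawn from actual budget submissions): the Air Force's fiscal year 2002 approach reduces both the carryover numerator and the revenue denominator, while the Navy and Army reduce only the numerator, which yields a lower month count for the same underlying workload.

```python
def months(carryover, revenue):
    # DOD formula: carryover divided by one month of revenue.
    return carryover / (revenue / 12.0)

# Hypothetical amounts in millions of dollars.
gross_carryover = 300.0
annual_revenue = 1200.0
excluded_orders = 60.0    # e.g., foreign and non-DOD orders
related_revenue = 240.0   # revenue tied to the excluded orders

# Air Force method (fiscal year 2002 budget): reduce both numerator and denominator.
air_force = months(gross_carryover - excluded_orders,
                   annual_revenue - related_revenue)   # 3.0 months

# Navy and Army method: reduce the numerator only, keep total revenue.
# (The Navy also deducts contractual obligations, omitted here for brevity.)
navy = months(gross_carryover - excluded_orders, annual_revenue)  # 2.4 months
```

With these inputs, the Air Force method reports 3.0 months and the Navy method 2.4 months, illustrating why the reported figures are not comparable across services.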
Army officials told us that they did not adjust the revenue figure because (1) DOD’s guidance states that current year revenue should be used when calculating months of carryover and (2) doing so reflects the rate of actual workload execution for the entire year. However, Army headquarters and depot officials acknowledged in our discussions that it did not make much sense to adjust the carryover figure in the formula (numerator) for contractual obligations and other orders without making a corresponding adjustment to the revenue figure in the formula (denominator) for the related revenue. Further, the Army working capital fund activities where we performed work did not all calculate carryover the same way. For example, at least one Army activity did not use contractual obligations when calculating the number of months of carryover, even though the activity had such obligations. In addition, another Army activity did not use contractual obligations when computing the months of carryover until recently, when it calculated its actual carryover for fiscal year 2000. Karl Gustafson, William Hill, Ron Tobias, and Eddie Uyekawa also made key contributions to this report. | This report examines the working capital fund activities for the Department of Defense (DOD). GAO (1) identifies potential changes in current management processes or policies that, if made, would result in a more efficient operation and (2) evaluates various aspects of the DOD policy that allow Defense Working Capital Fund activities to carry over a 3-month level of work from one fiscal year to the next. GAO found that DOD lacks a sound analytical basis for its current 3-month carryover standard: DOD established the standard for most working capital fund activity groups without doing the analysis necessary to support it. 
Without a validation process, neither DOD nor congressional decisionmakers can be sure that the 3-month standard is providing activity groups with reasonable amounts of carryover to ensure a smooth transition from one fiscal year to the next or whether the carryover is excessive. In addition, carryover information currently reported under the 3-month standard is not comparable between services and is misleading to DOD and congressional decisionmakers. Specifically, results can differ markedly because the military services use different methods to calculate the number of months of carryover. Further complicating the congressional budget review of carryovers is that some activity groups have underestimated their budgeted carryover year after year, thereby providing decisionmakers with misleading year-end carryover information and resulting in more funding being provided than was intended. GAO also reviewed the potential financial impact of reducing the amount of fiscal year-end carryover permitted by DOD policy. GAO's analysis showed that if a 30-day, 60-day or 75-day carryover policy had been in effect during the fiscal year 2001 budget review process, the amount of budgeted customer orders could have been reduced by about $2.9 billion, $1.6 billion, or $1.0 billion, respectively. |
Identification, investigation, and cleanup of hazardous substances under DOD’s FUDS program are authorized by the Defense Environmental Restoration Program (DERP). Such actions must be carried out consistent with the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) as amended by the Superfund Amendments and Reauthorization Act of 1986 (SARA), which established DERP. The goals of the program also include the correction of environmental damage. To fund the program, SARA set up the Defense Environmental Restoration Account. DOD has established specific goals for the cleanup of properties, including FUDS, that have hazardous, toxic, and radioactive wastes in the soil and water. These goals include having an approved cleanup process in place or cleanup complete at 100 percent of all such properties by the end of fiscal year 2014. DOD has not yet set any goals for projects involving hazardous, toxic, and radioactive waste in containers, unexploded ordnance, other explosive wastes, or unsafe building demolition. Total spending for the FUDS cleanup program since fiscal year 1984 is $2.6 billion. During the past five fiscal years (1997-2001), annual program funding for FUDS cleanup averaged about $238 million, with program funding in fiscal year 2001 of $231 million. The Corps’ estimate of the additional cost to complete cleanup of the 4,467 currently identified projects is about $13 billion, not including program management or support costs or inflation beyond fiscal year 2007. Also omitted from the estimated cost is a revised cost projection for the cleanup of unexploded ordnance, which resulted from a recent survey of DOD training ranges. According to Corps officials, the revised cost projection for ordnance cleanup would add another $5 billion or more, depending on the level of cleanup selected, to the estimated cost to complete all FUDS projects. 
By the time all projects are completed, the Corps estimates that it will spend at least $15 billion to $20 billion cleaning up FUDS properties. At the current funding level, the Corps does not expect to meet the established goal of cleaning up FUDS properties with hazardous, toxic, and radioactive waste by fiscal year 2014, even if work could be deferred on all other projects, such as containerized wastes, unexploded ordnance, and building demolition, for which no goals have been established. In deciding which actions, if any, need to be taken at a potential FUDS property, the Corps generally follows the process established for cleanup actions under CERCLA. The process usually includes the following phases:
- Preliminary assessment of eligibility—The Corps determines if the property is eligible for the FUDS cleanup program based on whether there are records showing that DOD formerly owned, leased, possessed, or operated the property or facility. The Corps also identifies any potential hazard on the property related to DOD activities. The results of this assessment are detailed in an Inventory Project Report. If the property is eligible but there is no evidence of hazards, the property is categorized as requiring “no further action.”
- Site inspection—The Corps inspects the site to confirm the presence, extent, and source(s) of hazards.
- Remedial investigation and feasibility study—The Corps evaluates the risk associated with the hazard; determines whether cleanup is needed; and, if so, selects alternative cleanup approaches.
- Remedial action—The Corps designs the remedy, performs the cleanup, and conducts long-term monitoring if necessary.
When all of these steps have been completed for a given project, or if no cleanup is needed, the Corps considers the project to be “response complete.” After all projects at a property are designated as response complete, the property can then be closed out. 
Property closeout may require concurrence by federal or state regulators depending on the type of hazard involved. A flow chart showing the decision process in the preliminary assessment of eligibility phase is shown in figure 1. Upon completion of the preliminary assessment of eligibility phase, a property enters the site inspection phase. The site inspection phase involves a more detailed examination of the property and related records to confirm that a hazard exists and that a cleanup project is required to remove or reduce the hazard to a safe level. After the site inspection phase, the Corps conducts a remedial investigation to assess the risk posed by the hazard and determine if a cleanup is necessary. A feasibility study is then performed to select a cleanup approach. The Corps develops more detailed plans for constructing and carrying out the selected cleanup approach during the remedial design phase. A project next moves into the remedial action phase. The remedial action phase can involve several steps including constructing or installing the selected cleanup approach, operating the approach, and long-term monitoring, if necessary. A flow chart for the site inspection through long-term monitoring process is shown in figure 2. The Corps’ review of potential FUDS properties found that many properties are ineligible because they are still part of an active DOD installation or there are no records available showing that DOD ever owned or controlled the property. Many of the eligible properties did not require cleanup under the FUDS program because the Corps determined that no DOD-related hazards existed. As of October 1, 2000, there were 9,171 properties that had been identified by the Corps, the states, or other parties as potentially eligible for cleanup under the FUDS program. Of these properties, 9,055 had received a preliminary assessment of eligibility, 42 were still being assessed, and 74 properties had not been assessed yet. 
Based on preliminary assessments, the Corps determined that 6,746 properties were eligible and that 2,309 of the properties—more than a quarter of those assessed—were ineligible. In most cases, properties were ineligible either because the properties were still under DOD control (915) or because there were no records found showing that DOD had ever controlled the property (787). Table 1 shows the reasons that properties were found to be ineligible. Although the Corps initially found that 6,746 properties were eligible for cleanup, the Corps subsequently determined, on the basis of site inspections, that most of these properties do not require cleanup after all. Specifically, the Corps determined that 4,070 properties either do not have any hazards requiring DOD cleanup or else have hazards that do not meet the level requiring cleanup. Hazards requiring cleanup were found on 2,676 of the eligible properties. Figure 3 shows the breakout of properties by eligibility and those where hazards were found. The Corps identified 4,467 distinct projects requiring cleanup at the 2,676 properties that were identified as having hazards needing cleanup. At 25 of these properties, no specific projects have been identified as yet. However, after further investigation the Corps determined that projects identified at 405 properties were ineligible because other outside parties were responsible for contaminating the properties after DOD relinquished control. At another 33 properties, the identified projects were not recommended for further action or were not approved. The reasons for not recommending a project for further action or not approving a project varied. For example, the current landowner might have refused access to the property or might have already addressed the problem. The remaining 2,213 eligible properties had 3,736 projects requiring investigation and cleanup. 
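The property counts reported above tie out arithmetically; a quick consistency check, using only figures cited in this section:

```python
# Figures as reported in this section.
identified = 9171          # properties identified as potentially eligible
assessed = 9055            # received a preliminary assessment
in_progress = 42           # still being assessed
not_yet_assessed = 74

eligible = 6746
ineligible = 2309
no_hazard = 4070           # eligible but no hazards requiring cleanup
with_hazards = 2676        # eligible with hazards requiring cleanup

excluded = 405 + 25 + 33   # other-party hazard, no project yet, not approved

# The totals reconcile.
assert assessed + in_progress + not_yet_assessed == identified
assert eligible + ineligible == assessed
assert no_hazard + with_hazards == eligible
assert with_hazards - excluded == 2213   # remaining eligible properties
print("counts reconcile")
```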
Of these projects, 284 were not yet scheduled for action, 1,844 projects were under way or planned, and 1,608 were completed. Figure 4 depicts the status of FUDS projects with hazards that required cleanup actions. DOD reports on the status of its various environmental cleanup programs in an annual report to the Congress. However, as of the date of this report, DOD had not yet released its report for fiscal year 2000—the most recently completed fiscal year. According to the Corps’ FUDS database, there were 2,382 completed FUDS projects as of the end of fiscal year 2000, or about 53 percent of the nearly 4,500 FUDS projects that required cleanup. The completed projects figure includes those removed from the active inventory either as a result of a study or an administrative action or as the result of an actual cleanup action such as removing toxic wastes or treating contaminated groundwater. In fact, our analysis showed that over 57 percent of the projects reported as complete did not require any actual cleanup and were reported as complete on the basis of a study or an administrative decision. For example, 183 of the 205 unexploded ordnance projects reported as complete were closed based on a study, while only 22 required an actual cleanup phase. Further, the completed figure includes 774 projects that were ineligible for cleanup as part of the FUDS program. The Corps initially thought that these projects were eligible but later determined that they were ineligible because the contamination was caused by other parties after DOD relinquished control of the properties. The Corps made an administrative decision to classify these projects as “response complete” to remove them from its tracking system. If only the number of projects actually believed to require cleanup—3,148—was used as the basis for calculating cleanup progress, then only 1,020 projects or about 32 percent of those requiring cleanup have actually been cleaned up. 
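The gap between the reported and the effective completion rates comes straight from the arithmetic, using the project counts cited above:

```python
reported_complete = 2382   # includes study-only and ineligible closures
total_projects = 4467      # all identified FUDS projects requiring cleanup
actually_cleaned = 1020    # completed through an actual cleanup action
requiring_cleanup = 3148   # projects believed to require actual cleanup

print(round(100 * reported_complete / total_projects))    # about 53 percent
print(round(100 * actually_cleaned / requiring_cleanup))  # about 32 percent
```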
Further, according to Corps officials, most of the projects cleaned up to date were the least complex and least expensive ones, such as removing underground storage tanks (668 completed projects) or demolishing buildings (198 completed projects). On the other hand, many of the remaining cleanup projects are high cost and technologically difficult. Consequently, cleanup of the approximately 2,100 remaining projects will require at least $13 billion (revised estimates may raise this to $18 billion or more) and take more than 70 years to complete based on current planned funding of about $200 million per year. According to Corps officials, reporting of completed FUDS projects follows DOD’s reporting policies for all its environmental cleanup areas such as base closures and active installations. The more than 9,000 properties identified as potential candidates for cleanup as FUDS are distributed across every state, the District of Columbia, and six U.S. territories and possessions. However, there are large concentrations of potential FUDS properties in certain states. For example, 10 states account for almost 52 percent of all the properties, while 27 states have more than 100 properties each and represent over 81 percent of all the properties. Figure 5 shows the geographic distribution of potential FUDS properties. Unexploded ordnance and other explosive wastes were believed to contaminate over 1,600 FUDS properties, of which 753 were associated with former training ranges according to a recent DOD survey. Our review of the over 800 properties not designated as training ranges in DOD’s survey results showed that there may be 200 or more additional properties with training ranges that should be included in DOD’s range survey results. 
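The 70-plus-year estimate cited above is consistent with simple division of the remaining cost by the planned funding level; a rough check (the Corps' longer figure reflects inflation and the higher revised ordnance estimates):

```python
remaining_cost = 13_000_000_000   # at least $13 billion (Corps estimate)
annual_funding = 200_000_000      # about $200 million per year planned

years = remaining_cost / annual_funding
print(years)  # 65.0 — at least 65 years even at the low-end cost estimate
```

At the revised estimate of $18 billion or more, the same division gives 90 years or longer, consistent with the report's "more than 70 years."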
As discussed previously, most of the 9,171 potential FUDS are either ineligible for the cleanup program (2,309 properties) or do not require any environmental cleanup (4,070 properties) according to assessments made by the Corps; 116 properties were still being reviewed for eligibility and potential hazards. The remaining 2,676 properties were found to have sufficiently high levels of hazards to require cleanup. Of these, 463 properties were excluded because other parties were deemed responsible for the hazard (405 properties), or because no specific project had been identified as yet (25 properties), or because no projects had been identified or approved for further action (33 properties). Table 2 summarizes the eligibility status of the potential FUDS by geographic location. For the remaining 2,213 properties, a total of 3,736 projects were identified and approved for further action. The status of these projects varies from those that were only recently identified and have had no cleanup action taken as yet to those that are completed. Information on individual properties, by state, including the property name, location, congressional district, eligibility, existence of hazards, number of eligible projects, estimated costs incurred to date, and estimated cost to complete cleanup is contained in appendix I. Information on individual projects, by state, including the property name, location, congressional district, project number, type(s) of hazard, risk level, status of cleanup, cleanup remedy used, costs incurred to date, and estimated cost to complete cleanup is contained in appendix II. These appendixes are available only on the Internet at http://www.gao.gov/GAO-01-1012SP/. 
In response to the Senate Armed Services Committee’s direction to develop more complete information on the estimated cost to conduct environmental cleanup at training ranges, DOD conducted a survey of training ranges at its active, closing, and closed facilities to determine which ones might contain unexploded ordnance. Because DOD does not have a complete inventory of its training ranges, estimates of the funding necessary to clean up training ranges have been unreliable and are believed to be significantly understated. DOD’s survey results indicated that 753 FUDS properties that might contain unexploded ordnance should be classified as training ranges. For a variety of reasons, over 800 FUDS properties were not included in DOD’s survey. Many of these properties were excluded because the Corps had previously decided that, although there might be unexploded ordnance or other explosive wastes present, no further action was needed to address the hazards at these properties. We reviewed basic information about these properties, such as the name of the property and the project description, to see if there could be additional ranges not reported as part of DOD’s survey. For example, if a project with ordnance or explosive wastes was located at a property named “Bombing Range” or “Bombing Target” or was described as an ordnance or explosive wastes cleanup project at a bombing range or bombing target, we concluded that these properties were likely training ranges. We found over 200 properties that could be ranges based on such criteria. DOD’s annual report on the status of its environmental restoration activities can provide a misleading picture of FUDS program accomplishments. In its annual report, DOD’s accounts of completed projects include projects that were determined to be ineligible or that did not involve any actual cleanup effort, as well as projects that required actual cleanup actions to complete. 
As a result, it appears that after 15 years and expenditures of $2.6 billion, over 50 percent of the FUDS projects have been completed. In reality, only about 32 percent of those projects that required actual cleanup actions have been completed, and those are the cheapest and least technologically challenging. The Corps estimates that the remaining projects will cost over $13 billion and take more than 70 years to complete. The Corps’ reporting of completed FUDS projects reflects DOD’s reporting policies for all of its environmental cleanup programs, including those at closing bases and active installations. As such, progress on those cleanup programs may not be accurately pictured either. In addition, DOD’s range survey did not include all FUDS properties that may contain unexploded ordnance and could be former training ranges. Consequently, DOD’s inventory of FUDS training ranges is likely incomplete, and its estimated cost to clean up these ranges is likely understated. The Secretary of Defense should clarify DOD’s reporting of the cleanup progress at FUDS and for other DOD cleanup activities by excluding projects from its “completed” list that were closed solely as a result of a study or administrative action and did not require actual cleanup. Such projects should instead be reported as eligible properties where a hazard either was not found or did not require cleanup because it was below the threshold level or because it resulted from another party’s actions. Similarly, DOD’s annual report should exclude projects from its “completed” list that were determined to be ineligible for cleanup under the FUDS program. To improve the accuracy of DOD’s FUDS training range survey results and its estimate of the costs related to environmental cleanup at these ranges, the Secretary of Defense should direct the Corps to review the FUDS properties that were excluded in DOD’s initial survey to determine if any are training ranges that should be included in the survey. 
DOD provided oral comments that generally agreed with the need to clarify reporting on the status of the FUDS program and to review the unexploded ordnance projects that were excluded from its initial training range survey. DOD did not agree with the need to exclude from the list of completed projects those projects closed either as the result of a study or because they were determined to be ineligible. However, DOD did agree that it needs to clarify in future annual reports to the Congress that the restoration efforts on some projects were completed with a study phase and not a cleanup action. DOD did not specifically address how it would report on the ineligible projects that were being reported as completed. DOD also provided a number of technical comments and clarifications related to specific numbers and dollar figures in the report, which we addressed as appropriate in the body of the report. The scope of this review encompassed all potentially eligible properties included in DOD’s FUDS inventory as of the end of fiscal year 2000. To obtain information on the number of potential FUDS properties that are eligible and require or have required cleanup and on the geographic distribution, by state, of FUDS properties, we relied primarily on the Corps database of FUDS properties. To obtain information on those FUDS properties that contain or contained ordnance and other explosive wastes, we also relied on the Corps database of FUDS properties and on a database constructed by the Corps to respond to DOD’s range survey. We then compared those databases to determine which properties were included as part of the range survey and which were not. For those that were not included, we reviewed the property name and project description information to determine if there were additional properties that could be ranges based on these descriptors. The data in this report represent a static point in time—the end of fiscal year 2000. 
The Corps database of FUDS properties is used by the Corps on a daily basis to plan, schedule, and monitor the FUDS program, so there are constant changes and updates. Consequently, the numbers presented in this report may vary somewhat from other published sources; however, such variations represent the changing status of individual properties and projects, not material changes in the overall program status. On an overall level and as a measure of the FUDS program’s scope and efforts, we believe that these data represent a reasonable picture of the program at the end of fiscal year 2000. The Corps database of FUDS properties incorporates data from a previous Corps effort that did not contain all of the various categories of data in the current database. Consequently, for some properties and projects, particularly those that are no longer active, some information is dated and may not reflect current property conditions. We reviewed the Corps’ policies and procedures to verify the reliability of these data and found them to be reasonably accurate for our use. To the extent that we found material errors in the data, we worked with the Corps to correct those errors. We did not, however, attempt to independently assess the reliability of the data. We also acquired and reviewed program documents and interviewed Corps officials from headquarters, division, and district offices to obtain information about the FUDS program. We did not ask state officials to verify or confirm the Corps data for this review. We also contacted DOD and Environmental Protection Agency officials about aspects of the FUDS program. We conducted our review from November 2000 through May 2001 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. 
We will then send copies to the Secretary of Defense; the Director, Office of Management and Budget; the appropriate congressional committees; and other interested parties. We will also provide copies to others on request. Appendix I contains summary data on all 9,171 properties identified for potential inclusion in the FUDS cleanup program. The properties are listed by state, the District of Columbia, and six U.S. territories and possessions. For each property, the data include the property name, Corps’ property number, the county and congressional district where the property is located, the eligibility status, and whether hazards are present. Also included for eligible properties with hazards are the number of eligible cleanup projects, the actual cleanup-related costs incurred to date, and the estimated cost to complete the cleanup projects. All information is reported as of the end of fiscal year 2000. Appendix I is available only on the Internet at http://www.gao.gov/GAO-01-1012SP/.

The U.S. Army Corps of Engineers estimates that it will spend as much as $20 billion to clean up contamination at thousands of properties that were once owned, leased, or operated by the Defense Department (DOD). These properties contain hazardous, toxic, and radioactive wastes in the soil and water or in containers, such as underground storage tanks. The Corps is responsible for cleaning up the hazards, including removing underground storage tanks. DOD's annual report on its environmental restoration activities can provide a misleading picture of the Corps' accomplishments. DOD's accounts of completed projects include projects that were ineligible or that did not involve any actual cleanup effort. As a result, the impression is that, after 15 years and expenditures of $2.6 billion, more than half of the projects at formerly used defense sites have been completed.
In reality, only about 32 percent of those projects that required actual cleanup actions have been completed, and those are the cheapest and least technologically challenging. The Corps estimates that the remaining projects will cost more than $13 billion and take upwards of 70 years to complete. The Corps' reporting of completed projects reflects DOD's reporting policies for all of its environmental cleanup programs, including those at closing bases and active installations. As such, progress on those cleanup programs may not be accurately pictured either. In addition, DOD's range survey did not include all formerly used defense sites properties that may contain unexploded ordnance and could be former training ranges. Consequently, DOD's inventory of training ranges is likely incomplete, and its estimated cost to clean up these ranges is likely understated.
Federal agency use of ESPCs was authorized by the Congress to provide an alternative to direct appropriations for funding energy-efficiency improvements in federal facilities. Many agencies were hard-pressed to pay for planned maintenance and repairs in their facilities, let alone make more significant building improvements. As a result of this situation, many federal facilities were in a state of deterioration, with agencies estimating restoration and repair needs in the tens of billions of dollars. Although energy-efficiency improvements were likely to save money over the life of the investments and replace aging infrastructure, budgetary constraints often prevented agencies from receiving appropriations for such investments. Under the ESPC legislation, agencies could take advantage of private-sector expertise, often lacking at the agencies, with little or no upfront cost to the government. Under these contracts, private-sector firms are supposed to bear the risk of equipment performance in return for a share of the savings. This arrangement permitted agencies to meet mission requirements and reduce energy usage at the same time, while recognizing only the first year’s cost upfront in the budget. The Congress authorized agencies to retain some or all of any annual savings available after required contractual payments to the energy services companies have been made. To begin an ESPC project, agency officials work on their own or with the assistance of one of the federal contracting centers at the U.S. Air Force, the U.S. Army Corps of Engineers’ Huntsville Center, the Navy, or FEMP, to choose an energy services company for the project and to identify the energy-efficiency improvements the company will finance for the agency. Usually, multiple companies submit initial proposals that include information on their qualifications and preliminary cost and savings projections for the project.
During this phase, all costs are borne by the companies. To continue developing the project, the agency chooses one company and agrees to pay for a detailed energy survey. According to contracting center officials, this survey typically takes up to 1 year and includes such items as an assessment of baseline energy use and cost, projections of energy use and savings once the improvements have been put in place, maintenance schedules, and prices. Improvements must be “life-cycle cost effective,” that is, the benefits must meet or exceed total costs over the contract term. Determining life-cycle cost effectiveness is an agency responsibility, but the agency can request this service from the company, generally for a separate fee. A final proposal that includes the detailed survey becomes the basis for comment and negotiation between the agency and/or contracting center and the company. Included in these negotiations are such contract terms as the “markups” added to the direct cost of each improvement to cover the energy services company’s indirect costs and profit associated with its implementation, operations and maintenance arrangements, guaranteed savings amounts, financing, and methods to verify that savings are achieved. Once the agency and energy services company have reached final agreement on contract terms, the company designs and installs the energy-efficiency improvements and tests the improvements’ operating performance. Agency officials review test results and have the company make any necessary corrections. Installing, testing, and accepting the improvements typically takes up to 2 years. Upon accepting the project, the agency starts payments to the company, which must be supported by regular measurement and verification reviews. Although agencies may develop an ESPC themselves, doing so can be a complicated process; consequently, most agencies seek assistance from one of the contracting centers at DOD or FEMP.
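The “life-cycle cost effective” test described above reduces to a present-value comparison, sketched below in a few lines of Python. The 3 percent discount rate and dollar amounts are hypothetical, and actual determinations follow federal life-cycle costing rules rather than this simplified check.

```python
def is_life_cycle_cost_effective(annual_savings, annual_payment, years, discount_rate=0.03):
    """Simplified sketch: discounted savings over the contract term must
    meet or exceed discounted payments to the energy services company.
    Rate and cash flows are hypothetical illustrations."""
    pv_savings = sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
    pv_payments = sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv_savings >= pv_payments

# A project expected to save $120,000 a year against $100,000 in annual
# payments over a 15-year term passes the test.
print(is_life_cycle_cost_effective(120_000, 100_000, 15))
```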
To streamline the procurement process, three of these contracting centers—Air Force, U.S. Army Corps of Engineers’ Huntsville Center, and FEMP—have awarded super ESPCs, from which multiple projects can be developed, to prequalified energy services companies in different regions of the country. The super ESPC awards to selected energy services companies complied with Federal Acquisition Regulation rules and requirements for competition. With these multiple-award contracts in place, agencies can implement ESPCs in a fraction of the time it would take to undertake an ESPC alone because the competitive process to select qualified companies has been completed and key terms of the contract broadly negotiated, such as setting maximum markups the companies may charge. In addition to managing the super ESPCs, the contracting centers support agencies in negotiating aspects of specific projects for a separate fee. For example, FEMP provides facilitation services, where a third party assists the agency and energy services company in agreeing on terms such as markup rates, financing options, and the appropriateness of plans to measure and verify savings for proposed improvements. In addition, FEMP issues guidelines, offers training, and provides other support to agencies using the FEMP super ESPC. Under an ESPC, company-incurred costs are paid from savings resulting from improvements during the life of the contract. These savings include such things as reductions in energy costs, operation and maintenance costs, and repair and replacement costs directly related to the new efficiency improvements. In addition to direct costs for the improvements, other costs that savings should cover include financing charges, monitoring services, and company-provided maintenance. Savings to an agency must exceed payments to the energy services company. 
By law, aggregate annual payments by an agency to both utilities and energy services companies under an ESPC may not exceed the amount that the agency would have paid for utilities without the ESPC. To ensure that energy savings cover the contract costs, companies are required to guarantee the performance of the new equipment and assume the risk for its operation and maintenance during the contract, even though the agency may perform the maintenance. Agencies still assume some risks, for example, for changes in utility rates and in hours of operation, over which the energy services company has no control. To measure and verify that the guaranteed savings are achieved, an agency compares baseline energy usage and costs prior to the ESPC with consumption and costs after the improvements have been installed. Typically, the company develops a baseline during its detailed survey, while the agency is responsible for ensuring that the baseline has been properly defined. The company then estimates the energy that will be saved by installing the improvements and calculates the financial savings expected in the future. At least annually, and sometimes more often, the company provides measurement and verification inspections and reports to the agency to substantiate the expected savings. Several measurement and verification protocols are available to determine energy savings. For example, under FEMP guidelines, four options are discussed that range in complexity and costs. The simplest, and perhaps least expensive, option is to measure the capacity or efficiency of the new equipment and “stipulate” hours of operation, expected energy consumption, and other factors rather than specifically measure them. Such stipulation is often used for simpler improvements, such as lighting. A more costly option might include constant monitoring of energy usage through metering or computer simulation models of whole building energy consumption. 
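The baseline comparison and the statutory payment cap described above amount to simple arithmetic, sketched below with hypothetical dollar figures (the function names are ours, not drawn from any ESPC contract):

```python
def verified_savings(baseline_cost, post_install_cost):
    """Financial savings: baseline utility cost established before the
    ESPC minus measured cost after the improvements are installed."""
    return baseline_cost - post_install_cost

def within_statutory_cap(utility_payment, espc_payment, baseline_cost):
    """Aggregate annual payments to utilities and the energy services
    company may not exceed what the agency would have paid for
    utilities without the ESPC."""
    return utility_payment + espc_payment <= baseline_cost

# Hypothetical year: $1.0 million baseline, $700,000 post-install utility
# bill, and a $250,000 annual payment to the energy services company.
print(verified_savings(1_000_000, 700_000))
print(within_statutory_cap(700_000, 250_000, 1_000_000))
```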
These methods may involve metering performance and operating factors before and after the installation of the improvements. When choosing among the alternatives, agencies balance the need for accuracy of their estimates with the costs of verifying those estimates. As part of its guidance, FEMP includes a matrix that describes a number of factors and associated risks involving financial, operational, and performance issues. When guaranteed savings are not achieved because of the performance of the equipment, the agency may withhold payment from the energy services company until the conditions are corrected. As we reported in December 2004, while ESPCs provide an alternative financing mechanism for agencies’ energy-efficiency improvements, for the cases we examined, such funding was more expensive than using timely upfront appropriations. This is because the federal government is able to obtain capital at a lower financing rate than private companies can. In this regard, our earlier work examining six projects found that financing these projects with ESPCs cost 8 to 56 percent more than if the projects had been funded at the same time with upfront funds. The report noted that other factors, such as required measurement and verification of savings, may also affect the cost of projects financed with ESPCs. Agency officials commenting on this work agreed that timely upfront appropriations would be less costly than privately financing energy-efficiency improvements, if such appropriations were available, but stated that any delays in funding would result in a subsequent loss of energy and cost savings and that these losses over time could offset the lower financing costs of the upfront funding. We did not analyze the likelihood or the costs of such delays.
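The effect of the government's lower borrowing rate can be illustrated with a simplified level-payment amortization. The 4 and 7 percent rates and the $10 million project below are hypothetical, chosen only to show how a rate spread compounds into a total-payment premium of the general magnitude reported (8 to 56 percent):

```python
def level_payment(principal, rate, years):
    """Level annual payment that amortizes `principal` at `rate` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

def private_financing_premium(principal, gov_rate, private_rate, years):
    """Percent by which total payments under private financing exceed
    total payments had the government borrowed at its own, lower rate.
    All inputs here are hypothetical illustrations."""
    gov_total = level_payment(principal, gov_rate, years) * years
    private_total = level_payment(principal, private_rate, years) * years
    return (private_total - gov_total) / gov_total * 100

# Hypothetical $10 million project over 15 years: 4 percent government
# borrowing rate versus 7 percent private financing rate.
print(round(private_financing_premium(10_000_000, 0.04, 0.07, 15), 1))
```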
During fiscal years 1999 through 2003, numerous agencies undertook ESPCs to finance energy-efficiency improvements, committing the federal government to annual payments totaling about $2.5 billion over the terms of these contracts. The use of ESPCs has been geographically widespread, with many types of equipment installed, and the extent of use has varied across the agencies. During our review, we found that there is no source of comprehensive data on federal agencies’ use of ESPCs, whether in DOE, the contracting centers, or the agencies. DOE is required to collect data on the numbers, costs, and expected energy and financial savings for the new ESPCs that agencies undertake each year and report these data annually to the Congress. The data in DOE’s reports, however, were not adequate for our review for several reasons: they did not include some critical elements, such as actual energy savings; they were not cumulative from year to year; and they did not include ESPCs begun in fiscal year 2003 because DOE has not yet issued the report for that year. Similarly, the DOD and FEMP contracting centers’ data were not comprehensive enough for our purposes. The centers’ data were limited to those contracts for which they provided assistance; like DOE’s reports, they did not include certain critical elements; and, with the exception of the Navy’s, did not incorporate information on modifications or progress on the contracts past the point at which the centers’ assistance to the agency was completed—usually only up to 1 year after the contract was signed. Furthermore, most agencies do not have a comprehensive, centralized electronic or paper system for tracking their ESPCs and keep some contract data only in project files at the facilities where the contracts are being implemented. Consequently, to examine ESPC use across the federal government, we obtained data from the four contracting centers and from the seven agencies included in our review.
We combined the data from all the agencies into a consistent format, deleted duplicate records, and performed basic tests to ascertain the reliability of the data. Although the data for some projects were incomplete, the overall results of our analyses appear to be consistent with information published from other sources. The results of our analyses follow. During fiscal years 1999 through 2003, 20 agencies undertook 254 ESPC projects to finance investments in energy-efficiency improvements. The ESPCs commit the federal government to annual payments totaling about $2.5 billion over the terms of these contracts, conditional on verification of the savings guaranteed in the contracts or, where savings are stipulated, on the stipulated amounts. Because energy services companies are accountable for guaranteeing the performance of the equipment installed, if savings are reduced due to equipment performance, the company must correct any related problems. In some instances, the contract may stipulate an amount of savings that will be achieved. In the event that this stipulation overstates actual savings, the agency must still make payments based on the amount of savings stipulated. However, if the stipulation understates savings, the agency obtains the additional savings at no additional cost. Table 1 shows the numbers and costs of ESPCs the 20 agencies undertook, as well as the percentage of total ESPCs attributable to each agency. The size of ESPC projects varied greatly over the 5-year period, ranging from $241,943 to $137,515,074. About 72 percent of the projects in this time period are valued at $10 million or less, as shown in figure 1. The contract length of all ESPC projects ranges from 5 to 25 years, with an average of 15.8 years. Using the ESPCs, agencies financed energy-efficiency improvements that have been or are in the process of being installed at locations in 49 states and on U.S. military installations in Guam, Cuba, Italy, Germany, and Korea.
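The stipulation rule just described can be expressed as a small sketch. The dollar amounts are hypothetical, and the three-way breakdown of outcomes is our own framing of the rule, not contract language:

```python
def stipulated_outcome(stipulated_savings, actual_savings):
    """Payments are keyed to the stipulated amount regardless of actual
    performance: a shortfall is the agency's exposure, while any excess
    is retained by the agency at no additional cost."""
    payment_basis = stipulated_savings
    agency_shortfall = max(stipulated_savings - actual_savings, 0)
    extra_retained = max(actual_savings - stipulated_savings, 0)
    return payment_basis, agency_shortfall, extra_retained

# Stipulation overstates savings: the agency still pays on $500,000.
print(stipulated_outcome(500_000, 450_000))
# Stipulation understates savings: the agency keeps the extra $75,000.
print(stipulated_outcome(500_000, 575_000))
```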
Numerous types of energy-efficiency improvements were financed, including replacement of boiler and chiller plants for heating and cooling, energy management control systems, geothermal heat pumps, and lighting. In the largest ESPC project during the 5-year period, the Marine Corps committed to spend almost $138 million at a facility in California to install a cogeneration plant, solar hot water and photovoltaic systems, heating, ventilating, and air conditioning at various sites, and waste water pump upgrades. This ESPC project, awarded in July 2002, has a contract term of 18 years. The extent to which agencies have used ESPC-financed projects has varied, as shown in table 1. DOD agencies have used the contracts the most, undertaking 153 ESPCs to finance about $1.8 billion in costs at about 100 military installations during the 5-year period. DOD officials told us they relied heavily on ESPCs to achieve energy infrastructure improvements, in part because of difficulties they encountered in obtaining adequate upfront funding for energy projects that were not categorized as being mission-critical. They noted that these improvements also helped the agencies meet other national energy goals. After DOD, the General Services Administration (GSA) and Veterans Affairs (VA) used ESPCs the most during the 5-year period, undertaking 30 and 24 projects, respectively. Together these agencies account for about 21 percent of projects. Both GSA and VA officials told us that adequate upfront funding for their energy projects has been difficult to obtain in recent years. At the same time, they have faced increasing backlogs of these projects in their capital management plans. Consequently, the agencies have moved toward using more ESPCs to meet mandated energy reduction goals and to make badly needed upgrades to aging and inefficient equipment.
DOE’s departmental ESPC projects represent about 4 percent of the total projects undertaken over the period, valued at about $38 million. DOE officials told us that the agency has mainly used ESPCs since 1999 to supplement limited upfront funding for energy-efficiency projects. Among civilian agencies, DOE ranks behind only GSA and VA in federal facility square footage; however, the agency has not been among the largest users of ESPCs for two reasons. First, the agency has found it relatively easy to meet its mandated energy reduction goals because it has in recent years closed a number of its facilities, such as those producing nuclear weapons, that were no longer needed. Second, many DOE facilities have negotiated low utility rates or are in regions of the country where utility rates are relatively low. This makes developing an ESPC for which savings will cover costs difficult, because the low utility rates hold down the amounts that can be saved with the energy-efficiency improvements. As a result, DOE’s major goal in using ESPCs, we were told, has been energy infrastructure improvement. Of the seven agencies in our review, the Department of Justice (Justice) used ESPCs the least, undertaking only two ESPCs totaling about $43 million in costs. According to Justice officials, because many of their facilities are prisons, security concerns can make undertaking energy-efficiency projects on existing buildings difficult. Nonetheless, the agency undertook two ESPC projects in 2003, one each under the Bureau of Prisons and the Federal Bureau of Investigation. According to the officials, the agency undertook the ESPCs because it was concerned about meeting the mandated energy reductions, and upfront funding for energy-efficiency projects was decreasing.
In addition, for one of the projects, the agency saw a chance to use an ESPC to accomplish environmental goals established by Executive Order 13123, such as making more use of renewable energy. In that case, the agency undertook a project at a California prison site. After the California energy crises in 2000 and 2001, the agency sought to decrease its dependence on the electricity grid, so the project included installation of renewable energy sources, including a wind turbine and a photovoltaic panel, which furthered the agency’s energy security interests as well as helping it meet its energy reduction and environmental goals. Finally, five agencies—the Departments of Commerce and State, the Environmental Protection Agency, the John F. Kennedy Center for the Performing Arts, and the National Gallery of Art—that we did not contact for additional information for our review each undertook one project during the 5-year period. We did not receive cost data for the Kennedy Center. The other four totaled about $35 million in costs. Figure 2 shows agency use of the contracting centers at the Air Force, the U.S. Army Corps of Engineers’ Huntsville Center, the Navy, and FEMP for fiscal years 1999 through 2003. With the exception of 2002, the data show that, over the period, agencies increasingly used FEMP’s contracting center relative to the other agencies’ centers. Although there was an average of 51 ESPC-financed projects undertaken each year, there was a 54 percent increase in projects awarded from 2002 (37 projects) to 2003 (57 projects). According to agency officials, this increase was largely because agencies put significant effort into awarding ESPC-financed projects, anticipating the sunset of the legislation on October 1, 2003. This was particularly true for ESPCs done through FEMP’s contracting center. As discussed previously, on October 28, 2004, ESPC authority was renewed through fiscal year 2006.
ESPCs awarded by federal agencies to finance energy-efficiency improvements are expected to achieve energy savings worth at least $2.5 billion during the life of their contracts. Agencies estimate that they are annually reducing energy use by at least 9 million MMBTUs. Some savings are also expected to continue after the ESPCs end. Agencies receive other benefits through ESPCs as well, such as environmental improvements and better mission capability resulting from replacing aging infrastructure with more reliable equipment. Although these benefits could be achieved through up-front appropriations at a lower cost, this funding has often not been available on a timely basis. Furthermore, ESPCs provide additional benefits not typically associated with investments purchased through upfront appropriations, such as shifting some of the performance risk of the equipment to the energy services companies and allowing agencies to more easily combine multiple energy-efficiency improvements into an integrated package. Over the life of the ESPC financed projects included in our review, agencies expect to achieve energy savings worth at least $2.5 billion and amounting to over 9 million MMBTUs, as shown in table 2. These estimated savings are likely to be understated because the agencies did not report financial savings for 17 projects and energy savings for 45 projects. The military services account for about 64 percent of the financial savings and about 71 percent of energy savings for the ESPCs awarded during the 5 years. Savings at some specific locations are expected to be substantial. For example, reported data show that total estimated savings at each of three military installations will exceed $100 million, ranging from $117 to $138 million for a total of $378 million. The ESPC at Elmendorf Air Force Base in Alaska is expected to reduce the base’s energy consumption by more than 1 million MMBTUs per year, which are valued at $123 million for the 22-year contract term. 
According to the base energy manager, this is the largest ESPC ever awarded by the Air Force. The installation of energy-efficient equipment has already resulted in some energy savings and is expected to result in further savings, lower utility bills, and reduced operations and maintenance expenses. Over the 5-year period, the agencies estimate they reduced their energy use by at least 9 million MMBTUs annually. According to agency officials, these reductions have assisted, and will continue to assist, agencies in meeting their mandated goals for reducing BTUs of energy used. For example, agencies reported that they exceeded their goal for fiscal year 2000—a 20 percent reduction in BTUs of energy consumed relative to their fiscal year 1985 usage—by 4 percentage points. Agencies report their progress in meeting the goals for each agency as a whole and do not indicate the portion that could be attributed to the agency’s ESPCs. However, officials we interviewed representing most of the agencies believe they would not have met the 2000 goal without the contracts. Furthermore, they expect that their ability to meet the remaining goals—a 30 percent reduction by fiscal year 2005 and 35 percent by fiscal year 2010—will depend largely on being able to use ESPCs to finance energy-efficiency improvements. DOD officials told us that in recent years ESPCs have accounted for over half of DOD agencies’ annual energy savings. Furthermore, they believe that DOD will have significant difficulty in achieving the 2005 energy reduction goal because a number of ESPC projects planned for fiscal years 2004 and early 2005 were not undertaken while authority for ESPCs was suspended. DOE is an exception—according to DOE officials, the agency has already met its goals for 2005 and 2010, largely because it has closed facilities that produced nuclear weapons, thereby significantly reducing the energy consumed by the agency.
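The goal arithmetic works as follows, using a hypothetical agency baseline; the mandated goals are measured against fiscal year 1985 consumption:

```python
def reduction_vs_baseline(baseline_btu, current_btu):
    """Percent reduction in energy use relative to the fiscal year 1985
    baseline, the metric behind the mandated goals (20 percent by 2000,
    30 percent by 2005, 35 percent by 2010)."""
    return (baseline_btu - current_btu) / baseline_btu * 100

def meets_goal(baseline_btu, current_btu, goal_percent):
    return reduction_vs_baseline(baseline_btu, current_btu) >= goal_percent

# A hypothetical agency using 76 trillion BTUs against a 100-trillion
# baseline has cut consumption 24 percent, clearing the fiscal year 2000
# goal but not yet the 2005 goal.
print(meets_goal(100e12, 76e12, 20), meets_goal(100e12, 76e12, 30))
```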
Agencies may also benefit from substantial energy and financial savings once the contracts are paid for. Energy and related financial savings should continue beyond a project’s payback period through annual energy savings, as well as through reduced operations and maintenance costs. Currently, financial savings retained by agencies are small because most agencies use their savings to pay off their contracts with the energy services companies as quickly as possible, thereby reducing debt more rapidly and saving interest costs to the government. For example, GSA, which currently pays energy services companies 98 percent of the agency’s annual financial savings from ESPCs, estimates that it will save about $16 million annually from its 30 projects after it has repaid the companies. Similarly, data provided by the Air Force and the Navy show expected annual financial savings for those agencies of almost $45 million and $40 million, respectively, once the contracts are paid for, and the Army and Marine Corps also expect their projects to garner financial savings past the contract terms. In another instance, officials at Fort Bragg told us that they would continue to obtain lower utility rates, which were negotiated as part of the ESPC by the energy services company, even after the contract period. In addition to energy savings and lower overall utility costs, ESPC-financed projects, like projects funded with upfront appropriations, can provide agencies with environmental benefits through installation of newer, cleaner technologies. The ESPC-financed projects in our review, we were told, are assisting the agencies in eliminating environmental hazards, reducing outdoor air pollution, and improving indoor air quality. The project at Elmendorf Air Force Base allowed the Air Force to replace old steam plants insulated with asbestos, a known environmental hazard. In another instance, in the ESPC at Portsmouth Naval Shipyard, in Maine, the Navy installed a cogeneration unit for generating power.
As a result, the shipyard eliminated its reliance on bunker fuel oil and is producing significantly fewer greenhouse gas emissions. ESPC-financed projects also allow agencies to replace aging infrastructure without having to obtain upfront appropriations. Officials at six of the seven agencies in our review noted the importance of using ESPCs to replace aging infrastructure. The upgrades, the officials told us, improved the agencies’ abilities to carry out their primary missions and provide a more comfortable work environment for employees. At Elmendorf Air Force Base, for example, the energy manager told us the base was able to replace a 50-year-old cogeneration power plant with a new, much more efficient decentralized natural gas system. Navy officials told us they faced a similar situation at their Portsmouth facility with a failing power plant built in 1945. The backlog of maintenance work on the power plant was continuing to increase. Given the shipyard’s location in Maine, with its severe winter weather, and the continual repairs the old power plant needed, an upgrade was essential to support the nuclear submarines at the shipyard. The officials noted that each day of lost power cost the shipyard $1.5 million. By using an ESPC to replace the power plant, the base was able to eliminate eight full-time staff positions (saving about $448,000 annually) because the new power plant is easier to operate and does not require frequent emergency maintenance, as the old one did. Although the benefits from ESPC-financed projects discussed above could be achieved using upfront funding, agencies have found that sufficient amounts of such funding were generally not available—making it necessary for the agencies to use ESPCs to supplement the upfront funding they receive in order to obtain these benefits.
A study by Oak Ridge National Laboratory that compared ESPCs with upfront-funded projects concluded that when sufficient upfront funds are not available, the most expensive choice may be to do nothing, allowing inefficient equipment to remain in service and wasting funds on unnecessary energy use and emergency repairs and replacement. Officials at six of the seven agencies we reviewed—the Air Force, the Army, GSA, Justice, the Navy, and VA—told us that, in spite of attempts to obtain upfront appropriations for energy projects, adequate amounts of such funds were generally not available. For example: GSA officials said the agency received no funds for any energy-efficiency work included in their capital management plans for fiscal years 2002 and 2003, although they requested $32 million and $8 million, respectively. As a result, they used other financing options, such as ESPCs. Army officials at Aberdeen Proving Ground noted that failing heating and air conditioning systems in the base’s family housing had become a fire hazard and were too expensive to maintain. These officials said they repeatedly attempted to obtain upfront appropriations for the upgrades but, being unsuccessful, negotiated an ESPC. Navy officials told us their planned investments for energy-efficiency projects range from $100 million to $150 million annually in order to meet their BTU reduction goals. However, because the Congress provides only $50 million for all of DOD, and the Navy gets only about $15 million of that amount—or none, as in fiscal year 2000—the Navy questions the usefulness of requesting the funds while forgoing energy-efficiency improvements. Furthermore, officials at both the VA and the Navy told us that even when they can obtain upfront funds, the project typically takes 4 to 5 years to obtain approval and be completed, compared with about 2 years for an ESPC.
Navy officials pointed out that upfront-funded projects take longer because projects must be submitted 2 years in advance of the budget year; in addition, they said that most projects are not fully funded and have to be resubmitted in subsequent years. According to these and other agency officials, their agencies were achieving savings through lower utility bills and reduced operation and maintenance costs during the extra years that equipment installed under ESPCs was operational. DOE’s Oak Ridge National Laboratory reported in March 2003 that, on average, upfront-funded projects that were approved took 63 months to award, design, and construct, compared with 27 months for ESPCs. In a recent report, GAO performed a case study analysis of six ESPC projects and compared the actual costs of financing the energy-efficiency improvements incurred in the ESPCs with an estimate of what the financial costs would have been had the improvements been paid for through timely upfront appropriations. We found that the financial cost to the government of private financing was significantly higher than that of upfront appropriations and that monitoring and verification costs—included with ESPCs but typically not included in projects paid for with upfront appropriations—added to the cost difference between private and upfront financing. Specifically, our case studies found that ESPC-financed projects increased the government’s cost of acquiring the energy-efficiency improvements by 8 to 56 percent compared to timely, full, upfront appropriations. Our analysis assumed that the energy savings and other benefits associated with the energy-efficiency improvements were independent of how they were financed.
While our earlier work found higher financing costs associated with the use of ESPCs, a recent study of ESPCs, undertaken by the Lawrence Berkeley National Laboratory, analyzed both the costs and government benefits of 109 ESPCs and compared the net benefits of these projects with the net benefits under several alternative scenarios involving direct, upfront appropriations. The study assumed that the performance of the installed equipment depended to varying degrees on which financing method was used. Specifically, the authors evaluated scenarios in which energy savings from equipment installed using upfront appropriations decay over time (1 or 2 percent per year) because projects funded upfront typically do not include the same level of monitoring and verification to ensure sustained performance of the equipment. The study concluded that “delays of more than one year in obtaining congressional appropriations result in reduced net benefits relative to ESPC-financed projects.” Although we did not independently verify all of the study’s assumptions, data, and results, we did review several studies of energy audits that the Lawrence Berkeley authors used to support their assumption that energy systems’ savings decay in the absence of proper monitoring and verification. In discussions with experts on the performance of energy equipment, we were told that many energy-efficiency improvements require careful monitoring and verification to ensure that they perform up to their specifications and that, without such monitoring and verification, energy savings would indeed decay over time, in some cases very quickly; however, we found that agencies often lack sufficient expertise to monitor and verify the performance of energy equipment on their own. Thus, although we could not determine the actual extent of savings decay for upfront-funded projects, there is evidence that such decay occurs.
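The decay scenario the study evaluated can be sketched as follows; the first-year savings figure, decay rate, and appropriation delay below are hypothetical values chosen to mirror the study's stated assumptions, not its data:

```python
# Sketch of the savings-decay scenario: equipment installed with upfront
# appropriations loses a fraction of its annual energy savings each year
# (no sustained M&V), while ESPC-installed equipment holds its savings.
# An appropriation delay also forfeits savings entirely for those years.
# All figures are illustrative assumptions.

def cumulative_savings(annual_savings, years, decay=0.0, delay=0):
    """Total savings over `years`, with per-year fractional `decay`
    and an initial `delay` (in years) before the project operates."""
    total = 0.0
    for year in range(years):
        if year < delay:
            continue                       # project not yet installed
        total += annual_savings * (1 - decay) ** (year - delay)
    return total

annual_savings = 1_000_000   # assumed first-year savings, dollars
horizon = 15

espc = cumulative_savings(annual_savings, horizon)            # no decay, no delay
upfront_delayed = cumulative_savings(annual_savings, horizon,
                                     decay=0.02, delay=2)     # 2% decay, 2-yr delay

print(f"ESPC savings over {horizon} years:       ${espc:,.0f}")
print(f"Delayed upfront project, 2% decay: ${upfront_delayed:,.0f}")
```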
While it is likely that agencies could purchase monitoring and verification services from the private sector for equipment paid for with upfront appropriations, they have typically not done so in the past, and the additional cost of doing so is unknown. We cannot determine definitively the extent to which reduced savings decay and other benefits of ESPC-financed projects may offset the significant savings from upfront funding that we found previously in six case studies. ESPCs also provide two benefits not typically associated with investments purchased through upfront appropriations: (1) some performance risk is shifted from the government to the energy services companies and (2) agencies find it easier to combine multiple energy-efficiency improvements into an integrated package. First, as noted by agency officials and industry experts, because ESPCs require energy services companies to guarantee equipment performance over the lifetime of the contract, which in turn yields energy savings, agencies benefit as these risks are shifted from the agencies to the companies. As part of these guarantees, energy services companies are ultimately responsible for ensuring that adequate operations and maintenance are conducted and for repairing or replacing equipment if it fails. These requirements reduce the risks from possible faulty engineering, poor equipment installation, or equipment failure. For projects funded with upfront appropriations, energy services companies are generally responsible for equipment risks only during the warranty period, which typically is shorter than an ESPC’s contract guarantee. While it may be possible to supplement upfront-funded projects with additional warranty or performance coverage, agency officials told us that this would add costs and typically is not done.
According to FEMP ESPC program managers, ESPCs create an incentive for energy services companies to develop highly efficient improvements and maintain the equipment in peak operating condition. This incentive occurs because the companies’ compensation is directly linked to the savings achieved through their work. Officials from both the Navy and the Army told us that because the value of energy savings must cover the annual payments to the energy services company, the company bears the risk when it encounters problems. For any equipment performance problems that are defined as company risks and were not explicitly determined to be agency risks, the agency can withhold future payments from the energy services company until the problem has been corrected. Officials at Fort Bragg told us that they withheld payment from a contractor for a short period until an equipment problem on their ESPC was fixed. In many cases, the agency, rather than the energy services company, performs the operations and maintenance. An official from the DOE departmental energy management program, however, noted that it is not altogether clear, when a piece of equipment fails, whether payment to the energy services company can be stopped directly or whether a review of maintenance records, for example, must first be performed to determine if the agency or the company is responsible for the failure. Typically, when problems occur with equipment purchased with upfront funds and the warranty period is over, the agency is responsible for fixing or replacing the equipment at its own expense. Second, with ESPC-financed projects, agencies find it easier to bundle a number of energy-efficiency improvements so they can function as an integrated system. In this way, one energy services company is responsible for the guaranteed performance of all the equipment.
Agency officials told us that, due to tight budgets, upfront funding is limited even when it is available, and the agency can typically install only a few of the necessary energy-efficiency improvements. They said it may be years before the agency receives authority to fund additional projects and, due to the competition requirements of federal procurement practices, it is quite possible a different energy services company would be selected to install them. Besides the potential problems of integrating the controls for system components installed by two different companies, some savings that would have been obtained if all energy-efficiency improvements had been installed at one time, without delay, are lost. Energy savings can be achieved more quickly through an integrated approach than by implementing efficiency improvements on a piecemeal basis. The lack of a performance guarantee over the life of equipment purchased with upfront funding, and the uncertain, episodic nature of upfront funding, can make those projects riskier and less capable of generating an integrated approach to energy management for new and existing equipment. Agencies generally believe that ESPCs’ financial savings cover the costs because they design their contracts so that they do and because they must obtain verification reports from the energy services companies confirming this point or take steps to correct shortfalls in savings. They cited examples of projects that realized savings in excess of costs and provided data on verified savings for most of their projects. However, the data provided were insufficient for us to conclude whether savings covered the costs of the projects in our review. Furthermore, our work, agency audits of ESPCs, and agencies’ differing interpretations about the components of costs that must be covered by savings caused us to question whether savings consistently cover costs.
FEMP officials recognize the difficulty in ensuring that actual savings cover costs and have formed a special working group to address uncertainties regarding savings. In response to statutory requirements, agencies design ESPCs so that savings are sufficient to cover costs. In addition, the agencies refrain from committing themselves to ESPCs when they determine beforehand that savings will be inadequate or when the payback will exceed their preferred time frames for the contracts. For example, a DOE official cited several departmental projects that advanced to the final proposal stage but that the agency dropped because the economics for the projects were either poor or the agency did not agree with the savings projections. For one project, the low utility rate (which reduced the amount of savings that could be accrued) and the high cost of performing the work in an area with access controlled for security reasons forced the project’s abandonment. In another case, the agency did not agree with the company’s projected savings and believed that very little savings would be achieved. FEMP officials noted a requirement for performing a life-cycle cost analysis of individual energy-efficiency improvements, which are then bundled to ensure that the project’s overall savings cover costs. Another reason for agencies’ general confidence regarding savings is that energy services companies are required to submit annual measurement and verification reports confirming the savings and, in case of a shortfall, take corrective steps to recoup the savings. These annual reports provide the specific figures on which agencies base their payments to the energy services companies. In some cases, the reports are updated quarterly to give the officials monitoring the project more current data on the performance of equipment, enabling them to spot shortfalls in savings and have the energy services company correct them quickly. 
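The design requirement that a project's overall savings cover its costs can be sketched as a simple feasibility check over a bundle of improvements. The improvement names, costs, savings, and financing terms below are all hypothetical:

```python
# Hypothetical sketch of bundling energy-efficiency improvements so the
# project's overall annual savings cover the annual contract payment.
# Improvement names, costs, savings, and financing terms are illustrative.

def annual_payment(principal, rate, years):
    """Level annual payment that amortizes `principal` at `rate` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

# (installed cost, expected annual savings) per improvement
improvements = {
    "lighting retrofit":  (1_500_000, 400_000),   # fast payback
    "boiler replacement": (3_000_000, 350_000),
    "building controls":  (  800_000, 150_000),
    "chiller upgrade":    (2_200_000, 180_000),   # slow payback on its own
}

rate, term = 0.07, 15
total_cost = sum(cost for cost, _ in improvements.values())
total_savings = sum(savings for _, savings in improvements.values())
payment = annual_payment(total_cost, rate, term)

print(f"Bundled annual savings: ${total_savings:,.0f}")
print(f"Annual ESPC payment:    ${payment:,.0f}")
print("Savings cover payments:", total_savings >= payment)
```

Bundling lets fast-payback improvements (such as the hypothetical lighting retrofit) carry slower ones, so the package as a whole clears the savings-cover-costs test even when individual items would not.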
In addition, agency officials cited projects that realized savings in excess of costs. For example, the ESPC at Fairchild Air Force Base in Washington State has garnered about $180,000 more per year than it cost. The extra savings have resulted from the equipment operating more efficiently than estimated and actual utility costs that were higher than estimated in the contract. We asked the seven agencies in our review to provide data on verified savings for each of their projects. In many cases, the projects have not entered their performance periods, so verified savings data are not yet available. To approximate the number of projects that should have verified savings available, we looked at the 111 projects (about 44 percent of the projects) that had been under way for 3 years or more and could reasonably be expected to have at least 1 year of verified savings to report. In this regard, the seven agencies reported verified savings for most of the 111 projects, but they did not provide cost data that could be compared with the annual verified savings. We did not take steps to obtain the data, which are contained in files at projects located across the country. Thus, we could not conclude from the data provided to us that verified savings were, in fact, covering the costs of these projects. Furthermore, while federal officials are expected to accompany energy services company officials when the data are being gathered for the reports to provide an extra level of confidence in the data’s validity, FEMP officials cautioned that this added check may not be happening as often as it should. An additional limitation of the data is that the measurement and verification process relies not only on actual measurements but on estimates as well. As will be discussed more fully later, estimates may be used extensively in this process, introducing the possibility of incorrect assumptions and errors in the calculations. 
Moreover, the process evaluates not only the performance of the equipment but also additional factors, such as the cost of energy, that affect actual savings. Agencies cited specific projects in which the savings have not covered costs. According to a DOE departmental official, savings for 4 of its 10 projects have fallen short of costs because of unexpected problems. DOE’s analysis has shown that, in three of the four instances where savings are inadequate, the shortfall has resulted from unpredictable mission changes in the use of the facilities. For example, in one of these cases, the discovery of beryllium contamination forced the closure of some of the buildings involved in the contract. Reductions in electricity consumption accounted for the fourth case. In this instance, in 7 of 12 months each year, DOE does not meet the projected minimum electrical demand and has to pay for the demand it does not use. As a result, for 7 months of the year, the new equipment associated with the project provides no electrical demand savings, so the overall cost savings of the equipment are less than expected. In general, according to the DOE official, it is extremely difficult to accurately predict all the variables that affect energy savings over the 10- to 15-year ESPC contract term, so agencies have to bear some of the risk of inaccurate assumptions made at the outset. While most agencies have not audited their ESPCs, Army and Air Force audits of ESPCs have found several instances in which savings may not have covered costs. For example, a 2002 Army audit of a 1999 project covering five locations found the Army could pay about $96 million that may not be covered by savings over the 18-year life of the project because the savings that the Army agreed to were overestimated.
First, the report found the baselines were incorrect because the contractor inflated labor costs in the operation and maintenance baseline by $66 million over the life of the project. Second, it found the contractor also overstated the baselines for electrical consumption and water conservation by more than $30 million over the life of the project. This inflation of both baselines occurred, according to the report, because the agency relied heavily on the contractor to prepare them. Other major contributing factors, the report stated, appeared to be insufficient time to review contract proposals and a desire to award the contract and pay the contractor prematurely. As a consequence, the report concluded, the agency could pay for nonexistent savings over the term of the contract. As another example of questionable estimates, the contractor for an Air Force ESPC increased the original consumption baseline by over 11,000 kilowatts with no indication that Air Force officials questioned this adjustment. Poor documentation adds to the problem of ensuring that savings cover costs. For example, energy services companies at 8 locations reviewed in a 2003 Air Force audit reported savings of $6.7 million associated with $78 million in ESPC investments, but civil engineering officials could not provide evidence that they had reviewed or validated these numbers. The auditors projected the results for the sample of 8 locations to the 36 included in their review and concluded that the lack of documentation made it impossible to assess the savings that the agency will receive for about $600 million in costs for energy-efficient equipment. As noted in the report, this condition occurred because Air Force guidelines did not specifically require maintenance of baseline supporting documentation, a methodology for savings computation, or validation of cost savings.
In response to recommendations in the report, Air Force officials stated that they were taking steps to correct these problems. However, we could not determine the status of the payments for either the Army or the Air Force projects in these audits because the audit documentation did not indicate whether payments were made despite the potential savings shortfalls. Accurately calculating financial savings is fundamentally difficult for agency officials. Major components of financial savings—baseline energy consumption, the consumption once the energy-efficiency improvements have been installed, and the cost of energy associated with both the baseline and the later consumption—are partly stipulated, or estimated, rather than actually measured. In this regard, striking the “right” balance between stipulation (which is less costly but also less accurate) and measurement (which is more costly but also more accurate) is a challenge for agencies. To the extent that stipulation is used in lieu of actual measurement, according to DOE officials, savings calculations may be based on inadequate data or incorrect assumptions, which contribute to uncertainties about the actual savings. Agency officials commented on how difficult it is to identify the consumption and cost of energy, which form the basic equation (consumption times cost) that establishes the energy-related baseline and the future financial savings. For example, contractual arrangements with regard to consumption can affect the savings. In the case of “take-or-pay” contracts, agencies may have to pay for a certain amount of projected minimum demand even if they do not actually use it. As noted earlier, this situation has occurred in one of DOE’s ESPCs and has reduced its savings to the point where savings will not cover payments under the contract. The cost of energy, as shown in utility rates, can also be difficult to determine.
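The consumption-times-cost equation, and the way an error in a stipulated input propagates directly into reported savings, can be sketched as follows (all figures hypothetical):

```python
# The basic financial-savings equation described above, with hypothetical
# figures: each input may be measured or merely stipulated, and any error
# in a stipulated input flows straight through to the reported savings.

def financial_savings(baseline_kwh, post_kwh, rate_per_kwh):
    """Annual savings = (baseline consumption - post-retrofit consumption) x cost."""
    return (baseline_kwh - post_kwh) * rate_per_kwh

baseline_kwh = 12_000_000      # stipulated from pre-retrofit records
measured_post_kwh = 9_000_000  # metered after the improvements
rate = 0.08                    # $/kWh, stipulated at contract award

reported = financial_savings(baseline_kwh, measured_post_kwh, rate)

# If the stipulated baseline was overstated by 10 percent, the same
# equation overstates savings even though post-retrofit metering is exact.
true_baseline = baseline_kwh / 1.10
actual = financial_savings(true_baseline, measured_post_kwh, rate)

print(f"Reported annual savings: ${reported:,.0f}")
print(f"Actual annual savings:   ${actual:,.0f}")
print(f"Overstatement:           ${reported - actual:,.0f}")
```

The same sensitivity applies to the stipulated utility rate: a rate that later falls below the stipulated value shrinks actual savings while the reported figure is unchanged.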
Given their potential complexity, it is easy for energy services companies or federal officials to provide incorrect utility rates, which in turn will have important consequences for the level of savings. Rates must not only be determined accurately to establish the baseline but also be projected as accurately as possible into the future to determine eventual savings. These rates are projected 10 years into the future in ESPC contracts, according to agency officials, but the actual rates can change at any point during the contract period. Anomalies due to weather, fluctuations in energy prices, or other influences can affect the rates. In general, if utility rates go down or increase more slowly than projected, then the expected savings will not materialize. In essence, these rates are stipulated, and the agency bears the risk. Ensuring that the equipment installed under ESPCs has been adequately operated and maintained is essential for agencies and can affect whether savings cover costs. According to an expert who has worked on Army and Air Force ESPCs, calculations of guaranteed savings assume a high level of operation and maintenance activities, but the rapid loss of energy efficiency that occurs when equipment is not properly maintained can jeopardize savings. He said that a 10 to 20 percent degradation in savings typically occurs annually on a given ESPC in the event of improper operation and maintenance. He cited the virtual ruin of a chiller (for air conditioning) in only 3 years as a result of improper maintenance. Similarly, measurement and verification are critical in the longer term for achieving guaranteed savings. In determining these savings, however, energy services companies blend the use of measurement with stipulations in their reporting process. The expert who has worked on Army and Air Force ESPCs noted that, despite the importance of using measurement in addition to stipulation, there are numerous barriers to performing actual measurements.
These barriers include a lack of appropriate metering equipment and reluctance by energy services companies to perform measurement and verification because it might work against their interests. An Air Force official observed that, in his experience, energy services companies prefer stipulation and have limited the number of actual measurements for projects as much as possible. A report for FEMP examining seven ESPCs noted the reliance on stipulation in the projects’ measurement and verification plans. The primary reason for using this method was its low cost; however, the report concluded that the heavy use of stipulated savings left the agencies at risk of unrealized savings. To help agencies use stipulation correctly, FEMP issued supplemental guidance on measurement and verification in 2000 specifying that some stipulation may be used in lieu of measurement when there is a reasonable degree of certainty about the stipulated values, their contribution to overall uncertainty is small, and they are based on reliable and documented sources of information. DOD and FEMP recently established a special working group to address the uncertainties about actual savings. The Energy Savings Discrepancy Resolution Working Group, formed in late 2004, is developing approaches to compare projected and actual savings and to explain any deviations. Because it has just commenced these studies, the group has obtained preliminary results for only one project. The group found that the projected savings for this project were diminished by consolidations of agency missions, expanded construction, and new demands for energy that had nothing to do with the ESPC. Officials said they chose this project because it came with a well-developed baseline, which is often not available for careful evaluations of this sort. The statute governing ESPCs provides that “aggregate annual payments” under an ESPC may not exceed the amount the agency would have paid for energy without such a contract.
However, agencies differ in their interpretation of this statute. In practice, it remains uncertain whether contract payments may be made only from utility savings resulting from the ESPC or whether funds already earmarked for equipment replacement and other sources may also be used to reduce the length of the contract and its finance charges. Within DOE, for example, disagreement about the interpretation of the statute is evident in a FEMP guide on the one hand and an opinion provided by DOE’s Office of General Counsel on the other. According to a DOE departmental official, the main source of guidance for agencies regarding lump-sum payments is FEMP’s “Practical Guide to Savings and Payments in Super ESPC Delivery Orders,” issued in January 2003. Section 3.6 explains that agencies may use existing funds that would otherwise be used for operation and maintenance and repair and replacement projects (1) to increase ESPC project investment and include a more comprehensive set of energy-efficiency improvements than would be possible otherwise, or (2) to lower the financed amount and shorten the term, thereby reducing interest costs over the term. The section adds that one-time energy-related cost savings are often applied as a preperformance-period payment to the energy services company. However, such payments may also be scheduled as payments during the contract performance period. Similarly, section 4.4.1 of the FEMP guide states that if appropriated funds are available for general maintenance, operation, repair, and replacement of energy-consuming systems (as opposed to being earmarked for a specific project via a capital line item), they may be used for payments to the energy services company.
Adding that one-time savings and payments from general operation and maintenance and repair and replacement accounts merit further clarification, the discussion notes that the intent of the ESPC statute is to permit funds available in general operation and maintenance and repair and replacement accounts that could be used for energy-related purposes to be used for preperformance-period ESPC payments. It also notes that one-time payments scheduled during the performance period may not exceed the amount planned and budgeted in the general operation and maintenance and repair and replacement accounts for the avoided project. Despite the FEMP guide’s attempt to clarify allowable sources of funding for ESPC projects, some uncertainties remain. Even within DOE, for example, the General Counsel’s office expressed an opinion at variance with the FEMP guide. A memo from the General Counsel’s office to the assistant secretary for Energy Efficiency and Renewable Energy in August 2000 stated that, in the case of buyouts and buydowns in super ESPC projects, energy cost savings must exceed payments in each of the contract years. The memo added that, because ESPCs are performance contracts, payment is conditional upon the realization of energy cost savings. The memo stated that buydowns are in effect prepayments, which, in any contract year, may not exceed guaranteed and verified energy cost savings for that year. The memo concluded that prepayments have the effect of paying a contractor before the savings have occurred and that, under this analysis, such prepayments are prohibited. GSA’s policy regarding buydowns is drawn primarily from the FEMP guide. In GSA, the motivation for using the funds allowed by this guide is the low utility rates in some of its regions. These low utility rates reduce the savings accrued by a proposed project, necessitating a longer contract term so that sufficient savings can be generated to cover costs.
According to GSA officials, the agency has used upfront buydowns frequently, which has enabled it to reduce the cost and length of its contracts. They noted that even a small buydown has a large impact over the typical length of such contracts. GSA officials told us that the lack of clarity regarding financial terms in earlier FEMP guidance left GSA unable to buy down ESPCs in some cases. One of GSA’s main complaints in this regard stemmed from inconsistencies across its regions about which funding sources could be applied to buydowns. Following comments to FEMP and FEMP’s revision of the guidance, GSA officials noted that there have been no complaints since October 2002. Asked whether any further improvements are needed for the sake of clarity, GSA officials told us that there is still some uncertainty about how much can be financed and how much can be bought down on any given ESPC project. The Navy has no written policy on the use of buydowns and defers to contracting officers to determine when additional payments can be made. Because of the lack of clarity in this area, the Energy Programs division director at the Naval Facilities Engineering Service Center has asked for written guidance from the Navy but has not received it. The director told us that contracting officers evaluate the legislation and the terms of the contract and apply them to individual contracts and situations. He said that there have been three different situations in which the Navy has used buydowns. First, before or during construction, the Navy has identified avoided costs for equipment whose purchase is already included in the budget but that will not be needed as a result of an ESPC. Funds associated with these avoided costs can be used to reduce the amount of money owed under the contract because the Navy views these avoided costs as resulting directly from the ESPC.
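The effect GSA officials describe can be sketched with a standard level-payment amortization; the principal, rate, term, and buydown amount below are hypothetical:

```python
# Hypothetical illustration of why even a small upfront buydown matters
# over a long ESPC term: reducing the financed principal at award cuts
# the total interest paid over the remaining term.

def total_interest(principal, rate, years):
    """Total interest paid on a level-payment amortization."""
    payment = principal * rate / (1 - (1 + rate) ** -years)
    return payment * years - principal

financed = 10_000_000
rate, term = 0.07, 20
buydown = 500_000              # 5% of principal paid upfront from existing funds

without = total_interest(financed, rate, term)
with_bd = total_interest(financed - buydown, rate, term)

print(f"Interest without buydown: ${without:,.0f}")
print(f"Interest with buydown:    ${with_bd:,.0f}")
print(f"Interest avoided:         ${without - with_bd:,.0f}")
```

Because total interest is proportional to the financed principal at a fixed rate and term, a 5 percent buydown avoids 5 percent of the lifetime interest; alternatively, the same buydown could be applied to shorten the term rather than lower the payment.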
Second, during the actual performance period of the contract, the Navy has used other utilities budget monies from its working capital fund and mission funding to reduce the amount of money owed. However, it has stopped this practice because GAO raised concerns about the money not being linked with savings from the ESPC. Third, in cases of terminating specific energy-efficiency improvements or removing a number of years from a contract, the Navy has used funds from its utilities budget. The division director stated that greater clarity would be helpful regarding the use of utilities budget funds not directly associated with the ESPC to make additional performance-period payments, because such payments can reduce long-term financing costs and save money for the government. Agencies expressed concerns about their own expertise and information needs and about insufficient competition among financiers and energy services companies, all of which can affect agencies’ ability to protect the government’s financial interests in using ESPCs. Regarding expertise and information, agency officials often lacked the technical and contracting expertise and the information on past contracts needed to effectively evaluate ESPC proposals and monitor the contracts for savings. As a result, they often relied on the energy services companies, calling into question the quality of the deals the officials struck and their certainty that guaranteed savings were realized. Expertise was lacking mainly because of inexperience with ESPCs, and information was lacking mainly because agencies are not required to collect and disseminate it. Regarding insufficient competition, agencies believe there may not be enough competition among finance companies and energy services companies. As a result, agencies may be paying too much for financing and other terms of the contracts and may be getting poor service after the contracts have been signed.
In recognition of these shortcomings, the agencies are taking a number of corrective steps on an ad hoc basis and have developed an interagency steering committee to address some of them collectively. We did not assess the effectiveness of the agencies’ efforts. The project officials we interviewed who were able to marshal the expertise and information they needed believe that having adequate expertise and information is critical to the success of an ESPC. For example, officials at the Portsmouth Naval Shipyard, which undertook a $43 million ESPC in 1999 to upgrade its power plant system, relied on the U.S. Army Corps of Engineers’ Huntsville Center and the Navy contracting centers for technical and contracting support and on a consultant for engineering support and analysis of utilities forecasting. According to the Navy official who developed and oversees the Portsmouth project, the expertise provided by the three sources was essential to the success of the project. In particular, the consultant’s analysis of electricity rate projections, made possible by the consultant’s knowledge of utility markets in New England, allowed the Portsmouth officials to question the energy services company’s rate projections and negotiate more favorable rates for the ESPC. As previously discussed, developing and monitoring an ESPC are difficult tasks, requiring both technical and contracting expertise. In particular, for the development phase of ESPCs, we learned that agencies frequently had difficulty with technical responsibilities such as accurately calculating energy-use baselines and forecasting utility rates. For example, the Air Force and Army audits of ESPCs noted a number of instances in which baselines were incorrectly established, and numerous officials told us how difficult it is to establish these baselines accurately.
Along those lines, the manager of DOE’s departmental energy management program told us that officials at the project level do not always have the necessary expertise to forecast utility rates and, given the complexity of forecasting these rates, particularly over the long terms typical of ESPCs, it is easy for the officials to agree to incorrect estimates. ESPC experts at DOE’s Oak Ridge National Laboratory agreed, saying it may be unrealistic to expect a government contracting officer to be able to effectively negotiate some contract terms such as utility rates because they are technically difficult to understand and forecast. Regarding monitoring ESPCs once the energy-efficiency improvements are in place and operating, the measurement and verification reports the energy services companies submit to substantiate savings pose a challenge for agencies because of their technical nature. A number of the officials we interviewed told us that the level of expertise at the project level is often inadequate to perform a thorough evaluation of the measurement and verification reports. The manager of DOE’s departmental energy management program noted that, in the past, DOE has not reviewed measurement and verification reports. The challenge of effectively reviewing these reports, however, has led DOE to consider requiring that DOE headquarters become involved in measurement and verification evaluations. In addition, according to an expert in measurement and verification for the Air Force, lack of technical knowledge is the primary cause for agencies’ failure to conduct appropriate measurement and verification oversight. In this regard, a lack of basic adherence to measurement and verification plans has also been observed. The project manager of the Air Force audit noted that, among the eight bases included in his review, only one had properly followed its plan. 
Another area requiring technical expertise involves striking a careful balance between stipulation and measurement, and striking this balance has been difficult for agencies. According to DOE officials, key guidelines for measurement and verification do not define, for each energy-efficiency improvement, the best method for balancing the trade-offs between cost and accuracy. Consequently, the “right” amount of measurement and verification for many improvements remains uncertain and requires expertise to determine in each case. Agency officials have generally agreed that measurement and verification, at least in the first years of using the super-ESPC contracts, tended to rely more heavily on stipulation than on actual measurements for determining long-term savings. An Air Force official told us that, in his view, the heavy reliance on stipulation during the earlier years of the program worked to his agency's disadvantage with regard to savings. In more recent contracts, however, he believes that a better balance between stipulation and measurement has been reached because there has been a greater reliance on expertise in this area. In some cases, we were told, the officials may have the technical, but not the contracting, expertise they need. Managers of the VA's ESPC program are confident that the agency's project-level officials have enough engineering know-how to understand the technology and construction process involved with ESPCs; however, the managers are concerned that project-level officials do not understand the financing, markups, or other aspects of the business end of ESPCs well enough, giving the energy services companies an advantage over the agency officials, who, in turn, may not be able to make the best business decisions for the government. 
For example, according to the manager of DOE's departmental energy management program, in his experience, markups and financing rates often go unchallenged by project-level staff, even though they are negotiable, because the project officials do not have the expertise to challenge them. Furthermore, officials who oversee GSA's energy program told us that GSA energy managers have had to negotiate with energy services companies on markups and financing terms, even though they were not adequately trained in those contracting techniques. Related to expertise, ESPC project-level officials also may not have at their disposal the information that would help them develop the best possible contracts and effectively oversee contract implementation. A number of officials we interviewed said they had neither benchmarking data on prices and other contract terms agreed to for other ESPCs, nor knowledge of “lessons learned” on other contracts, making it difficult for the officials to evaluate project proposals and to negotiate effectively. Of the seven agencies included in our review, DOE, GSA, and the Navy compile and maintain some data on their ESPCs in one location. Although individual project files contain some data that could be used for benchmarking prices and terms, agencies are not required to compile and disseminate such information across their ESPCs, and the other four agencies told us they do not. Similarly, as discussed previously, although agencies are required to monitor the performance of energy services companies on individual projects to determine whether expected savings are being realized, they are not required to keep track of that information at the agency level. As a result, the agencies may not have historical information on contract performance to use in choosing energy services companies and developing terms of the contracts, such as the measurement and verification plans. 
Officials responsible for ESPCs do not always have the expertise and related information they need, for a number of reasons. Many of the project-level officials are inexperienced with ESPCs. In that regard, several of the military project officials we interviewed said that their current experience is their first encounter with an ESPC, and the limited training they received did not adequately prepare them. Furthermore, DOD officials told us that because military staff are frequently reassigned after a few years, it is not likely that one person will be on site throughout the entire ESPC contract, and the officials expect their replacements to be similarly inexperienced with the contracts. Further exacerbating the problem, we were told that many of the military and civilian officials charged with developing and overseeing ESPCs work on the contracts only part-time, so the efforts they can devote to the process are limited. Most agencies do not require their officials to use the contracting centers in DOD and FEMP when developing ESPC projects; nonetheless, most of them do. Of the 27 projects whose officials discussed in detail their use of contracting centers or other sources of expertise, officials from 26 said they found the expertise helpful. However, for 13 of the 26 projects that got assistance, officials cited particular areas of the ESPC development process for which they could have used more expertise. For example, GSA and VA officials told us that their FEMP project facilitators, who cost the agency $30,000 for each project, did not perform some functions that the agencies thought would have been beneficial, such as preparing estimates of project costs or advocating for the agencies during contract negotiations. Some project officials who used the contracting centers found the centers to be inadequately funded. 
For example, one Air Force project official told us that the Air Force’s center provided the project with excellent support, but could not visit the project site due to resource constraints. Similarly, another official told us he did not consider using the U.S. Army Corps of Engineers' Huntsville Center because he thought, based on his previous experience with the center, that it was understaffed and would not be able to devote enough effort to the project. As a result, we were told that agencies often rely on the energy services companies to provide much of the needed expertise to develop and monitor the ESPC projects, potentially raising a conflict of interest. One company representative told us that agency officials are typically not familiar with the energy savings potential of the new equipment being proposed for installation, for example, and another representative said that agencies need more ESPC expertise. A number of agency officials agreed that they rely on the energy services companies because they lack certain expertise themselves. For example, an Air Force official told us that project officials on remote air bases tend to have less-experienced staff and rely on the energy services companies for essential ESPC activities such as performing life-cycle cost analyses. Some agency officials we spoke with expressed concerns that there may not be enough competition among finance companies and that this could lead to higher than necessary financing costs for ESPCs. Some agency officials told us they believe the financing rates for ESPCs are high compared with rates to finance energy-efficiency improvements by other means. For example, according to VA ESPC program managers, the rates VA has negotiated to purchase energy-related equipment via another financing mechanism—enhanced–use leases—are generally lower than its ESPC rates. 
For the 241 ESPC delivery orders for which we received financing data, financing rates ranged from 5 to 13 percent, with an average across all projects of almost 8 percent. According to an ESPC expert at DOE's Oak Ridge National Laboratory, improving the financing of ESPC projects is one of the most important ways to achieve a better deal for the government. Agency officials stated that there may be too few companies available to finance ESPCs. For example, the head of the Navy's ESPC program told us that the Navy has had difficulty finding investors for its ESPCs and needs more investors in the program. VA officials responsible for overseeing the agency's ESPCs echoed this concern. They believe there are only three or four “boutique” companies that specialize in financing ESPCs, and that the absence of more financing companies drives up the financing rates. They cited the findings of a consultant the agency hired to review ESPCs, who reported that the lack of competition among energy financiers creates higher rates, and the officials believe that injecting more competition into the process may result in better rates. The head of FEMP's Super ESPC Program estimates that eight financiers have provided bids for financing ESPCs. Agency officials also said they have seen little evidence that the energy services companies are seeking out the most favorable financing rates. Historically, energy services companies were not required to provide documentation of having sought favorable rates. According to a contracting officer who reviews the Army ESPCs, the agency has sometimes obtained better rates when it required at least three quotes from financiers. According to the Air Force and Navy officials responsible for reviewing ESPC proposals, some proposals did not contain sufficient information to adequately determine whether the financing costs were reasonable. 
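To put the reported rate spread in perspective, a level-payment amortization sketch shows how much the financing rate alone can change what the government pays over a contract's life. Only the 5, 8, and 13 percent rates come from the delivery-order data above; the $10 million project size and 15-year term are hypothetical assumptions chosen for illustration.

```python
def annual_payment(principal, rate, years):
    """Standard level-payment amortization formula for an annual-payment loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 10_000_000  # hypothetical $10 million ESPC project
years = 15              # hypothetical contract term

# Compare the low, average, and high rates reported for the 241 delivery orders.
for rate in (0.05, 0.08, 0.13):
    pay = annual_payment(principal, rate, years)
    print(f"{rate:.0%}: annual payment ${pay:,.0f}, total paid ${pay * years:,.0f}")
```

Under these assumptions, the spread between the lowest and highest reported rates roughly adds several million dollars in total payments, which is why financing competition matters to the overall economics of an ESPC.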
The Deputy Manager of DOE’s departmental energy management program told us that including documentation of competition among financiers in the ESPC proposals is needed to provide better assurances that the government is getting the best financing rates. In his experience, an energy services company often wants to use a single financier for all of its ESPCs, so he believes little or no competition for financing exists for those contracts. The energy services company representatives rebut this contention, saying they consistently seek the most favorable financing for ESPCs. They told us that lower financing costs allow more of the project’s savings to be spent on energy-efficiency improvements, from which the companies profit, rather than on finance costs. Other agency officials and representatives of finance companies and an energy services company have offered other explanations for why finance rates for ESPCs are as high as they are. For example, according to FEMP and GSA ESPC program managers, as well as representatives of the three financing companies in our review, agency officials generally do not understand that certain characteristics of ESPCs increase the risk of financing those contracts and may drive the rates up. Chronic late payments by agencies are one such characteristic. Others include the possibility that the agency will withhold its payments if the savings guaranteed in the contract are not realized, the additional uncertainty about contract performance due to the long contract terms typical of an ESPC, and the possibility that the agency will make unscheduled payments that will reduce the financier’s return on the contract. According to GSA’s ESPC Program Manager, these risk factors limit the number of companies willing to finance ESPCs, and the complexity of the contracts drives financing rates higher. 
Within the scope of this work, we were unable to determine the extent, if any, to which a lack of competition, rather than other factors, has caused finance rates for ESPCs to be higher than rates for other methods of financing energy-efficiency improvements. However, given the number of concerns raised by agencies, we believe this question should be explored in more depth. Some agency officials also expressed concern that there may not be enough competition among energy services companies. In general, they told us there may be too few companies on the lists, and those companies may be charging prices that are too high and providing inadequate services. Regarding the number of companies available, some officials told us that often only the large companies on the lists are willing to undertake ESPCs, effectively limiting agencies to three or four companies to choose from. FEMP ESPC program managers affirmed that it may be only the largest companies that can afford the extended negotiation and contract implementation periods of ESPCs before getting paid for their services. Further, GSA ESPC managers told us they have received complaints from energy services companies that would like to take on smaller ESPCs but believe they are disadvantaged in obtaining that business because they are not on the lists and have not been given a sufficient chance to compete for that status. In that regard, officials from some agencies told us that the companies approved for the lists often will not bid on projects unless the projects are worth at least $1 million to $2 million. As a result, the agencies must forgo undertaking the smaller projects or combine multiple locations into a single project to meet the threshold. According to DOE and GSA officials, it is more difficult to manage projects with multiple locations. 
In addition, according to the head of FEMP’s Super ESPC program, multiple energy services companies that did not compete in the original super ESPC competitions have communicated their desire to participate in a recompetition and to be added to lists of prequalified energy services companies. Some agency officials linked a perceived lack of competition among energy services companies with high markups and prices for other components of ESPCs and poor services—especially after the contract is signed. Regarding markups, energy services companies charge a percentage of the cost of each energy-efficiency improvement to cover company costs for, among other things, overhead, sales, markup on subcontractor-supplied materials and labor, and profit. Both the Army and FEMP super ESPCs contain pre-negotiated markup maximums that are intended to cap the amount of markup that the energy services company can add to the basic price of each energy-efficiency improvement covered by the contracts. FEMP’s markup maximums typically range from 26 to 31 percent—but may be as low as 5 percent and as high as 100 percent—depending on the energy-efficiency improvement on which they are based and the region of the country the improvement is implemented. The markup maximums the U.S. Army Corps of Engineers’ Huntsville Center provided to us range from 15 to 30 percent. A number of agency officials told us that, as a practical matter, the energy services companies resist agencies’ efforts to negotiate markups that are lower than the caps. According to an Air Force contracting center official, the Air Force super ESPCs do not contain prenegotiated markup maximums for energy-efficiency improvements and the negotiators that use the Air Force super ESPCs typically obtain more favorable markups than those who use the Army Corps of Engineers’ Huntsville Center’s or FEMP’s super ESPCs. To test this assertion, we examined data on markups in ongoing ESPCs that agencies reported to us. 
The reported markups ranged from 10 to 40 percent for projects under FEMP’s super ESPC, from 13 to 32 percent for the U.S. Army Corps of Engineers’ Huntsville Center’s, and from 9 to 29 percent for the Air Force’s. However, because the agencies did not report markups for all of the projects in our review and because data did not tie markups to individual energy-efficiency improvements, we could not determine whether the projects using the Air Force super ESPC actually resulted in more favorable markups. With regard to prices of some components of ESPCs, a number of agency officials we interviewed expressed concern about their ability to negotiate reasonable prices in their ESPCs. DOD agencies are required to give all energy services companies prequalified for a super ESPC an opportunity to participate in a limited competition at the initial proposal stage of a project. The competition is limited because, ostensibly, the companies have already passed government scrutiny in order to be included on the super contract. Although civilian agencies do not have the same requirement, they may choose to conduct a limited competition, and most did for the projects in our review. For the limited competition, the agencies provide the companies with such information as current utility rates and the types of improvements the agency is considering that the companies can use to develop their initial proposals. The initial proposals contain preliminary cost estimates and other information the agencies use to narrow the field to the single company it will do business with on the project. Prices are not discussed with any specificity until after the selected company has prepared its formal project proposal, even though the formal proposal can take more than 6 months to complete and review. By that time, we were told, the agency may feel pressure to continue with the company, possibly accepting prices that are too high because it is too costly to start over with another company. 
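The markup caps discussed above are simple percentage arithmetic: the contract price of each improvement is the base price plus a negotiated markup, which may not exceed the prenegotiated maximum. A minimal sketch follows; the 26 and 31 percent figures reflect FEMP's typical cap range as reported, while the $500,000 base price is an invented illustration.

```python
def apply_markup(base_price, markup, cap):
    """Contract price for one improvement, enforcing a prenegotiated markup cap."""
    if markup > cap:
        raise ValueError(f"markup {markup:.0%} exceeds the {cap:.0%} cap")
    return base_price * (1 + markup)

base = 500_000  # hypothetical subcontractor price for one energy-efficiency improvement
print(apply_markup(base, 0.26, 0.31))  # -> 630000.0
```

Because companies tend to negotiate up to the cap, the difference between a 26 percent and a 31 percent markup on this hypothetical improvement alone is $25,000, which illustrates why agencies' leverage on markups matters.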
Because agencies lack the ability to force the energy services companies to compete more rigorously on prices, ESPCs may cost more than they should. Finally, some officials complained about the unsatisfactory services provided by the energy services companies. For example, one Air Force energy manager told us that the quality of work by the energy services company declined substantially after the delivery order was awarded. According to this official, the energy services company lacked the internal capability to properly do the work yet resisted hiring additional staff. In addition, the company did not use the subcontractor identified in the project proposal; as a result, the agency could not determine if the costs claimed by the company were valid. Other problems cited by officials included inflated costs and over-billing for equipment and labor, insufficient and/or redundant design work, substitution of cheaper materials, untimely responses, and disruptive staffing changes. None of the companies on the super ESPC lists have had to recompete for their positions on the lists since they won them 6 to 9 years ago, and the recompetitions planned for them will not occur for another 1 to 2 years. The companies on those lists have not changed unless they merged with others, went out of business, or chose to be taken off the lists. While there are no requirements for how frequently the super ESPCs must be put out for competition, GSA's practice regarding its contracts for the Federal Supply Schedule, which are multiyear contracts similar to the super ESPCs, is to renegotiate the contracts every 5 years to help ensure the contracts remain competitive. According to the head of FEMP's Super ESPC Program, DOE policy calls for recompeting contracts such as the super ESPCs every 5 years. Our own analysis of agency data on ESPC use indicates that ESPC contracts appear to be highly concentrated among relatively few companies in some regions. 
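The concentration measure used in this analysis, the Herfindahl-Hirschman Index (HHI), is straightforward to compute: it is the sum of the squared market shares of every firm in the market. A minimal sketch follows; the regional shares below are invented for illustration and are not the report's data.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares (each in percent, 0-100)."""
    return sum(s ** 2 for s in shares_percent)

# Hypothetical region: four companies' shares of ESPC contract dollars.
region_shares = [40, 30, 20, 10]
print(hhi(region_shares))  # -> 3000
```

Under the Department of Justice and Federal Trade Commission merger guidelines in effect at the time of this review, an HHI above 1,800 indicated a highly concentrated market (the thresholds were later raised in 2010), so the four-firm market sketched above would count as highly concentrated.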
We calculated the Herfindahl-Hirschman Index (HHI)—an index used by the Federal Trade Commission and the Department of Justice to evaluate mergers—for each of the six regions defined by the FEMP super ESPCs. In four of the six FEMP regions, the HHI was above the level at which industries are typically considered to be moderately to severely concentrated. While such measures do not by themselves indicate a lack of competition, they do suggest that a more complete evaluation of the competitiveness of the ESPC contracts is warranted. Individual agencies have taken steps to address concerns about expertise and related information and competition. Among other steps, to bring expertise and information from previous ESPCs to bear on new ones being undertaken by their agencies, DOE and the Navy each require ESPC proposals be reviewed by experts either in-house or at FEMP and do not allow the projects to proceed into implementation without approval of these experts. DOE and GSA compiled lists of lessons learned and have shared them among project officials within their agencies. In addition, the Air Force, the Army, the Navy, DOE, GSA, and VA each have begun requesting evidence of competition for financing rates before they will agree to an ESPC for their agencies. Furthermore, rather than relying exclusively on the super ESPC contracts, officials from VA are pursuing alternatives to introduce price competition into the process. The contracting centers have also taken steps to bolster the expertise and information available to their officials and to address the competitiveness problems with the super ESPCs. Most notably, all the centers have, among other things, issued guidance to help agencies with developing and monitoring their ESPCs and begun requiring that project proposals contain documentation of multiple financing bids. Furthermore, the centers are working to have newly competed super ESPCs available to agencies between fiscal years 2007 and 2008. The U.S. 
Army Corps of Engineers’ Huntsville Center plans to have its new super ESPCs in place by the beginning of fiscal year 2008 and has begun that process. According to a U.S. Army Corps of Engineers’ Huntsville Center contracting office official, the center has not recompeted its super ESPCs because the current contracts do not expire until the end of 2007 and developing the revisions to the contracts has proven to be a slow process that requires coordinated input from multiple ESPC experts and contracting centers. FEMP also plans to recompete its super ESPCs. It plans to begin the process in 2006 and have the new contracts in place sometime in 2007. According to the FEMP Super ESPC Program manager, FEMP has not recompeted its super ESPCs to date primarily because FEMP has focused its efforts on helping agencies undertake successful ESPCs and developing guidance for the agencies to use. The Air Force does not presently plan to recompete its contracts but will reconsider that decision over the next 2 years. According to managers of Air Force’s contracting center, Air Force ESPC projects are increasingly using FEMP’s super ESPCs because doing so provides the project-level officials with more contract and energy services company options. Consequently, rather than re-competing the Air Force super ESPCs, the managers may begin to phase out their agency’s use of the Air Force super ESPCs as they increasingly use FEMP’s with Air Force-specific clauses added. Although most of the steps agencies and contracting centers have taken to address expertise, information, and competition needs have been ad hoc, they have recently begun to address them more collectively via an interagency steering committee and its working groups. The purposes of the steering committee include sharing experiences and lessons learned among the federal agencies that use ESPCs the most, identifying process and procedural improvements, and developing best practices. 
The steering committee plans to develop, by June 2005, performance metrics by which its efforts can be evaluated. In addition, the steering committee and its working groups have accomplished some of their objectives to date. For example, the working group on measurement and verification issued a template standardizing the measurement and verification process. Each of the contracting centers used some of the group's recommendations when it developed new measurement and verification guidance. See table 3 for a more complete list of steps the contracting centers have taken. We believe that many of the steps the individual agencies and contracting centers have taken to address expertise, information, and competition issues promise to help improve those areas. However, because of their ad hoc nature and because many are relatively new and untested, we did not attempt to assess their effectiveness. ESPCs provide a valuable and practical tool that federal agencies use to meet energy reduction, environmental, infrastructure, and other goals. Clearly, agencies that have used ESPCs to install more efficient, energy-saving equipment have reduced their energy consumption and associated environmental impacts. Further, by using private financing, agencies have also been able to more quickly and consistently replace an aging and energy-wasting infrastructure—an infrastructure that the agencies have identified in their capital management plans as being in need of billions of dollars of repair and restoration. While using ESPC-financed projects has permitted agencies to reduce energy consumption and achieve other goals, the extent to which savings cover costs as required by legislation remains uncertain. The complexity of ESPCs accounts for much of this uncertainty. ESPCs are complicated because of the wide array of technical, financial, legal, and energy-related issues that must be resolved in both the short and the long term. 
Because of this complexity and the cost of more extensive reliance on actual measurements, agencies have tended in the past to defer to the expertise of energy services companies and the use of stipulation in lieu of measurements. In doing so, they may have paid contractors for energy savings that did not occur or may have negotiated contracts that are more expensive than necessary. Limited agency audits and our interviews have disclosed indications of these problems in dozens of projects. Since most agencies have not audited their use of ESPCs and broad performance information and documentation are unavailable, we could not determine how widespread these problems are. Without comprehensive information on actual performance of the contracts once they have begun to unfold, however, the agencies’ task of overseeing the contracts becomes difficult. In turn, the lack of comprehensive information on ESPC performance makes it more difficult for the Congress to determine the level of support it should lend to agency use of the financing mechanism. Finally, because DOE reports to the Congress about agencies’ progress toward achieving energy goals, the lack of comprehensive data on the results of ESPCs also reduces congressional awareness in this area. In a more general context, additional information would be useful in comparing the costs and benefits of ESPCs relative to alternative financing mechanisms. This information could include, among other things, the effects of deterioration of energy efficiency savings in the absence of measurement and verification and delays in obtaining up-front appropriations relative to obtaining funds through ESPCs. In response to these problems, agencies have begun to recognize the importance of developing and using their own expertise more effectively, but this has occurred only recently and, at this point, they have not ensured that it is brought to bear during negotiations and in the longer term. 
The ability to correct these problems requires the availability of high-quality information and the expertise to use it effectively during negotiations and throughout the life of these long-term contracts. In developing and using appropriate expertise and information, agencies can also begin to assemble better information about governmentwide experiences with ESPCs, including ways of improving such areas as measurement and verification. They can also draw conclusions regarding the effectiveness of agencies' working relationships with individual energy services companies, which could provide another valuable tool for agencies to consider. Finally, as ESPC use continues, sharing best practices or lessons learned in all of these areas would go a long way toward making ESPCs as cost effective as possible while also helping to ensure that the federal government's financial interests are protected. Absent further efforts to rely on appropriate expertise and improve the quality of information, agencies will continue to be at a disadvantage in negotiating effective ESPCs and less likely to achieve long-term energy and financial savings. Agencies have expressed concerns about the adequacy of competition among financiers and energy services companies in developing ESPCs and, consequently, about their ability to protect their interests. Agency officials and others expressed concerns that financing costs may be too high because there may be too few companies that finance ESPCs and because energy services companies may not seek the most favorable financing. Other problems, such as the length of time between competitions for the approved list of energy services companies and the lack of price competition inherent in using the super ESPCs, also reinforce these concerns. 
Agency officials have taken some steps to address these concerns, but the question of sufficient competition points toward the need for further measures such as requiring greater competition among financial service companies to potentially reduce interest rates and putting the super ESPCs out for competition more frequently. Differing agency interpretations of the law establishing ESPCs have contributed to agency uncertainties about the use of funding sources other than savings for reducing investments in ESPCs through upfront payments. Within DOE, inconsistencies and uncertainties about interpretation of the statute are apparent. In practice, some agencies believe that contract payments may be made only from utility savings resulting from the ESPC while other agencies make a lump-sum payment on the contract—from funds already earmarked for equipment replacement or from other sources—to reduce the length of the contract and finance charges. In our view, these inconsistencies reflect a lack of clarity about the use of down payments in general and what does—or does not—constitute a legitimate source of funds for such down payments if they are allowed. To ensure that agencies use ESPCs as the Congress intends, we recommend that the Congress consider revising the relevant statute to more clearly define the components of costs that must be covered by savings. In particular, the Congress could clarify whether agencies may make lump sum payments using funds other than their current year utility savings. 
To better ensure that federal agencies undertake only those ESPCs having the greatest likelihood that savings will cover costs, and that the agencies negotiate the best possible contract terms and monitor the contracts properly, we are making recommendations to the heads of those agencies included in our review, namely the Secretaries of Defense, Energy, and Veterans Affairs; the Attorney General of the Department of Justice; and the Administrator of the General Services Administration. Our recommendations focus on the areas of information, expertise, and audits: Collect and use ESPC-related data more effectively by (1) compiling information on key contract terms—such as interest rates and markups for energy-efficiency equipment—for each ESPC and, as a key part of best practices, making that information accessible to agency officials negotiating subsequent ESPCs and (2) tracking actual costs, verified savings, and any changes to ESPC projects that may affect these costs and savings. Ensure that the agency officials responsible for ESPC decision-making use appropriate expertise when they undertake an ESPC. If the officials do not have sufficient expertise themselves, they should be required to obtain it from such independent sources as a centralized pool within the agency; the contracting centers of the Air Force, the U.S. Army Corps of Engineers, the Navy, and FEMP; or private parties. The costs of acquiring this expertise should be considered in deciding whether to use an ESPC. Require, as appropriate and in line with available resources, that inspectors general or other audit offices conduct audits of ESPC projects to ensure the projects are achieving their expected results. 
Because the contracting centers can play an important role in helping the agencies develop and monitor their ESPCs, we recommend that the secretaries of Defense and Energy require the contracting centers to work with the agencies that use them to ensure that the contracting centers have the information and expertise needed to effectively develop and monitor their ESPCs; and continue and expand their ongoing efforts regarding competition, including taking steps such as re-competing the super ESPCs as soon as possible and then more regularly. Finally, to strengthen the information available to the Congress for assessing the progress and effectiveness of ESPCs, we recommend that the Secretary of Energy collect more extensive information on agencies’ ESPCs, including such critical elements as cumulative verified savings and costs, and include that information in its annual report to the Congress. As a part of this effort, we also recommend that the Secretary compare projects funded by ESPCs with projects funded by upfront appropriations to determine their relative costs and benefits. Specifically, the Secretary should determine, among other things, the effects of deterioration of energy efficiency savings in the absence of measurement and verification and delays in obtaining upfront appropriations relative to obtaining funds through ESPCs. We provided the Departments of Defense, Energy, Justice, and Veterans Affairs, and the General Services Administration, with a draft of this report for their review and comment. DOD, DOE, VA, and GSA provided written comments, which are presented in appendixes II through V. The Department of Justice responded by email on June 2, 2005. All of the agencies generally concurred with the findings, conclusions, and recommendations and stated their intention of implementing the recommendations. The agencies also submitted technical and clarifying comments, which we have incorporated as appropriate. 
In addition, DOE expressed concerns in two areas. First, regarding our discussion about confusion over the allowable sources of funding for ESPCs, DOE expressed the view that its General Counsel’s office’s opinion regarding prepayments was not at variance with FEMP guidance as we reported. Nevertheless, the agency noted that it will take steps to ensure that FEMP guidance is consistent on this point to avoid future confusion. Furthermore, DOE supports our recommendation that the Congress more clearly define the components of ESPC costs that must be covered by savings and the agency stated that it will address the issue in a report to the Congress on ESPCs that is currently in the review and approval process within the agency. We have added language to the report noting DOE’s disagreement with our discussion of this issue. Second, DOE expressed concern that FEMP does not have authority to do more to facilitate oversight of ESPCs, as we recommended. While we recognize DOE’s concern with taking on additional oversight responsibilities, we note that, in commenting on our draft report, all of the agencies stated their intention to work cooperatively with DOE and the other agencies to implement our recommendations. In recommending that DOE facilitate oversight of ESPCs, we intended that the agency take such actions as collecting data on verified savings and costs and reporting such information to the Congress, as well as to the agencies themselves, to aid the Congress and the agencies in their ESPC oversight actions. We believe that it is appropriate at this point for DOE and the other agencies to continue to use a cooperative approach, such as through the Federal ESPC Steering Committee, to develop and implement consistent and best practices for ESPCs. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. 
At that time, we will send copies of this report to the appropriate congressional committees; the Secretaries of Defense, Energy, and Veterans Affairs; the Attorney General; the Administrator, GSA; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

We were asked to determine (1) the extent to which federal agencies used ESPCs; (2) what energy savings, financial savings, and other benefits agencies expect to achieve; (3) the extent to which actual financial savings from ESPCs cover costs; and (4) what areas, if any, require steps to protect the government’s financial interests in using ESPCs. To satisfy these objectives, we included in our review any ESPCs that agencies undertook in fiscal years 1999 through 2003. We did not perform formal cost-benefit analyses of individual ESPC projects or of ESPCs as a whole because of data limitations. To address the data analysis component of these objectives, we first obtained basic contract data from the four federal contracting centers that assist agencies with ESPCs: the Air Force Civil Engineer Support Agency, the U.S. Army Corps of Engineers’ Huntsville Center, the Naval Facilities Engineering Service Center, and DOE’s Federal Energy Management Program (FEMP), which reflect the majority of all ESPCs undertaken during fiscal years 1999 through 2003. We did not completely assess these data for reliability; however, we reviewed the steps that the contracting centers take to ensure data reliability and determined that these steps were sufficient for our reporting purposes.
We obtained more detailed contract data for the same period from the seven federal agencies in our review having the most facility floor space and the highest energy use and therefore the highest potential to use ESPCs. These agencies were the Departments of the Air Force, the Army, the Navy (including the Marine Corps), Energy, Justice, Veterans Affairs, and the General Services Administration. Before analyzing the contract data, we combined the data from the contracting centers and the agencies into a single data set. Because some agency contract data could also be included in the contracting center data, we identified the projects that appeared to have duplicate records. We asked each agency to confirm those records that were duplicates and, using our best judgment, retained those records with the most complete information. To address the objectives overall, we interviewed and obtained documentation from a wide range of stakeholders. From the seven agencies and four contracting centers, we talked with officials at headquarters, in regions, and at specific project sites. We also discussed the issues with officials from the Congressional Research Service; Oak Ridge National Laboratory; Lawrence Berkeley National Laboratory; the Defense Energy Support Service Center; and the states of Maryland and Louisiana, both of which use ESPCs extensively. In addition, we talked with officials from the energy services and financial services sectors and an academic expert knowledgeable about ESPCs. We also reviewed relevant legislation, regulations, policies, and agency procedures. We also reviewed studies by the Oak Ridge National Laboratory and the Lawrence Berkeley National Laboratory that analyzed the costs and benefits of ESPCs and compared net benefits of using ESPCs to finance energy savings improvements with the net benefits of using direct appropriations. 
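The record-matching step described above, in which we combined contracting-center and agency data and retained the most complete of any duplicate records, can be illustrated with a brief sketch. The field names and sample records below are hypothetical, not drawn from the actual contract data:

```python
# Hypothetical sketch: merging contracting-center and agency ESPC records and
# keeping the most complete of any duplicates. Field names (agency, site,
# award_year, cost) are illustrative, not the actual data dictionary.

def completeness(record):
    """Count non-empty fields; used to pick the most complete duplicate."""
    return sum(1 for v in record.values() if v not in (None, ""))

def dedupe(records):
    """Group records by a candidate key and keep the most complete one."""
    best = {}
    for rec in records:
        key = (rec.get("agency"), rec.get("site"), rec.get("award_year"))
        if key not in best or completeness(rec) > completeness(best[key]):
            best[key] = rec
    return list(best.values())

center_data = [
    {"agency": "Navy", "site": "A", "award_year": 2001, "cost": 1_200_000},
]
agency_data = [
    {"agency": "Navy", "site": "A", "award_year": 2001, "cost": None},
    {"agency": "DOE", "site": "B", "award_year": 2002, "cost": 800_000},
]

combined = dedupe(center_data + agency_data)
# The Navy record with the populated cost field is the one retained.
```

Grouping on a candidate key and scoring completeness is one simple way to keep the best of several overlapping records; the actual review confirmed apparent duplicates with each agency rather than relying on an automated rule alone.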
To evaluate these studies, we interviewed some of the authors and reviewed other studies and reports that the authors had referred to and that supported some of the assumptions they used to model the net benefits. In particular, we asked the authors of the Lawrence Berkeley National Laboratory study about their support for the assumption that energy savings decay over time in the absence of monitoring and verification. They referred us to a body of literature on energy commissioning (essentially, energy audits of buildings) in which there is evidence of energy savings decay. We reviewed several studies from this literature and concluded that there was sufficient evidence of savings decay to warrant inclusion of the Lawrence Berkeley National Laboratory study results, with the caveat that we could not definitively determine the extent of savings decay or the extent to which decreased savings decay and other benefits from ESPC-funded projects may offset the significant savings from upfront funding that we had previously found in six case studies. To better understand specific benefits, problems, and suggested improvements for ESPCs, as well as to evaluate whether savings were covering costs, we interviewed and obtained appropriate documentation from officials who either negotiated and/or currently manage specific ESPC projects at 15 geographically dispersed locations. We judgmentally selected these officials from lists of knowledgeable officials provided to us by the ESPC program managers of each agency. We also discussed these issues in general for an additional 10 projects with the officials who manage DOE’s departmental ESPCs.
Furthermore, we reviewed 13 audit reports conducted by the Army Audit Agency or Air Force Audit Agency and discussed the results with auditors involved in the reviews; reviewed a report by a consultant hired by VA to assess that agency’s use of ESPCs; reviewed relevant GAO reports and consulted with subject matter experts at GAO; and reviewed other reports and information on ESPCs identified by searching the literature. We conducted our work from January 2004 through May 2005 in accordance with generally accepted government auditing standards.

In addition to the individual named above, Dan Haas, Dennis Carroll, Randy Jones, Hugh Paquette, Frank Rusco, Karla Springer, Barbara Timmerman, and Jena Whitley made key contributions to this report. Chris Bonham, Carol Henn, Cynthia Norris, and Jena Sinkfield also contributed.

The federal government is the nation's largest energy consumer, spending, by latest accounting, $3.7 billion on energy for its 500,000 facilities. Upfront funding for energy-efficiency improvements has been difficult to obtain because of budget constraints and competing agency missions. The Congress in 1986 authorized agencies to use Energy Savings Performance Contracts (ESPCs) to privately finance these improvements. The law requires that annual payments for ESPCs not exceed the annual savings generated by the improvements. GAO was asked to identify (1) the extent to which agencies used ESPCs; (2) what energy savings, financial savings, and other benefits agencies expect to achieve; (3) the extent to which actual financial savings cover costs; and (4) what areas, if any, require steps to protect the government's financial interests in using ESPCs. Although comprehensive data on federal agencies' use of ESPCs are not available, in fiscal years 1999 through 2003, we found that 20 federal agencies undertook 254 ESPCs to finance investments in energy-saving improvements for 5 to 25 years.
Through the ESPCs, federal agencies plan to make annual payments amounting to at least $2.5 billion spread over the lifetime of the contracts. Agencies expect to achieve benefits that include energy savings worth at least $2.5 billion over the life of the contracts, as well as other benefits that cannot be easily quantified, such as improved reliability of the newer equipment over the aging equipment it replaced, environmental improvements, and additional energy and financial savings once the contracts have been paid for. While these benefits could be achieved using upfront funds and with lower financing costs, agencies stated that they generally have not received sufficient funds upfront for doing so and see ESPCs as a necessary supplement to upfront funding in order to achieve the benefits cited. Agencies believe that ESPCs also provide unique benefits such as a partial shift of risk from agencies to private energy services companies and a more integrated approach to providing efficiency measures. Agencies structure ESPCs so that financial savings cover costs and they reported that many do. However, GAO could not verify that conclusion using the data on ESPCs, and GAO work and agency audits disclosed ESPCs in which unfavorable contract terms, missing documentation, and other problems caused GAO to question how consistently savings cover costs. Furthermore, differing interpretations of the law establishing ESPCs about what components of costs must be paid for from the savings generated by the project or may be paid for using other funding sources have contributed to uncertainties about whether savings are appropriately covering costs. GAO identified concerns in the areas of expertise and related information and competition that are fundamental to ensuring that savings cover costs and to protecting the government's financial interests in using ESPCs. 
According to agency officials, they often lacked the technical and contracting expertise and information (such as interest rates and markups) to negotiate ESPCs and to monitor contract performance in the long term. The officials also think there may be insufficient competition among finance and energy services companies and that this could lead to higher costs for ESPCs.
The judiciary’s system of courts consists of the Supreme Court, 12 regional circuit courts of appeals, 94 district courts, and 91 bankruptcy courts, as well as courts of special jurisdiction, including the Court of Appeals for the Federal Circuit, the Court of International Trade, and the Court of Federal Claims. The judiciary also includes the following agencies:

- Administrative Office of the U.S. Courts (AOUSC): provides a variety of support services to U.S. courts, including administrative, technological, and legal services.
- Federal Judicial Center (FJC): an independent agency within the judiciary that conducts research and evaluation of judicial operations and provides education and training.
- U.S. Sentencing Commission (USSC): an independent agency within the judiciary that provides education and training on sentencing, and promulgates sentencing policies, practices, and guidelines for the federal criminal justice system.

From fiscal years 2003 through 2014, judiciary obligations for travel ranged from $45.3 million to $82.8 million, or 0.72 to 1.16 percent of total judiciary obligations. Table 1 shows total and travel obligations for the judiciary from fiscal years 2003 through 2014. During the fiscal year 2013 sequestration, judiciary obligations for travel decreased from $66.3 million, which had been the obligation in fiscal year 2012, to $50.2 million (a decrease of approximately 24 percent). This decrease in travel obligations is consistent with an overall decrease in total judiciary obligations during the fiscal year 2013 sequestration (from $7.3 billion in fiscal year 2012 to $6.9 billion in fiscal year 2013).

Judiciary travel regulations require judges to report their NCR travel. Each judge must prepare and file a report disclosing the NCR travel undertaken by the judge during the previous calendar year using the Judges’ Non-Case-Related Travel Reporting System.
This system, which is administered by AOUSC, requires judges to report the details about their NCR travel, specifically dates of travel, total expense or cost of travel, name of sponsor of travel or funder, sources of funds expended for travel, and purpose of NCR travel. Judges’ NCR travel, in some instances, may be paid for by entities other than the judiciary; for example, agencies within the executive branch or private organizations to support extrajudicial activities. According to AOUSC officials, the judiciary implemented reporting requirements for judges’ NCR travel in response to periodic congressional requests AOUSC received related to judges’ NCR travel. As directed by the Judicial Conference of the United States, AOUSC amended its travel regulations for judges in 1999 to include a requirement for judges to annually report instances of NCR travel. According to AOUSC officials, since the inception of the Judges’ Non-Case-Related Travel Reporting System in 1999, NCR travel data have been used on four occasions as the source of information to respond to specific inquiries from Members of Congress about certain conferences or specific courts. Courts and judicial agencies conduct conferences provided for by statute. The Judicial Conference of the United States is statutorily required to meet annually, but may meet as many times as the Chief Justice of the United States may designate. Circuit judicial conferences are authorized to meet annually or biennially as determined by the chief judge of each circuit. The purpose of a circuit judicial conference is to consider the business of courts and administration of justice within that circuit. FJC conducts conferences for continuing education and training for personnel of the judicial branch as authorized by statute. USSC conducts conferences, including seminars and workshops related to federal sentencing guidelines and training programs related to sentencing education for judicial and probation personnel. 
Unlike executive branch agencies, courts and judicial agencies are generally not subject to executive branch regulations, memorandums, or circulars that may apply to conference spending. However, AOUSC promulgates conference planning and administration policies for the judiciary, at the direction of the Judicial Conference of the United States, based in part on executive branch policies. The Judicial Conference of the United States issued changes to its conference policies in 2012 similar to policies recently enacted by the executive branch to implement cost savings. The new policy is generally patterned after policies and practices adopted by executive branch agencies, including a memorandum issued by the Office of Management and Budget directing executive branch agencies to report on conference spending and reduce travel expenditures. Judiciary conference planning and administration policy also states that each judicial entity must issue annually a publicly available report on each conference costing over $100,000, including information such as the number of attendees paid for by the judiciary and the cost of the conference to the government or judiciary. The policy further states that conference planners must consider minimizing meeting-related costs and adopt procedures to ensure the standards set forth in the policy are met. The policy includes the following specific guidance considerations and requirements:

- Cost considerations: suggested consideration of various cost-saving practices, such as securing the lowest conferee travel, lodging, meeting room, administrative, and technology costs.
- Management considerations and internal controls: required documentation of basic internal controls of management oversight and approval of conference planning, and suggested additional management considerations.
- Conference site comparisons: required conference planners to perform cost comparisons of at least two potential conference sites and maintain written documentation of their rationale for site selection.

Annual NCR travel costs averaged $8.8 million per year, with a range of $7.2 million to $10.2 million, for fiscal years 2003 through 2014. We also found that the annual NCR travel costs reported specifically for judiciary-associated conferences averaged $5.9 million, or 67 percent of average total annual NCR travel costs, with a range of $4.9 million to $6.8 million, for fiscal years 2003 through 2014. On the basis of our review of the NCR travel data and statements made by AOUSC officials, we found the NCR travel data were sufficiently reliable to report on the average and range of judges’ reported NCR travel costs across fiscal years 2003 through 2014. However, because of limitations we identified in the data in the NCR travel-reporting system, we were not able to determine the extent to which those reported costs were paid using judiciary funds rather than other federal or private sources, as discussed below.

While AOUSC tracks the costs of all official travel paid for by the judiciary in its accounting system of record, AOUSC’s NCR travel-reporting system does not collect judges’ information in a way that enables it to determine the costs to the judiciary rather than to private entities and other federal agencies. Judiciary Travel Regulations for United States Justices and Judges specify annual NCR reporting requirements for judges and justices, which include reporting the name of the funder and the type of funds supporting each instance of NCR travel. We found that the data fields for entering information in the Judges’ Non-Case-Related Travel Reporting System about the name of the funder of NCR travel lacked controls to standardize responses.
Specifically, when users entered data on the name of the funder of NCR travel, they did not consistently record whether NCR travel was paid for by a court or judicial agency versus other federal agencies or private entities. In addition, when entering information about the type of funds used for NCR travel, users could not record whether NCR travel was paid for using judiciary funds versus other funding sources because the system requires the user to choose from the following three options to identify the type of funds used to pay for judges’ NCR travel: federal, mixed, and private. Since the federal category could encompass judiciary, legislative, and executive branch entities, this data field does not allow AOUSC to readily identify or report the NCR travel costs paid using judiciary funds. Standards for Internal Control in the Federal Government states that internal controls are integral to effective information technology management to ensure useful, reliable, and continuous recording and communication of information. Such controls may include system controls that standardize data entry so the data are useful for reporting purposes. For NCR travel data, AOUSC could improve the Judges’ Non-Case-Related Travel Reporting System to allow collection of cost information directly attributable to the judiciary. By implementing controls, for example, to standardize responses regarding the name of the funder, AOUSC could more readily determine which entity, including judiciary entities, paid for an instance of NCR travel. In addition, by revising the categories for type of funds to account for judiciary funds, AOUSC could more easily identify instances of NCR travel that are being paid for with judiciary funds.
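A controlled vocabulary is one way to implement the kind of system controls described above. The sketch below is hypothetical (the field names and category labels are illustrative, not AOUSC's); it shows how adding a distinct "judiciary" fund type and standardized funder categories would let ambiguous or free-text entries be rejected at data entry:

```python
# Hypothetical sketch of standardized data entry for an NCR travel system.
# Category and field names are illustrative; the report notes that the
# current system offers only "federal", "mixed", and "private" as fund
# types, which cannot isolate travel paid with judiciary funds.

FUND_TYPES = {"judiciary", "other federal", "private", "mixed"}

FUNDER_KINDS = {"court or judicial agency", "other federal agency",
                "private entity"}

def validate_ncr_entry(entry):
    """Reject values that fall outside the controlled lists."""
    errors = []
    if entry.get("fund_type") not in FUND_TYPES:
        errors.append(f"unrecognized fund_type: {entry.get('fund_type')}")
    if entry.get("funder_kind") not in FUNDER_KINDS:
        errors.append(f"unrecognized funder_kind: {entry.get('funder_kind')}")
    return errors

ok = validate_ncr_entry({"fund_type": "judiciary",
                         "funder_kind": "court or judicial agency"})
bad = validate_ncr_entry({"fund_type": "federal",   # old ambiguous category
                          "funder_kind": "AOUSC"})  # free text, not standardized
# ok is empty; bad contains two error messages
```

With categories like these, travel paid with judiciary funds could be totaled directly instead of being folded into an undifferentiated "federal" bucket.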
According to AOUSC officials, as of November 2015, AOUSC has not decided to change the way the Judges’ Non-Case-Related Travel Reporting System collects judges’ NCR travel information, but it is considering making improvements to the system to better collect judges’ NCR travel information, including collecting the judiciary’s costs of judges’ NCR travel. AOUSC officials also stated that any specific options AOUSC may develop for changing the Judges’ Non-Case-Related Travel Reporting System will be submitted to the Judicial Conference of the United States for consideration. According to the 2015 Strategic Plan for the Federal Judiciary, Issue 6, the Judiciary’s Relationships with the Other Branches of Government, the judiciary must provide Congress timely and accurate information about issues affecting the administration of justice and demonstrate that the judiciary has a comprehensive system of oversight and review. By improving its travel-reporting system, AOUSC would be able to better collect required NCR travel information from judges and to identify and report the costs to the judiciary of judges’ NCR travel in response to congressional member requests.

In accordance with judiciary policy on conference planning and administration, AOUSC issued publicly available reports on conference spending across all courts and judicial agencies for fiscal years 2013 and 2014 for conferences costing over $100,000. These reports indicated that the judiciary spent $11.5 million on 61 conferences costing over $100,000 in fiscal years 2013 and 2014. Specifically, the judiciary spent $4.6 million in fiscal year 2013 and $6.9 million in fiscal year 2014 for these conferences. For more information about the conferences listed in the fiscal year 2013 and 2014 conference reports, see appendix I. According to AOUSC officials, they are taking steps to improve their procedures for developing the publicly available annual report on judiciary conferences costing over $100,000.
AOUSC initially published the fiscal year 2013 and 2014 conference reports in October 2014 and April 2015, respectively. AOUSC subsequently revised both reports in September 2015 in response to errors it discovered in the original reports. The initial conference reports published by AOUSC contained errors in the required reporting information for the number of attendees paid for by the judiciary, the cost of the conference to the government or judiciary, and the number of reportable conferences. For example, the initial reports published by AOUSC for fiscal years 2013 and 2014 conferences did not include approximately $300,000 in conference costs paid for by the judiciary. AOUSC officials told us these costs were omitted because of errors in how AOUSC utilized financial databases for purposes of reporting its annual conference costs, and in how judiciary employees tracked conference expenses, including the number of attendees, for inclusion in its publicly available conference reports. According to AOUSC officials, AOUSC has developed and implemented new procedures for correctly utilizing financial databases in developing the fiscal year 2015 publicly available conference report. Additionally, officials said that for fiscal year 2016, they will issue revised guidance to conference planners within the judiciary on how to correctly enter pertinent data to better track the number of attendees and conference costs for the annual reports.

We sampled 8 conferences held in fiscal years 2013 and 2014 costing over $100,000. The total cost from judiciary appropriated funds for these conferences ranged from $182,733 to $305,607. For more information about the 8 conferences we sampled, see table 2. Our results cannot be generalized to all conferences costing over $100,000 conducted by the judiciary. However, our analysis provides insights into the judiciary’s compliance with its conference policies.
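Omissions like the roughly $300,000 in unreported conference costs described above are the kind that a routine reconciliation between the figures compiled for the public report and the accounting system of record could surface. The following is a minimal, hypothetical sketch; the conference names and amounts are invented for illustration:

```python
# Hypothetical reconciliation check: compare conference costs compiled for
# the public report against the accounting system of record, flagging any
# conference whose totals disagree. All names and figures are illustrative.

def reconcile(report_totals, accounting_totals, tolerance=0.0):
    """Return conferences whose reported cost differs from accounting records."""
    discrepancies = {}
    for conf, acct_cost in accounting_totals.items():
        reported = report_totals.get(conf, 0.0)
        if abs(reported - acct_cost) > tolerance:
            discrepancies[conf] = acct_cost - reported
    return discrepancies

report_totals = {"Circuit Conference A": 250_000, "Workshop B": 180_000}
accounting_totals = {"Circuit Conference A": 250_000,
                     "Workshop B": 205_000}  # $25,000 omitted from the report

diffs = reconcile(report_totals, accounting_totals)
# diffs == {"Workshop B": 25_000}
```

Running such a check before publication would catch conferences whose report totals fall short of (or exceed) what the accounting system shows was actually obligated.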
The 8 conferences we sampled followed judiciary policy guidance for conference planning and administration, including cost considerations, management considerations and internal controls, and site selection. Judiciary policy guidance states, among other things, that each organization must adopt internal controls and procedures to ensure the standards set forth in the policy are met. These judiciary internal control standards require documentation of management oversight, while other management factors are suggested for planners to consider, as appropriate for their conference.

The 8 conferences we sampled complied with judiciary policy for minimizing the cost of meetings. Judiciary policy for conference planning and administration states that in planning meetings, consideration must be given to minimizing the meeting costs incurred by the government, and it provides 10 examples of such cost considerations. Officials who planned the 8 conferences we sampled provided examples of how they weighed these cost considerations and employed various strategies to reduce administrative, conferee travel, lodging, meeting room, and technology costs. For example:

- Attendees’ common carrier expenses: The sponsors of the 2014 USSC Seminar on Sentencing Guidelines kept common carrier costs down by identifying the venue that had the greatest number of potential attendees within driving distance.
- Cost of hotel rooms and meeting room costs: Officials who planned the Fourth, Eighth, and Eleventh Circuit Judicial Conferences and the 2013 FJC National Workshop for Bankruptcy Judges negotiated hotel and conference price concessions to achieve hotel and meeting room discounts. Planners of the Second Circuit Judicial Conference negotiated with the prospective hotel to not be charged for meeting rooms because their room block filled the hotel.
- Printing costs: Officials who planned the Second, Eighth, and Eleventh Circuit Judicial Conferences reduced printing costs through a variety of initiatives, such as printing materials themselves, utilizing the Government Publishing Office, and using web-based material or flash drives for registration and conference materials. USSC and FJC inaugurated a smart phone application that allowed attendees to find conference information on their phones rather than relying on hard copy handouts, and they used on-line applications, syllabi, course materials, and agendas. Planners for the 2014 Sex Offender Supervision Management Conference communicated all notifications and event information through the Internet, so there were no mailing costs.

Conference planners for all 8 conferences we sampled complied with judiciary requirements for internal controls and management oversight, including consideration of, as appropriate, 15 specific strategies suggested by judiciary policy. Some of these strategies overlap with the requirements for cost and site selection considerations described previously and in the rest of this section. Judiciary guidance for conferences requires management oversight of conference planning but suggests that other management factors be considered as appropriate. In addition, meetings costing over $100,000 are required to be approved by agency leadership in advance. Officials who planned the 8 conferences provided us the required documentation of management oversight and approval by the appropriate senior managers. For example:

- Planners for the Second, Fourth, Eighth, and Eleventh Circuit Judicial Conferences, as well as the Sex Offender Supervision Management Conference, provided documentation, largely in the form of e-mails generated by the Meeting and Event Planning and Reporting Tool, showing review and approval by the AOUSC Director and Deputy Director.
- For both of the FJC conferences, the National Workshop for Bankruptcy Judges and the National Educational Conference for Bankruptcy Court Employees, officials provided documentation of review and signed approval by senior managers and the FJC Director.
- Officials for the 2014 USSC National Seminar on the Federal Sentencing Guidelines reported that they consulted with the Chair of the commission at all stages of the seminar planning and provided supporting documentation.

Both judiciary guidance and written responses from all 8 conference planners underscored the importance of site comparisons when planning a conference. On the basis of our review, conference planners for the 8 conferences we sampled performed cost comparisons of at least two potential conference sites and provided documentation of the alternative sites considered and the rationale used for selecting the conference site, as required. For example:

- 2013 Fourth Circuit Judicial Conference: Officials compared site and hotel options for their judicial conference by calculating the costs of transportation and lodging across different cities located within their region to determine which site had the lowest overall travel and hotel costs for judiciary attendees.
- 2014 Sex Offender Management Conference: Officials chose a centralized meeting location within driving distance of a majority of the attendees, thus reducing the travel costs for federal probation officers attending the conference.
- 2014 USSC National Seminar on the Federal Sentencing Guidelines: USSC officials minimized travel costs by tabulating the number of probation officers within 2.5 hours’ driving distance of several prospective venues. They held the conference in Philadelphia after determining it was more convenient than other cities for the greatest number of probation officers.
- FJC conferences: Planners provided a written summary of their site and hotel selection rationale and documented that the program leader had confirmed that the selection process was in compliance with judiciary policy.

Throughout the year, judges engage in NCR travel for judicial administration conferences and training that are not directly associated with adjudicating cases. Collecting information on this category of travel spending is important to the judiciary for maintaining the quality of information required under its policies and for congressional oversight. Specifically, taking steps to improve AOUSC’s travel cost collection system would help the judiciary collect and readily identify the costs of judges’ NCR travel that are paid by the judiciary. Strengthening the system would also better enable the judiciary to respond to congressional requests for such information.

To better report information to Members of Congress on judiciary NCR travel costs, the Director of AOUSC should improve its data collection system to collect and identify NCR travel costs paid by the judiciary.

We provided a draft report to AOUSC, the Federal Judicial Center, and the U.S. Sentencing Commission for review and comment. AOUSC concurred with our recommendation and provided written comments, which are printed in full in appendix II. These agencies also provided technical comments that we incorporated as appropriate. In its comment letter, AOUSC stated it agreed with our recommendation that it should enhance its reporting mechanisms to better distinguish certain travel that is funded by the judiciary from that funded by other government agencies. It stated that the judiciary has already adopted the change we recommended and is planning to add new functionality to the judges’ non-case-related travel reporting tool. AOUSC also stated that it was pleased that GAO found that the judiciary has followed applicable policies and procedures with regard to the travel studied in this report.
We are sending copies of this report to the Judicial Conference of the United States, the Directors of the Administrative Office of the U.S. Courts and Federal Judicial Center, Staff Director of the U.S. Sentencing Commission, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix II.

National Workshop for Magistrate Judges I
2013 Federal Judicial Center National Educational Conference for Bankruptcy Court Clerks, Bankruptcy Administrators, Bankruptcy Appellate Panel Clerks, and Bankruptcy Court Chief Deputy Clerks
2013 Federal Judicial Center National Educational Conference for District Court Clerks, District Court Executives, and District Court Chief Deputy Clerks
Experienced Supervisors Seminar: Targeting Leadership Excellence
Leadership Development Program Concluding Workshop (Class XI)

In addition to the contact named above, Glenn Davis (Assistant Director), Daniel Rodriguez, Carl Potenzieri, Jennifer Bryant, Kathleen Donovan, Susan Hsu, Tracey King, Amanda Miller, and Janet Temko-Blinder made key contributions to this report.

The federal judiciary consists of a system of courts that has the critical responsibility of ensuring the fair and swift administration of justice in the United States. Employees and judges within the judiciary travel for a variety of purposes, including attending conferences for training and conducting judicial administration. For judges, travel not directly related to adjudicating cases has been termed NCR travel. GAO was asked to review the judiciary's costs of judges' NCR travel and conferences.
This report examines the following: (1) What has been the cost of judges' NCR travel from fiscal years 2003 through 2014 and to what extent does the judiciary collect information on its costs for judges' NCR travel? (2) How much did the judiciary spend on all conferences over $100,000 for its employees in fiscal years 2013 and 2014, and to what extent did selected conferences conform to judiciary policy on conferences? GAO analyzed judges' NCR travel data from fiscal years 2003 through 2014, reviewed procedures for collection of NCR travel information, and interviewed judiciary officials. GAO reviewed judiciary policy for conference planning and administration, information from a non-generalizable sample of eight conferences, and interviewed judicial officials responsible for planning the conferences. From fiscal years 2003 through 2014, judges have used a separate system to report their non-case-related (NCR) travel costs paid for by government and private sources. These NCR travel costs averaged $8.8 million per year. However, while the Administrative Office of the U.S. Courts (AOUSC) tracks the costs of all official travel in its accounting systems of record, the NCR system does not collect specific information on the direct costs to the federal judiciary for judges' NCR travel. GAO found that AOUSC's data collection system for judges' NCR travel information lacked controls to standardize responses to accurately record whether NCR travel was paid for by a court or judicial agency versus other federal agencies or private entities. As a result of these limitations in the NCR travel data, GAO was not able to determine the extent to which those reported costs were paid using judiciary funds rather than other federal or private sources. 
According to AOUSC officials, as of November 2015, AOUSC has not decided to change the way the Judges' Non-Case-Related Travel Reporting System collects judges' NCR travel information, but is considering making improvements to the system to better collect judges' NCR travel information, including collecting the judiciary's costs of judges' NCR travel. According to the 2015 Strategic Plan for the Federal Judiciary, the judiciary must provide Congress timely and accurate information about issues affecting the administration of justice. By improving the system, AOUSC officials would be able to better collect required NCR travel information from judges and to identify and report the judiciary's costs for judges' NCR travel in response to future congressional member requests. The judiciary spent $11.5 million on 61 conferences costing over $100,000 in fiscal years 2013 and 2014. AOUSC began collecting information on judiciary conference spending across all courts and judicial agencies in fiscal year 2013 for conferences costing over $100,000. This information was used to develop publicly available reports and indicated that the judiciary spent $4.6 million in fiscal year 2013 and $6.9 million in fiscal year 2014 for conferences costing over $100,000. The judiciary followed its policies for conference planning and administration. GAO sampled 8 conferences from the 61 conferences held in fiscal years 2013 and 2014 costing over $100,000 and determined the extent to which those conferences conformed to judiciary policy on conference planning and administration. GAO's results cannot be generalized to all conferences costing over $100,000 conducted by the judiciary, but do provide insight into the judiciary's compliance with its conference policies.
Conference planners for the 8 conferences GAO sampled followed judiciary policy for conference planning and administration, including (1) cost considerations: suggested strategies to reduce administrative, conferee travel, lodging, meeting room, and technology costs; (2) management considerations and internal controls: judiciary requirements for internal controls and management oversight of conference planning and implementation; and (3) conference site selection: a requirement to perform cost comparisons of at least two potential conference sites and document alternative sites considered and the rationale used for selecting the conference site. GAO recommends that AOUSC improve its data collection system to collect and identify judges' NCR travel costs paid by the judiciary. AOUSC agreed with this recommendation.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss the results of our review of the federal judiciary's assessment of its bankruptcy judgeship needs in the 1993, 1995, and 1997 assessment cycles. Limiting judgeship requests to the number necessary is important because each bankruptcy judgeship costs about $721,000 to establish and about $575,000 per year to maintain. At the same time, it is important that there be sufficient bankruptcy judgeships to enable the bankruptcy courts to adjudicate bankruptcy cases fairly and efficiently. Specifically, my testimony focuses on three principal issues: (1) the process, policies, and workload standards that the Judicial Conference of the United States used to assess the bankruptcy districts' requests for additional bankruptcy judgeships; (2) how the judiciary applied its policies and workload standards across the districts that requested bankruptcy judgeships; and (3) the extent of noncase-related travel in 1995 and 1996 by bankruptcy judges in the 14 districts for which the Judicial Conference of the United States has requested bankruptcy judgeships in 1997. In brief, we found that the Bankruptcy Committee and the Judicial Conference generally followed the Judicial Conference's process and policies and consistently applied the Conference's statistical workload standards in assessing individual districts' requests for additional judgeships in 1993, 1995, and 1997. For example, the Bankruptcy Committee and Judicial Conference placed heaviest emphasis on whether the districts requesting additional judgeships had a caseload that exceeded 1,500 weighted filings per existing authorized judgeship. Neither the Committee nor the Conference approved any request for additional judgeships from districts that did not meet this minimum standard. According to officials at the Administrative Office of the U.S.
Courts (AOUSC), neither the Committee nor the Judicial Conference keeps written documentation on how other available data, such as case management practices or a district’s geography (travel distances between places of holding court), were used in assessing districts’ judgeship requests. AOUSC officials also stated that the use of data other than weighted case filings in assessing judgeship needs is inherently judgmental. The amount of time judges use for noncase-related travel—travel that is not related to adjudicating specific cases—could potentially affect the amount of time judges have to devote to individual cases. In assessing a bankruptcy judge’s workload, the Judicial Conference assumes that a bankruptcy judge will spend, on average, about 30 percent of his or her time—about 600 hours, or 75 work days per year—on noncase-related matters, such as travel, training, administrative affairs, and general case management activities that cannot be attributed to a specific case. We received information on noncase-related travel from 80 of the 84 authorized judges in the 15 districts that would receive or share 1 of the judgeships requested in 1997. These 80 judges reported a total of 416 noncase-related trips in 1995 and 403 in 1996. On the basis of the information reported, we calculated that overall these judges each used an average of 12.5 work days for noncase-related travel in each of these years. About 98 percent of these trips were made to destinations within the United States. Together, circuit or district meetings and activities; Judicial Conference meetings and activities; and workshops, seminars, and other activities sponsored by AOUSC or the Federal Judicial Center (FJC), accounted for about 66 percent of all noncase-related trips and about 74 percent of all noncase-related travel workdays in 1995. Comparable figures for 1996 were about 67 percent and about 73 percent, respectively. 
In correspondence to the Subcommittee Chairman on August 8, 1997, we provided more details about these trips for each district. Through AOUSC, we also surveyed the 13 authorized judges in the 4 districts with weighted filings of 1,500 or more during the 1997 assessment cycle that did not request judgeships. The 12 judges in these four districts (one position was vacant) reported a total of 177 noncase-related trips—75 in calendar year 1995 and 102 in calendar year 1996. Based on these reported data, we calculated that the 12 judges spent a total of 178 workdays in 1995 and 258 workdays in 1996 on noncase-related travel. This is a per judge average of 14.8 workdays in 1995 and 21.5 workdays in 1996. Overall, about 23 percent of all trips in these two years were sponsored and paid for by organizations other than the federal judiciary. To develop the information in this statement, we obtained documentation from AOUSC on (1) the process, policies, and workload standards the Judicial Conference has established for assessing the need for bankruptcy judgeships; (2) how the process, policies, and workload standards were used in the 1993, 1995, and 1997 assessment cycles to determine the number of additional bankruptcy judgeships needed and requested; and (3) the temporary assistance requested by and provided to the districts that sought additional judgeships in 1993, 1995, and/or 1997. Through AOUSC, we surveyed the 84 judges in the 15 districts that would receive or share one of the bankruptcy judgeships the Judicial Conference requested in 1997 to obtain information on the judges’ noncase-related travel in calendar years 1995 and 1996. Through AOUSC, we also surveyed the 13 judges in the 4 districts with weighted filings of 1,500 or more in the 1997 assessment cycle that did not request additional judgeships to obtain data on their noncase-related travel in calendar years 1995 and 1996. 
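The per-judge averages reported above follow directly from the totals; the arithmetic can be verified with a few lines (an illustrative check only, using the figures reported in this testimony; the variable names are ours):

```python
# Reported totals for the 12 judges in the four districts with weighted
# filings of 1,500 or more that did not request additional judgeships.
trips_1995, trips_1996 = 75, 102
workdays_1995, workdays_1996 = 178, 258
judges = 12

total_trips = trips_1995 + trips_1996    # 177 noncase-related trips overall
avg_days_1995 = workdays_1995 / judges   # about 14.8 workdays per judge
avg_days_1996 = workdays_1996 / judges   # 21.5 workdays per judge
```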
We did our work between March and August 1997 in Washington, D.C., and Dallas, TX, in accordance with generally accepted government auditing standards. Details of our scope and methodology are presented in appendix I. Bankruptcy cases in the United States are filed in 1 of the 90 federal bankruptcy courts. The Judicial Conference is statutorily required to periodically submit to Congress recommendations for new federal bankruptcy judgeships. Congress last authorized new bankruptcy judgeships in 1992. Subsequently, the Conference has sent recommendations for additional bankruptcy judgeships to Congress in 1993, 1995, and 1997. Congress considered, but did not approve, any new judgeships from the 1993 and 1995 requests and is currently considering the 1997 request. To assist the Conference in advising Congress on the need for additional judgeships, the Conference's Committee on Administration of the Bankruptcy System (Bankruptcy Committee) is to conduct periodic national judgeship surveys to evaluate requests for additional bankruptcy judgeships. In 1993, 1995, and 1997, the Bankruptcy Committee conducted its surveys and analyses through its Subcommittee on Judgeships. In considering each district's bankruptcy judgeship request, the Bankruptcy Committee may recommend to the Judicial Conference one of seven options: one or more permanent judgeships; a temporary judgeship; a combination of permanent and temporary judgeships; the conversion of a temporary judgeship to a permanent judgeship; the extension of the term of an existing temporary judgeship; a judgeship to be shared by two or more districts; or no changes to the district's existing number and type of authorized judgeships. A permanent judgeship is a position that is statutorily added to the bankruptcy district's current authorized total and remains authorized until statutorily rescinded.
A temporary judgeship is a position that is statutorily created and authorized for 5 years after a judge is appointed to fill the temporary judgeship. It is important to note that it is the judgeship that is temporary, not the judge appointed to fill the position. The judge appointed to a temporary judgeship serves the same full 14-year term as a colleague appointed to fill a permanent position. When a temporary judgeship's 5-year authorization expires, the next vacancy to occur in the district cannot be filled. However, between the time that the temporary judgeship expires and a vacancy occurs, it is possible for the district to have more judges than authorized judgeships. Converting a district's existing temporary judgeship to a permanent judgeship reclassifies an existing judgeship, rather than adding a judgeship to a district's existing authorized total. In 1991, the Judicial Conference established a process, with policies and weighted workload standards, for reviewing bankruptcy judgeships. The formal process has eight basic steps (see fig. 1) that, when fully implemented, would take about 9 months to 1 year to complete. As I will discuss later in my testimony, this process was generally followed in developing the Judicial Conference's 1993, 1995, and 1997 bankruptcy judgeship requests. The eight basic steps in this formal process are as follows: 1. The Bankruptcy Committee requests that the chief judge of each appellate, district, and bankruptcy court assess the need for additional bankruptcy judgeships within their respective jurisdictions based on the Judicial Conference's policies. At the same time, the Committee provides each bankruptcy court (or district) information on its weighted filings per current authorized judgeship. 2. The bankruptcy and district courts provide their views on the need for additional judges to their respective circuit judicial councils. The bankruptcy court also sends its views to the district court. 3.
After reviewing the material provided by the bankruptcy and district courts, the circuit judicial council forwards its recommendations, which may differ from those of the bankruptcy and district courts in the circuit, to the Bankruptcy Judges Division of AOUSC, which serves as staff to the Bankruptcy Committee. 4. Under the direction of the Bankruptcy Committee’s Subcommittee on Judgeships, written mail surveys are sent to those districts for which judgeships have been requested. The Subcommittee on Judgeships conducts an on-site survey whenever a district initially requests additional judgeships. When a district renews a request previously approved by the Judicial Conference, but which Congress has not approved, the Bankruptcy Committee determines whether to conduct another survey. The on-site survey team is to generally consist of a bankruptcy judge member of the Bankruptcy Committee and staff of AOUSC’s Bankruptcy Judges Division. The team interviews a variety of court officials and local attorneys, and reviews court files, dockets, and reports. The survey team then prepares a written report with a recommendation to the Subcommittee on Judgeships regarding the bankruptcy court’s judgeship request. 5. For each bankruptcy district requesting judgeships, the Subcommittee on Judgeships reviews the district’s judgeship request, the district’s completed mail survey, and the on-site survey report (if done), then prepares a recommendation for the Bankruptcy Committee on the district’s judgeship request. 6. The Subcommittee sends its recommendations, along with the applicable on-site survey reports (where done), to the circuit councils, district courts, and bankruptcy courts in those circuits and bankruptcy districts for which bankruptcy judges were requested. 
The circuit councils, district courts, and bankruptcy courts may provide any comments they have on the Subcommittee's recommendations and the survey report, and may provide any other additional information they believe is relevant to the judgeship requests in their circuit or bankruptcy district. The Subcommittee on Judgeships reviews these comments, makes its final recommendation for each district, and sends its recommendations and accompanying documentation to the Bankruptcy Committee. 7. The Bankruptcy Committee reviews the mail survey, on-site survey report (if done), any other accompanying documents, and the Subcommittee on Judgeships' recommendations for each district, votes on each request, and forwards its recommendations to the Judicial Conference. 8. The Judicial Conference considers the Bankruptcy Committee's recommendations, approves or alters the Committee's recommendations, and forwards the Conference's final recommendations to Congress. In reviewing judgeship requests, the Bankruptcy Committee is to consider a number of factors adopted by the Judicial Conference in 1991. The first factor is weighted filings. Based on the results of a study of the time bankruptcy judges devoted to individual categories of bankruptcy cases, each case filed is assigned to 1 of 17 categories. Each category is determined on the basis of the bankruptcy chapter under which the case is filed, and within each chapter, the dollar value of the debtor's assets or liabilities. A case weight is assigned to each of the 17 categories, representing the average amount of judicial time the case would be expected to require. Generally, to be eligible for an additional judgeship, the Judicial Conference expects a bankruptcy district to have a minimum annual average of 1,500 weighted filings for each current authorized judgeship.
To be eligible for a permanent judgeship, the Judicial Conference’s standard is that a district’s weighted filings per judgeship must be 1,500 or higher after adding any judgeships to the district’s existing judgeship total. For example, a district with 5 judges could qualify for an additional permanent judgeship if its weighted filings per judgeship would be at least 1,500 with 6 judgeships (its existing 5 plus the requested position). If the weighted filings per judgeship would drop below 1,500 with the additional judgeship, the district could potentially qualify for a temporary, but not permanent, judgeship. The Judicial Conference’s policy recognizes that bankruptcy judges’ workloads may be affected by factors not captured in the most recent report of weighted filings and states that the Bankruptcy Committee is to consider a number of factors in addition to weighted filings. These factors include (1) the nature and mix of the court’s caseload; (2) historical caseload data and filing trends (generally, the most recent 5-year period); (3) geographic, economic, and demographic factors in the district; (4) the effectiveness of the requesting court’s case management efforts; (5) the availability of alternative solutions and resources for handling the court’s workload, such as assistance from judges outside the district; (6) the impact that approval of requested additional resources would have on the court’s per judgeship caseload; and (7) any other pertinent factors. The Bankruptcy Committee’s written description of the assessment process also recognized that (1) bankruptcy case filings may fluctuate because they are dependent upon national and local economic conditions, and (2) temporary fluctuations can often be addressed by short-term resources, such as temporary assistance from judges outside the district and the use of temporary law clerks. 
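The 1,500-weighted-filings standard amounts to a simple threshold rule. The sketch below is our illustrative reconstruction of that rule only; the function names are ours, and the actual assessment also weighs the judgmental factors the Conference's policy lists, which no formula captures:

```python
def filings_per_judgeship(weighted_filings, judgeships):
    """Average annual weighted filings per authorized judgeship."""
    return weighted_filings / judgeships

def classify_request(weighted_filings, current_judgeships, requested=1,
                     threshold=1500):
    """Illustrative sketch of the Judicial Conference's weighted-filings
    standard for one requested judgeship (not the judiciary's actual logic)."""
    # Generally ineligible unless the district already meets the
    # 1,500 weighted filings per current authorized judgeship minimum.
    if filings_per_judgeship(weighted_filings, current_judgeships) < threshold:
        return "no additional judgeship"
    # Permanent judgeship only if the standard still holds after adding
    # the requested position(s) to the authorized total.
    if filings_per_judgeship(weighted_filings,
                             current_judgeships + requested) >= threshold:
        return "permanent judgeship"
    # Otherwise the district could potentially qualify for a temporary one.
    return "temporary judgeship"
```

For example, a district with 5 authorized judgeships and 9,300 weighted filings would qualify for a permanent judgeship (9,300 / 6 = 1,550), while one with 8,700 would not (8,700 / 6 = 1,450) and could qualify only for a temporary judgeship.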
At its September 1996 meeting, the Judicial Conference approved a change in the schedule for completing the biennial surveys for evaluating judgeship needs for district courts, courts of appeals, and bankruptcy courts. Beginning in 1998, the surveys are generally to be done in even-numbered years so that the Conference’s recommendations for additional judgeships can be delivered to Congress in odd-numbered years. This change is intended to permit the judiciary to work with Congress on a judgeship bill over an entire 2-year congressional term. In 1993, 1995, and 1997, the Bankruptcy Committee generally followed the Judicial Conference’s established process, policies, and workload standards in assessing bankruptcy judgeship needs. The Bankruptcy Committee recommended to the Judicial Conference fewer judgeships than districts requested or the circuit councils recommended. Overall, the Committee also recommended fewer permanent and more temporary judgeships than were requested. The Conference adopted the Bankruptcy Committee’s recommendations in each year, 1993, 1995, and 1997. (See tables II.1 - II.3 in app. II for additional details.) In 1993, 16 districts requested 22 additional judgeships (21 permanent and 1 temporary). The Bankruptcy Committee’s Subcommittee on Judgeships conducted both a written mail survey and an on-site survey of each of the 16 bankruptcy districts that requested one or more additional judges. The Bankruptcy Committee recommended 19 additional judgeships (13 permanent and 6 temporary) for 15 judicial districts, and the Judicial Conference approved this recommendation in September 1993. The Bankruptcy Committee declined requests for 3 permanent judgeships and converted requests for 5 permanent judgeships to temporary judgeships. At its January and June 1994 meetings, the Bankruptcy Committee concluded that these 19 positions were still needed based on weighted filings alone. 
Congress did not approve any judgeships from the Judicial Conference’s 1993 request. At its January 1995 meeting, the Committee, using more recent statistical data, determined that some of the positions the Committee had approved in 1993 and 1994 may no longer have been needed. At this meeting, the Committee also adopted new guidelines for reassessing the additional judgeship positions that the Conference had approved in 1993 and 1994. Under the new guidelines, districts whose previously approved requests were still pending before Congress would be asked to reassess their need for these additional judgeship positions and submit a statement to the Committee on whether or not the positions were still needed. The Committee considered a position still needed, without a new survey, if the district’s weighted filings per authorized judgeship were 1,500 or more. The Committee retained the option to resurvey any district renewing its request for additional judgeships whose weighted filings were below 1,500 per authorized judgeship. The Bankruptcy Committee’s Subcommittee on Judgeships conducted on-site visits to each district for which an additional judgeship had been approved in 1993, and whose case filings during 1994 fell below 1,500 weighted filings per authorized judgeship. On the basis of these surveys, the circuit judicial councils of the Fifth and Ninth Circuits withdrew their requests for additional judgeships in the Southern District of Mississippi and the District of Arizona, respectively. In five other districts, the Circuit Councils reaffirmed their bankruptcy districts’ requests for a total of six judgeships. However, the Bankruptcy Committee declined the requests for these six judgeships. Overall, the Bankruptcy Committee recommended that the Judicial Conference reduce the number of requested positions from 19 judgeships in 15 districts to 11 judgeships (including 6 temporary) in 8 districts. 
The Conference approved the Bankruptcy Committee’s recommendation at its September 1995 meeting and transmitted it to Congress. Congress did not approve any judgeships from the Judicial Conference’s 1995 request. At its September 1996 meeting, the Judicial Conference approved a new schedule for judgeship surveys. As a result of this change and because Congress had not approved the Conference’s 1995 bankruptcy judgeship request, the Bankruptcy Committee began an expedited survey process in November 1996. In January 1997, the Bankruptcy Committee found that each of the 11 positions approved in 1995 continued to be needed based on the weighted case filings as of September 30, 1996. The Committee also considered requests for 9 additional positions (for a total of 20). In each district, the weighted filings per judgeship exceeded the 1,500 standard. The Committee recommended to the Judicial Conference 18 additional judgeships (including 11 temporary). The Judicial Conference adopted the Committee’s recommendations and sent the Conference’s judgeship request to Congress. The Conference’s 1997 request is now pending before Congress. Table 1 provides an overview of the number of judgeships requested and approved at each major step in the process in 1993, 1995, and 1997. In our analysis, we found that in the 1993 and 1997 assessment cycles, all of the districts requesting additional bankruptcy judgeships—16 in 1993 and 15 in 1997—had weighted case filings over 1,500 per authorized judgeship prior to the addition of any judgeships. However, in the 1995 assessment cycle, 8 of the 14 requesting districts had weighted case filings per judgeship over 1,500; the remaining 6 districts had weighted case filings below 1,500. (See table II.1 in app. II.) 
Our analysis also showed that the Judicial Conference approved additional permanent bankruptcy judgeships only when the weighted case filings would be 1,500 or more per judgeship after adding the requested judgeship(s) to the district’s current authorized number of judgeships. If the weighted case filings would drop below 1,500 per judgeship after adding the requested judge(s), the Bankruptcy Committee and the Conference approved a temporary judgeship or no increase in judgeships. In two districts, the Committee approved both one permanent and one temporary judgeship—the Southern District of New York in 1993, and the District of Maryland in 1997. In these two districts, the weighted workload was considered sufficiently high after adding one permanent judgeship to merit another judgeship, but not sufficiently high to merit a second permanent judgeship. Not all districts whose weighted case filings met the minimum threshold of 1,500 weighted filings per authorized judgeship requested additional judgeships in 1993, 1995, or 1997. We found that during the 1993 assessment cycle, 10 districts with weighted case filings above 1,500 per authorized judgeship did not request additional judges. In 1995, four such districts did not request additional judgeships; and, in 1997, five such districts did not. (See tables II.5-II.7 in app. II.) However, one of the five districts in 1997 was the Northern District of Mississippi, which is to share the additional position requested for the Southern District of Mississippi. Conversely, in 1995, six districts whose weighted filings were below 1,500 per authorized judgeship requested additional judgeships. None of these six districts’ requests were approved by the Bankruptcy Committee. (See table II.3 in app. II.) We spoke to officials in the four districts that had more than 1,500 weighted case filings per authorized judgeship in 1997, but had not asked for additional judgeships. 
The officials in these four districts told us that they had not requested any additional judgeships because (1) one district was not aware that its weighted case filings were at or above 1,500 per authorized judgeship; (2) one district said it could handle the workload if the district’s temporary judgeship, scheduled to expire in October 1998, was converted to a permanent judgeship; and (3) the remaining two districts currently share a judgeship and could not agree on how an additional judgeship would be allocated between the two districts. The Judicial Conference’s policy for assessing a bankruptcy district’s need for additional judgeships states that the Bankruptcy Committee is to review a number of workload factors in addition to weighted filings. These factors include the nature and mix of the bankruptcy district’s workload; historical caseload data and filing trends; geographic, economic, and demographic factors in the district; the effectiveness of case management efforts; the availability of alternative solutions and resources for handling the district’s workload; the impact that approval of requested additional resources would have on the district’s per judgeship caseload; and any other pertinent factors. The Bankruptcy Committee asked that districts requesting additional judgeships address these factors “with as much specificity as possible.” A district could also provide any additional information it thought relevant to its request. Most of the districts surveyed in 1993, 1995, and 1997 provided information on at least four of these factors. AOUSC officials said they provided us with all the written information on these factors that was available to the Bankruptcy Committee for its deliberations. AOUSC officials said that the use of this information in assessing judgeship requests is inherently judgmental and that neither AOUSC nor the Committee keeps minutes of the Committee’s discussions regarding individual districts. 
Consequently, it was not possible to determine from the documentation we received how this information was or was not used in assessing districts' bankruptcy judgeship requests. Nevertheless, none of the judgeship requests approved by the Judicial Conference were in districts that did not meet the 1,500 weighted filings standard. The Judicial Conference's policies encourage districts to use visiting and recalled judges wherever possible as an alternative to requesting additional judgeships. For each district that requested additional bankruptcy judgeships in the 1993, 1995, and/or 1997 assessment cycles, we requested information on whether the districts had requested, received, and/or used assistance from visiting or recalled judges. The circuit executives for all 12 circuits provided us documentation on each of the bankruptcy districts that had requested and been assigned assistance from judges outside their districts in each of those years. However, the circuit executives did not have information on whether and to what extent the districts actually used the assistance available from visiting and recalled judges. Our analysis of this information showed that 18 of the 19 districts that requested additional bankruptcy judges during 1993 to 1997 had requested assistance from judges outside their districts during this period. (See table II.4 in app. II.) Only the Middle District of Pennsylvania had not requested either visiting or recalled judges at some time during the period from January 1, 1993, to June 1997. Ten of the 18 districts that requested assistance received intracircuit assignments (judges from within their circuit) to provide assistance with their caseloads. None of the four districts in California relied on intracircuit assignments.
These districts are in the Ninth Circuit, which uses its own “workload equalization program” that transfers cases from districts in the circuit with above-average caseloads to districts in the circuit with below-average caseloads. This program allows cases, rather than judges, to be transferred. According to the circuit, transferring cases minimizes both the inconvenience to the parties involved and judges’ travel time and expenses. Six districts received intercircuit assignments (judges from outside their circuits) to provide assistance with their caseloads. Four of these six districts received both intracircuit and intercircuit assignments of bankruptcy judges. Eleven of the 18 districts that requested assistance had been assigned recalled judges to help alleviate heavy caseloads. Bankruptcy judges’ travel can be categorized as case-related and noncase-related. Case-related travel is travel to work on specific bankruptcy cases, whether within a judge’s district or in other districts. Noncase-related travel is travel that is not related to adjudicating specific bankruptcy cases. The amount of time devoted to noncase-related travel could potentially affect the amount of time judges have to devote to work on individual cases. In assessing bankruptcy judges’ workloads, the Judicial Conference assumes that each bankruptcy judge will spend, on average, about 30 percent of his or her time (about 600 hours, or 75 workdays, per year) on matters that cannot be attributed to a specific case, such as travel, training, court administration, and general case management activities. These 600 hours, or 75 workdays, are in addition to the average of 1,500 hours, or 187.5 workdays, that each judge is assumed to spend annually on work attributable to specific bankruptcy cases.
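The workload assumptions above reduce to simple arithmetic. A quick sketch (the 8-hour workday conversion is implied by the reported figures, not stated explicitly in this statement):

```python
# Judicial Conference workload assumptions for a bankruptcy judge, with the
# hours-to-workdays conversion implied by the figures (8 hours per workday).
HOURS_PER_WORKDAY = 8  # assumed; consistent with 600 hours = 75 workdays

case_hours = 1_500     # annual hours assumed for work on specific cases
noncase_hours = 600    # annual hours for travel, training, administration

case_days = case_hours / HOURS_PER_WORKDAY        # 187.5 workdays
noncase_days = noncase_hours / HOURS_PER_WORKDAY  # 75.0 workdays
noncase_share = noncase_hours / (case_hours + noncase_hours)

print(case_days, noncase_days)  # 187.5 75.0
print(round(noncase_share, 2))  # 0.29, i.e., "about 30 percent"
```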
Through AOUSC, we requested information on the noncase-related travel of the judges in the 14 districts for which the Judicial Conference requested judgeships in 1997, plus the Northern District of Mississippi, which is to share the position requested for the Southern District of Mississippi. We received information from 80 of the 84 judges in these districts on noncase-related travel in calendar years 1995 and 1996. These judges reported a total of 416 trips in 1995 and 403 trips in 1996. On the basis of the data reported, we calculated that these judges had an average of 12.5 noncase-related travel workdays each year. As shown in table 2, there was a marked difference between the districts with the highest and lowest average number of noncase-related trips per judge and between the districts with the highest and lowest average number of workdays per judge for noncase-related trips. The reasons for these differences were not apparent from our data. Together, circuit or district meetings and activities; Judicial Conference meetings and activities; and AOUSC- or FJC-sponsored workshops, seminars, or other activities accounted for about 66 percent of all noncase-related trips and about 74 percent of all noncase-related travel workdays reported for 1995. Comparable figures for calendar year 1996 were about 67 percent and 73 percent, respectively. About 98 percent of the 819 trips were for destinations within the United States. Overall, about 34 percent of all trips made in these 2 years were sponsored by organizations other than the federal judiciary and were paid for by the judges themselves or the sponsoring organizations. You requested that we also obtain information on the noncase-related travel of the 13 authorized judges in the four districts with weighted filings of 1,500 or more in 1997 that did not request judgeships.
The 12 judges in these 4 districts (one position was vacant) reported a total of 177 noncase-related trips—75 in calendar year 1995 and 102 in calendar year 1996. On the basis of these reported data, we calculated that the 12 judges spent a total of 178 workdays in 1995 and 258 workdays in 1996 on noncase-related travel. This is a per judge average of 14.8 workdays in 1995 and 21.5 workdays in 1996. Together, circuit or district meetings and activities; Judicial Conference meetings and activities; and AOUSC- or FJC-sponsored workshops, seminars, or other activities accounted for 72 percent of all noncase-related trips and about 79 percent of all noncase-related travel workdays reported for 1995. Comparable figures for calendar year 1996 were about 80 percent and about 83 percent, respectively. All but 1 of the 177 trips reported were for destinations within the United States. Overall, about 23 percent of all trips made in these 2 years were sponsored and paid for by organizations other than the federal judiciary. (Additional details are in app. II, tables II.8 - II.10.) On September 18, 1997, we provided a draft of this statement to AOUSC officials for comment. On September 19, 1997, we met with AOUSC officials to discuss their comments. Overall, AOUSC officials said they found the statement to be fair and accurate. AOUSC suggested that we change our description of the formal judgeship assessment process to state that on-site surveys are always to be done when a district makes its initial request for additional judgeships but are not required when the district renews a previously approved request and the district’s weighted workload remains at or above 1,500 weighted filings. AOUSC provided formal written support for this change, and we incorporated the new language into our statement.
AOUSC officials also noted that judges’ personal vacations were not included in the average of 600 hours that bankruptcy judges are assumed to spend on activities that cannot be attributed to a specific case. We also made several technical changes, as appropriate. This concludes my prepared statement, Mr. Chairman. I would be pleased to answer any questions you or other members of the Subcommittee may have. To identify the process, policies, and standards the Judicial Conference used to assess the need for additional bankruptcy judgeships, we asked the Administrative Office of the U.S. Courts (AOUSC) to provide all available documentation on the Conference’s policies, process, and standards from 1993 through 1997, including any changes that occurred during this period and the reasons for those changes. To determine how the process, policies, and standards were applied during the 1993, 1995, and 1997 assessment cycles, we asked AOUSC to provide all available documentation for each step in the process, from the initial bankruptcy district request to the final Judicial Conference decision. With this documentation, we used a structured data collection instrument to review how the Conference’s process, policies, and standards were applied to each bankruptcy district’s judgeship request in 1993, 1995, and 1997. We also interviewed AOUSC officials about how the process, policies, and standards were used in the 1993, 1995, and 1997 assessment cycles. To determine which districts had requested and used temporary assistance from recalled judges or judges outside their districts from January 1993 to June 1997, we contacted each of the 12 circuit executives. AOUSC did not maintain these data, and the circuit executives had no consistent data on the extent to which the districts actually used the assistance available.
To identify districts whose weighted case filings for each assessment cycle—1993, 1995, 1997—were at least 1,500 per authorized judgeship but which did not request additional judgeships, we obtained AOUSC data on weighted filings for each of the 90 bankruptcy districts for each of those assessment cycles. To determine why each of these districts did not request additional judgeships, we interviewed AOUSC officials. We also interviewed local court officials in the four districts with weighted filings of 1,500 or more during the 1997 assessment cycle that did not request additional judgeships. To identify the number, purpose, and destination of noncase-related trips for the judges in each of the 14 districts for which the Judicial Conference requested bankruptcy judgeships in 1997, we surveyed, through AOUSC, the judges in each district, plus the Northern District of Mississippi, which is to share the judgeship requested for the Southern District of Mississippi. These 15 districts have a total of 84 authorized judgeships, and we received responses from 81 judges. However, one judge did not provide information on the dates of each trip or the paying organization. Thus, our analysis is based on the responses of 80 judges. We organized the reported trips into five categories: (1) judicial meetings and activities within the district or circuit; (2) workshops, seminars, and other activities sponsored by AOUSC or the FJC; (3) meetings, conferences, and seminars sponsored by the National Conference of Bankruptcy Judges (NCBJ), the National Association of Bankruptcy Trustees (NABT), or the National Association of Chapter 13 Trustees (NACTT); (4) Judicial Conference activities; and (5) other. We did not independently verify the data on weighted filings or the information bankruptcy judges provided on their noncase-related travel, including the dates, purpose, cost, destination, or paying organization for each trip.
[Table data omitted: a flattened list of the districts for which judgeships were requested (D.C.; New York (Eastern, Northern, Southern); Pennsylvania (Eastern, Middle); Virginia (Eastern); Mississippi (Southern); Michigan (Eastern); Tennessee (Western); California (Central, Eastern, Northern, Southern); and Florida (Southern)) could not be reconstructed into its original table.]

Table II.2: Results of the 1993, 1995, and 1997 Needs Assessments for Additional Bankruptcy Judges, by Type of Judgeship

[Table data omitted: per-district results, including conversions of temporary judgeships to permanent judgeships, approvals of permanent (P) and temporary (T) positions, and weighted case filings after judgeship approval, could not be reconstructed from the flattened text. N/D = not documented.]

Note 1: N/A indicates data were not sufficiently complete to be meaningful. The last formal surveys of the districts requesting additional bankruptcy judges were performed in 1993. Surveys were conducted only when the requests were new (i.e., no survey had been performed since 1993) or when the weighted case filings were below 1,500. In most districts, the bankruptcy courts reviewed the weighted case filings data; if the case filings were above the 1,500 threshold, the courts would renew their request through their respective Circuit Judicial Council. Thus, there is little documentation from the district courts, and relatively few surveys were performed in 1995 or 1997. As a result, we did not attempt to factor in the data from the district court or AOUSC surveys in these 2 years because the data would be misleading.

Note 2: Based on guidance provided by AOUSC, unless documented otherwise, all requests by the bankruptcy courts for additional judgeships were assumed to be for permanent positions. Mississippi (Southern) did not ask for a specific number of judges in 1997; rather, the district requested that a survey be performed to determine whether any additional judgeships were warranted.
[Table data omitted: flattened per-district listings of districts that provided or received intracircuit and intercircuit judge assignments could not be reconstructed.]

The Ninth Circuit uses its “work equalization program,” in which cases from districts with above-average caseloads are transferred to districts with below-average caseloads. According to the circuit, this minimizes the inconvenience to the parties and reduces travel expenses. Because of this program, cases within the Ninth Circuit are transferred rather than judges being assigned through intracircuit assignments. While Mississippi (Northern) did not request a judgeship, it was to share the judgeship requested by Mississippi (Southern).

[Table data omitted: per-district counts of noncase-related travel workdays by trip category (circuit or district meetings and activities; AOUSC or FJC workshops and seminars; NCBJ, NABT, or NACTT conferences; Judicial Conference meetings and activities; and other, such as law school seminars and bar association meetings) could not be reconstructed from the flattened text.]
[Table data omitted: per-district counts of noncase-related travel workdays by trip category, and per-district listings of trip destinations (e.g., Birmingham, AL; Washington, D.C.; San Francisco, CA; Atlanta, GA; Boston, MA), could not be reconstructed from the flattened text.]

The Middle and Southern Districts of Georgia share a bankruptcy judgeship. The travel data for this shared judgeship are included in the totals for the Middle District of Georgia. The Eastern District of Texas has two authorized bankruptcy judgeships, but one of the positions is vacant. Currently, the second judge in the district is a recalled judge. Our analysis excluded the travel data for the recalled judge because we did not receive or report travel data for recalled judges in the 15 districts for which we reported in our correspondence of August 8, 1997.
[Table data omitted: continuation of per-district trip destinations and per-district counts of noncase-related trips by sponsoring organization (e.g., the federal judiciary, law schools, bar associations) could not be reconstructed from the flattened text.]
[Table data omitted: remaining per-district counts of noncase-related trips by sponsoring organization could not be reconstructed from the flattened text.]
GAO discussed the federal judiciary's assessment of its bankruptcy judgeship needs in the 1993, 1995, and 1997 assessment cycles. GAO found that: (1) the Judicial Conference's Bankruptcy Committee and the Judicial Conference generally followed the Conference's process and policies, and consistently applied the Conference's workload standards in assessing individual districts' requests for additional judgeships; (2) neither the Committee nor the Conference approved any request for additional judgeships from districts whose weighted case filings did not meet the minimum standard; (3) the Bankruptcy Committee also asked that districts requesting judgeships provide information on several factors, other than weighted filings, that may affect their need for additional judges; (4) according to officials at the Administrative Office of the U.S.
Courts (AOUSC), neither the Committee nor the Conference keeps written documentation on how the available data were used in assessing judgeship requests; (5) according to AOUSC, the use of such information is inherently judgmental; (6) time devoted to noncase-related travel could affect the time judges have to devote to individual cases; (7) in assessing bankruptcy judges' workload, the Judicial Conference assumes that a bankruptcy judge will spend, on average, about 30 percent of his or her time (about 600 hours, or 75 workdays per year) on noncase-related matters; (8) GAO received information on noncase-related travel from 80 of the 84 authorized judges in the 15 districts that would receive or share one of the judgeships requested in 1997; (9) these 80 judges reported a total of 416 noncase-related trips in 1995 and 403 in 1996, and GAO calculated that they each traveled an average of 12.5 workdays for noncase-related travel in each of these years; (10) about 98 percent of these trips were made to destinations within the United States; (11) together, circuit or district meetings and activities, Judicial Conference meetings and activities, and workshops, seminars, and other activities sponsored by the AOUSC or the Federal Judicial Center accounted for about 66 percent of all trips and 74 percent of all noncase-related travel workdays in 1995; and (12) comparable figures for 1996 were about 67 percent and about 73 percent, respectively.
The FHLBank System and the enterprises are GSEs. Congress created GSEs to help make credit available to certain sectors of the economy, such as housing and agriculture, in which the private market was perceived as not effectively meeting credit needs. GSEs receive benefits from their federal charters that help them fulfill their missions. The federal government’s creation of and continued relationship with GSEs have created the perception in the financial markets that the government would not allow a GSE to default on its obligations, even though the government is not required to intervene. As a result, GSEs can borrow money in the capital markets at lower interest rates than comparably creditworthy private corporations that do not enjoy federal sponsorship, and market discipline is reduced. In fact, during the 1980s, the government did provide limited regulatory and financial relief to Fannie Mae when it experienced significant financial difficulties, and, in 1987, Congress authorized $4 billion to bail out the Farm Credit System, another GSE. Additional background on the FHLBank System, the enterprises, FHFB, OFHEO, and financial risks is presented in appendix II. Our mandate directs us to analyze interest rate, credit, and operations risks. Interest rate risk is a component of what is commonly called market risk. Market risk is the potential for financial losses due to an increase or decrease in the value or price of an asset or liability resulting from broad movements in prices, such as interest rates, commodity prices, stock prices, or the relative value of foreign exchange. Credit risk is the potential for financial loss because of the failure of a borrower or counterparty to perform on an obligation. Credit risk may arise from either an inability or an unwillingness to perform as required by a loan, a bond, an interest rate swap, or any other financial contract.
Operations risk is the potential for unexpected financial losses due to inadequate information systems, operational problems, breaches in internal controls, or fraud. It is associated with problems of accurately processing or settling transactions and with breakdowns in controls and risk limits. Individual operating problems are considered small-probability but potentially high-cost events for well-run firms. Operations risk includes many risks that are not easily quantified, but controlling these risks is crucial to a firm’s successful operation. The FHLBank System is establishing a new capital structure that will include new risk-based and leverage capital requirements and will also make capital more permanent. FHLBank capital will continue to differ from capital issued by publicly traded corporations, however, because of the cooperative nature of the FHLBank System. Additionally, each FHLBank’s capital is potentially available throughout the System, because the FHLBanks are jointly and severally liable for the System’s outstanding debt securities. The unique characteristics of FHLBank capital and the potential for risk taking within the System heighten the importance of supervisory oversight by FHFB. The new capital structure being implemented by the FHLBank System will include risk-based and leverage capital standards. In January 2001, FHFB published a final rule to comply with the provisions of GLBA that required regulations prescribing uniform capital standards applicable to all FHLBanks. These new capital standards, when fully implemented, will replace the current “subscription” capital structure for the FHLBanks. Under the current structure, the amount of capital that each FHLBank issued was determined by a statutory formula that dictated how much FHLBank stock each member had to purchase.
A principal shortcoming of the subscription capital structure was that the amount of capital each FHLBank maintained bore little relation to the risks inherent in the FHLBank’s assets and liabilities. Under the new structure, FHLBanks will be required to maintain longer-term permanent capital and total capital in amounts sufficient to comply with the minimum risk-based and leverage capital requirements established by GLBA. We have consistently supported the concept of risk-based capital standards applied in combination with a leverage ratio that requires a minimum capital-to-asset ratio for the FHLBanks. A risk-based capital standard has a number of benefits. First, it gives the government a mechanism to influence risk taking without involving itself in the FHLBanks’ daily business. Second, it gives FHLBanks’ shareholders an incentive to demand that management not take undue risks, since increased risk taking would impose the additional costs of raising additional capital. Third, it provides a buffer that should be adequate to absorb unforeseen losses to FHLBanks and thus helps prevent or reduce potential taxpayer losses. The new capital structure the FHLBank System is implementing will also result in more permanent capital. After the enactment of GLBA in 1999, membership in the FHLBank System became entirely voluntary. Voluntary members can generally redeem stock with 6 months’ notice. Capital redeemable on such short notice does not provide a cushion against unexpected losses. Therefore, the change to all-voluntary membership increased the need for more permanent capital that could not necessarily be redeemed with 6 months’ notice, and GLBA required implementation of a more permanent capital structure. Under the new capital structure, the FHLBanks are permitted to issue Class A stock, which can be redeemed with 6 months’ notice; Class B stock, which can be redeemed with 5 years’ notice; or both.
To help ensure that capital does not dissipate due to redemption in times of stress, GLBA does not allow a FHLBank to redeem or repurchase capital if, following the redemption, the FHLBank would fail to satisfy any minimum capital requirement. Based on discussions with FHFB officials and their review of draft capital plans, it appears that a majority of FHLBanks might initially implement an exclusive Class B stock structure, while other FHLBanks might implement a mixed structure. The presence of 5-year capital, combined with the requirement that member institutions lose the benefits of membership in the System if they withdraw capital, creates a financial interest that mirrors some, though certainly not all, characteristics of publicly traded perpetual equity stock. Permanent capital is defined in GLBA as amounts paid in for Class B stock plus retained earnings. Class A stock plus permanent capital is to be at least 4 percent of assets. Class A stock plus 1.5 times permanent capital is to be at least 5 percent of assets. Therefore, a FHLBank meeting the 4 percent requirement will also meet the 5 percent requirement if its permanent capital equals at least 2 percent of assets. In addition, only permanent capital is included in the capital definition for the risk-based capital component of the minimum capital standards. Although the new capital structure will result in more permanent capital, FHLBank capital will continue to differ from the capital issued by publicly traded corporations, such as the enterprises or banks. The voluntary, cooperative nature of the FHLBank System means that capital in this system has characteristics different from capital issued by publicly traded corporations. First, FHLBank stock will not be perpetual equity stock like that issued by publicly traded corporations. Stock issued by publicly traded corporations can be bought and sold freely and publicly at a market-determined price.
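Returning to the minimum capital standards described above: meeting the 4 percent requirement implies meeting the 5 percent requirement whenever permanent capital is at least 2 percent of assets, because Class A plus 1.5 times permanent capital equals (Class A plus permanent capital) plus half of permanent capital, i.e., at least 4 percent plus 1 percent. A minimal sketch (the function name and figures are illustrative, not from GLBA or FHFB):

```python
# GLBA leverage requirements as described in the text (fractions of assets):
#   requirement 1: class_a + permanent       >= 4 percent of assets
#   requirement 2: class_a + 1.5 * permanent >= 5 percent of assets
# Helper name and example figures are illustrative, not from GLBA or FHFB.

def meets_leverage_requirements(class_a: float, permanent: float, assets: float) -> bool:
    return (class_a + permanent >= 0.04 * assets
            and class_a + 1.5 * permanent >= 0.05 * assets)

# 4 percent met with permanent capital at 2 percent of assets: both tests pass.
print(meets_leverage_requirements(class_a=2.0, permanent=2.0, assets=100.0))  # True
# 4 percent met, but permanent capital only 1 percent: 3 + 1.5 = 4.5 < 5.
print(meets_leverage_requirements(class_a=3.0, permanent=1.0, assets=100.0))  # False
```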
In contrast, a FHLBank member institution can redeem FHLBank stock at par value as long as all restrictions are met. For example, a member can withdraw capital with prior notice (i.e., of 6 months or 5 years) if, after redemption, the FHLBank satisfies all minimum capital requirements. However, FHLBank member institutions lose the benefits of membership in the System if they withdraw the minimum capital required for membership. This lessens incentives to remove capital if, for example, FHLBank earnings decline. Second, investors cannot be obligated to buy the stock of publicly traded corporations. However, FHLBank members can be required to buy additional FHLBank stock to ensure that the FHLBank meets its capital requirements. Third, corporations with publicly traded stock have responsibilities to maximize the value of their stock. In contrast, FHLBanks have incentives to provide the best mix of services and dividend payments to their member-owners. Under the new capital structure, the capital of each FHLBank will continue to be available to other FHLBanks in the System because the FHLBanks are jointly and severally liable for the System’s outstanding debt securities, called consolidated obligations. Joint and several liability for the payment of consolidated obligations gives investors confidence that System debt will be paid. Another related characteristic of joint and several liability is that it potentially creates a large pool of capital from all FHLBanks to provide a cushion in the event of unexpected System losses. However, joint and several liability also puts all FHLBanks at risk because of the possibility that one FHLBank could become troubled and not be able to meet its debt obligations. In such a situation, the troubled FHLBank would have incentives to undertake risky activities because profits would accrue to the FHLBank’s owners, whereas losses and erosion of capital could fall on others.
This scenario creates incentives for the FHLBanks to monitor each other’s activities, which FHLBank officials told us they do through a number of System-wide bodies of representatives from the 12 FHLBanks. In theory, joint and several liability appears to make most System capital available in the event of large, unexpected losses in the System. However, concerns about how joint and several liability would operate in the event of a default or delinquency on a consolidated obligation prompted FHFB to issue regulations in 1999. The regulations establish a process by which FHFB will look first to the assets of a FHLBank that received the proceeds of the consolidated obligation. The regulations also contain certification and reporting requirements with which the FHLBanks must comply. For example, the FHLBanks must certify before the end of each calendar quarter that they will remain in compliance with the liquidity requirements and will remain capable of making full and timely payments on their consolidated obligations. A FHLBank that is unable to provide the required certification must provide additional notifications to FHFB, such as a payment plan specifying the measures the FHLBank will take to make full and timely payments of all its obligations. The regulations also specify that FHFB may order any FHLBank to make principal and interest payments due on any consolidated obligation in the System. In this case, each contributing FHLBank is entitled to reimbursement from the FHLBank that was responsible for making the payment. Liability is to be allocated among the other FHLBanks on a pro rata basis in proportion to each FHLBank’s participation in all consolidated obligations. Joint and several liability provides incentives for the FHLBanks to monitor each other and appears to make most System capital available in the event of large, unexpected losses in the System. However, joint and several liability in a cooperative system has never been tested.
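The pro rata allocation described above can be sketched in a few lines; if FHFB orders the other FHLBanks to cover a missed payment, each contributing FHLBank's share is proportional to its participation in all consolidated obligations. The bank names and dollar figures below are illustrative, not from FHFB regulations:

```python
# Sketch of pro rata allocation of a payment shortfall among contributing
# FHLBanks, in proportion to each bank's participation in all consolidated
# obligations. Names and amounts are hypothetical.

def allocate_pro_rata(shortfall: float, participation: dict[str, float]) -> dict[str, float]:
    total = sum(participation.values())
    return {bank: shortfall * amount / total for bank, amount in participation.items()}

# Example: a $90 million shortfall spread across three contributing banks
# holding $50M, $30M, and $10M of consolidated-obligation participation.
shares = allocate_pro_rata(90.0, {"Bank A": 50.0, "Bank B": 30.0, "Bank C": 10.0})
print(shares)  # {'Bank A': 50.0, 'Bank B': 30.0, 'Bank C': 10.0}
```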
The FHLBanks have never defaulted on principal or interest payments due on a consolidated obligation. Another cooperative GSE with joint and several liability, the Farm Credit System (FCS), experienced severe economic stress in the mid-1980s. To provide a broader perspective on joint and several liability, we obtained information on the FCS experience during and following its financial rescue by the federal government. Figure 1 describes the collapse and bailout of FCS in the 1980s and the problems encountered in invoking joint and several liability in FCS. FHFB supervisory oversight is a very important aspect of implementing a new capital structure. The extent to which the new structure results in an improvement over the old one depends on how the structure is implemented and on FHFB’s oversight of the process. Many of the details of the new capital structure will be contained in the capital plans the FHLBanks are currently submitting to FHFB. The approach and criteria FHFB will use to review and approve the capital plans are still being determined. We looked at the Basel Committee on Banking Supervision’s New Capital Accord, which is based on three pillars: minimum capital requirements, a supervisory review process, and effective use of market discipline. Although the New Capital Accord is to be applied to banks and their holding companies, its principles can be applied to GSEs as well. However, GSE status reduces market discipline, increasing the importance of supervision. Beyond the FHLBank System’s status as a GSE, the unique characteristics of FHLBank capital and the potential for risk taking within the System heighten the importance of supervisory oversight by FHFB. First, even after the new capital structure is in place, FHLBank capital will be less permanent than perpetual equity stock. Therefore, more so than other regulators, FHFB must be prepared to act in case a FHLBank’s financial condition weakens. 
Second, although joint and several liability creates incentives for the FHLBanks to monitor each other’s activities, the FHLBanks do not have the authority to direct a financially troubled FHLBank to take corrective actions. However, FHFB does have authorities it can use to take enforcement actions in such a situation. We last examined FHFB’s supervisory oversight of the FHLBank System in 1998. We concluded that FHFB’s safety and soundness regulation is increasingly important to protect taxpayer interests due to the System’s expanding activities and the changing business environment. We found deficiencies in FHFB’s oversight of FHLBanks and made a number of recommendations to improve it. FHFB officials told us they have made progress in implementing these recommendations. However, we have not examined FHFB’s supervisory oversight since completing our 1998 report, and therefore we have not verified the completeness of these actions. Expansion in the types of eligible collateral and increased direct mortgage acquisition will increase interest rate, credit, and operations risks in the FHLBank System. Interest rate risk, however, will be unaffected by the new forms of collateral. The overall amount of risk introduced will depend on the type and amount of advances and mortgage acquisitions undertaken by the FHLBanks, the implementation of risk management practices by the FHLBanks, and the oversight provided by FHFB. The new capital structure has the potential to address the risks associated with advances and mortgage acquisitions because of greater capital permanence, leverage capital requirements, and the development of risk-based capital standards. However, capital requirements will not be finalized until FHFB approves capital plans developed by the FHLBanks. GLBA authorizes advances to member community financial institutions that utilize small business and agricultural loan collateral. 
These advances are inherently more risky than traditional advances backed by mortgages and generate credit risk that is more difficult to evaluate. However, the FHLBank financial management policies we reviewed, as reported to FHFB, reflect the recognition that the new collateral will entail greater credit risks than residential mortgage collateral, and the policies call for higher collateral levels compared to traditional advances. The FHLBanks have also begun implementation of direct mortgage acquisitions, with the program begun by the FHLBank of Chicago accounting for a majority of the System’s acquisition activity to date. Based on existing direct mortgage acquisition activity, direct acquisition appears to provide regional diversification of mortgage acquisitions and, because member institutions retain exposure to credit risks, incentives for sound underwriting by those members. However, this activity is relatively new, and its level is expected to grow, thereby increasing risks. In addition, risks could be affected if changes are made in the risk-sharing agreements between the FHLBanks and their member institutions. Increased activity in direct mortgage acquisitions by FHLBanks could also increase competition with the enterprises in the secondary mortgage market. Such increased competition could provide benefits to borrowers but could also generate additional risks for the FHLBanks, the enterprises, depository institutions, and taxpayers. Credit and operations risks for traditional advances utilizing home loan and related types of collateral are relatively low. However, GLBA authorized advances to community financial institutions utilizing small business and agricultural loan collateral that will likely introduce greater credit and operations risk. Interest rate risk will not change, and FHLBanks will continue to manage this risk as they have managed it for traditional advances. 
The FHLBanks have extensive experience in managing their traditional advance business and have developed financial management policies for managing risks, as required by FHFB. In addition, according to FHFB, the FHLBanks typically require collateral worth 10 to 25 percent more than the value of an advance. Largely due to collateral protection and the System’s lien status, FHLBanks have never experienced a credit loss on their advance business. In contrast to their traditional advance business, advances to community financial institutions utilizing small business and agricultural loan collateral are inherently riskier and generate credit risk that is more difficult to evaluate. First, small business and agricultural loans are more heterogeneous than single-family residential mortgage loans. In particular, small business loans finance businesses involved in a wide range of economic activities. Unlike mortgage loans, which have fairly homogeneous characteristics, loans to a wide variety of sectors are more difficult to analyze. In addition, the value of each business is determined largely by the performance of those operating it. In contrast, appraising the value of a housing unit providing collateral for a single-family residential mortgage loan is more straightforward. Operations risk would also increase because the FHLBanks have not fully developed the expertise, information systems, and operational procedures necessary for these new activities. Both FHFB and the FHLBanks recognize that the new collateral will entail greater credit risks than residential mortgage collateral. FHFB requires a FHLBank, prior to accepting the new collateral for the first time, to file a notice demonstrating that the FHLBank has the capacity to manage the risks associated with the new types of collateral to be accepted. According to FHFB, the FHLBanks are requiring collateral worth 65 to 150 percent more than the advance amount when the collateral is loans secured by small businesses or farms. 
Consistent with the stringency of their financial management policies, officials from the FHLBanks told us that they currently anticipate a low level of funding utilizing small business and agricultural collateral. The FHLBanks have tools to manage interest rate risk, and the introduction of the new forms of collateral for advances will not change the way this risk is managed. The principal source of funds for FHLBanks is the consolidated debt obligations of the System. According to FHFB, each FHLBank calculates various measures of its exposure to interest rate risk. One of the measures is duration of equity. This measures the sensitivity of market value of equity to changes in interest rates. FHFB’s financial management policy specifies duration of equity limits, and the FHLBanks are to report the results of their duration of equity calculations to FHFB each quarter. If interest rate risk is well hedged, the market value of equity will change little as interest rates fluctuate. The FHLBanks have lien status in which their rights to the collateral they hold generally have priority over other security interests, including insured deposits, in the assets of failed insured financial institutions. Historically, all advances have been secured with collateral. More recently, FHLBanks have also required collateral to secure member-provided credit enhancements on mortgages FHLBanks acquire directly. By statute, FHLBank security interests generally have priority over the claims and rights of any party, including receivers, conservators, and trustees. This preference can result in increased costs to the Federal Deposit Insurance Corporation (FDIC) in resolving a possible bank or thrift failure. Potential expansion in FHLBank System advances, collateral, and direct mortgage acquisition activities could therefore also increase resolution costs to the FDIC. 
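The duration-of-equity measure described above can be illustrated with a small numeric sketch. The helper function and balance-sheet figures below are hypothetical and deliberately unhedged to make the effect visible; they are not FHFB's actual methodology or any FHLBank's data.

```python
# Illustrative duration-of-equity calculation (all figures hypothetical).
# Duration of equity approximates how the market value of equity responds
# to a parallel interest rate shift: dE ~= -D_E * E * dr.

def duration_of_equity(assets, dur_assets, liabilities, dur_liabilities):
    """Dollar-duration-weighted mismatch divided by market value of equity."""
    equity = assets - liabilities
    return (assets * dur_assets - liabilities * dur_liabilities) / equity

# $100 of assets (duration 3.0) funded by $90 of debt (duration 2.0):
d_e = duration_of_equity(100.0, 3.0, 90.0, 2.0)

# Estimated change in equity value for a +100 basis point rate move:
equity_change = -d_e * (100.0 - 90.0) * 0.01
```

If interest rate risk were well hedged, the asset and liability dollar durations would offset, the duration of equity would be near zero, and the estimated equity change would be small, which is the point the quarterly reporting to FHFB is meant to monitor.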
Interest rate, credit, and operations risks will increase from the direct mortgage acquisition programs implemented by the FHLBanks. Holding mortgage assets exposes FHLBanks to interest rate risk, because the FHLBanks assume the risk for any changes in the market value of the retained mortgage assets. If interest rates increase at a time when new debt has to be issued, borrowing costs will increase while returns from fixed-rate mortgage asset holdings remain constant. Because borrowers tend to prepay and refinance their mortgages when interest rates decline, falling interest rates carry another form of interest rate risk called prepayment risk. To the extent that FHLBanks rely on long-term debt that cannot be refinanced, returns will fall without a corresponding decline in debt costs. The prepayment risk associated with mortgage holdings differs from that associated with advances, because the advances to member institutions carry prepayment penalties. The FHLBanks, however, currently have experience in managing prepayment risk because they have investment holdings of mortgage-backed securities (MBS) and the associated prepayment risk. The FHLBanks and the enterprises tend to use financial instruments such as long-term, callable debt to limit their exposure to interest rate risk from holdings of mortgage assets. FHLBanks use derivatives and callable debt to hedge interest rate risk resulting from direct investments in mortgage assets. To the extent that the duration of mortgage assets differs from that of debt obligations, FHLBanks often enter into a matching interest rate exchange agreement. This agreement is one form of financial derivative called an interest rate swap, in which the counterparty pays cash flows to the FHLBank designed to mirror, in timing and amount, the cash outflows the FHLBank pays on the consolidated obligation. The FHLBanks also use other financial arrangements to manage interest rate risk. 
For example, callable debt allows the FHLBank, as issuer, to buy (i.e., call) back issued debt when interest rates decline. Callable debt is attractive as a source of funds for mortgage asset holdings because borrowers tend to prepay their mortgages and refinance when interest rates decline. FHLBanks had $224.5 billion of callable debt outstanding as of December 31, 2000, out of total consolidated obligations of about $592 billion. Direct mortgage acquisitions expose the FHLBanks to credit risk. To qualify for FHLBank purchase, as is true for purchase by the enterprises, mortgage insurance is required for mortgage loans with loan-to-value ratios of over 80 percent. FHLBank purchases have included conventional mortgage loans with private mortgage insurance as well as mortgage loans with federal guarantees or insurance. The FHLBanks’ credit risk management includes enforcement of lender guidelines for member institutions participating in direct mortgage acquisition. The FHLBanks have specified actions they will take to ensure that member institutions follow these guidelines. For example, the FHLBanks are to collect quality control reports from participating members and perform a quality control review on a sampling of the mortgages purchased from each member. Participating members are also subject to audit by the FHLBank or its designated agents. FHLBank establishment and enforcement of guidelines for participating members help the FHLBanks mitigate credit risk by increasing the degree of assurance that lenders meet fundamental standards for originating and servicing mortgages. The FHLBanks’ credit risk management also includes implementation of lender credit enhancement requirements that subject participating member institutions to credit risk. For example, the FHLBank establishes an account in which payments to member institutions are reduced in the event of mortgage defaults. 
These credit enhancements further help the FHLBanks mitigate credit risk by creating incentives for sound mortgage underwriting and servicing by participating members. The FHLBanks also seek wide geographic distribution of their mortgage acquisitions to limit their exposure to any particular regional economic downturn. Lenders that hold mortgages, including member institutions, use a different infrastructure to manage their credit risk than secondary market entities, such as the enterprises, use. These institutions can benefit from their potentially better understanding of local markets, and thereby of the credit risk associated with the mortgages they fund or the mortgages they sell while retaining credit risk. In addition, institutions that take on credit risk from mortgages they originate do not face the moral hazard problems secondary market entities have when they purchase mortgages and take on the associated credit risks. To address the moral hazard problem, secondary market entities develop infrastructures to oversee the lending and servicing practices of lenders from whom they purchase mortgages. Direct mortgage acquisitions expose the FHLBanks to operations risk because, in the past, the FHLBanks had not developed the expertise, information systems, and operational procedures to approve and oversee lenders. Exposure to operations risk is related to the FHLBanks’ exposure to credit risk, because new operating infrastructure and procedures are necessary to the extent that member exposure to credit risk reduces the moral hazard problem faced by the FHLBanks. If the FHLBanks have little exposure to credit risk and moral hazard, then operations risk will be lower. The actions taken to avoid moral hazard, including the systems used to provide lender oversight, entail operations risk. In contrast, credit and operations risks from traditional advances have been minimal because of collateral requirements. 
As of December 31, 2000, the FHLBanks held slightly over $15 billion in fixed-rate, long-term, single-family mortgages, compared to about $1.4 billion as of year-end 1999. The FHLBank of Chicago held about half of total mortgage loans in the System. The majority of direct acquisition activity to date has been accounted for by the program begun by the FHLBank of Chicago, which is named Mortgage Partnership Finance (MPF). MPF was initiated on a pilot basis beginning in 1997. The 10 FHLBanks of Boston, New York, Pittsburgh, Atlanta, Indianapolis, Chicago, Des Moines, Dallas, Topeka, and San Francisco currently participate in MPF. Although MPF offers multiple products, they share some common characteristics. First, mortgage purchases are limited to mortgage loans below the conforming loan limit for the enterprises, which is currently $275,000 for a single-family housing unit. Second, the FHLBank holds an account with funds generated from transactions between the FHLBank and the member bank. This account takes the first-loss position after primary mortgage insurance payments; that is, costs due to borrower mortgage defaults are taken from this account before other sources of funds are utilized to cover credit losses. The funds are generated by providing the FHLBank a price deduction at time of sale and/or from an annual flow of payments. The latter device is often called a spread account, because it represents a spread between payments due to the member institution from the FHLBank (e.g., to compensate the member for taking on credit risk) and payments actually made by the FHLBank. Third, for some MPF products the member institution is required to supply additional credit enhancements in the form of direct loss guarantees and/or supplemental insurance to provide a second-loss position before the FHLBank is exposed to credit losses. 
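The loss ordering described above can be sketched as a simple waterfall: credit losses remaining after primary mortgage insurance hit the first-loss account, then any second-loss enhancement, and only then the FHLBank. The function and dollar amounts below are hypothetical illustrations, not actual MPF terms or account sizes.

```python
# Hypothetical sketch of the MPF-style loss waterfall described above.
# `loss` is the credit loss remaining after primary mortgage insurance;
# all amounts are illustrative only.

def allocate_losses(loss, first_loss_account, second_loss_enhancement):
    """Return (from_first, from_second, to_fhlbank) for a given loss."""
    from_first = min(loss, first_loss_account)          # spread/price-deduction account
    from_second = min(loss - from_first,                # lender guarantee and/or
                      second_loss_enhancement)          # supplemental insurance
    to_fhlbank = loss - from_first - from_second        # residual borne by the FHLBank
    return from_first, from_second, to_fhlbank

# A $3.5 million loss against a $2 million first-loss account and a
# $1 million second-loss enhancement:
hit = allocate_losses(3.5, 2.0, 1.0)
```

The structure shows why the member is economically at risk even though the first-loss account may not sit on its balance sheet: every dollar drawn from the account is a dollar of credit enhancement fees the member will not receive.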
The loss positions taken by the first-loss account and the second-loss supplemental insurance and lender guarantees are lender-provided credit enhancements. By FHFB regulation, the FHLBank requires the member institution to provide collateral to secure direct loss guarantees provided by the lender. The collateral is protected by the lien status applicable to collateral used to secure advances. The FHLBank of Chicago has two primary means of achieving regional diversification of its credit risk. First, it purchases mortgages from member institutions that are affiliated with large, nationwide lenders, and second, it invests in the mortgage acquisitions (called participations) made by the nine other FHLBanks that participate in MPF. Table 1 presents the geographic distribution of MPF mortgages as of year-end 2000. Based on all MPF mortgage loans to date, it appears that regional diversification has been achieved. According to FHLBank of Chicago officials, MPF serves both large and small FHLBank member institutions. The FHLBanks of Cincinnati, Indianapolis, and Seattle participate in the other direct mortgage acquisition program, which is named the Mortgage Purchase Program (MPP). MPP was initiated near year-end 2000. As of year-end 2000, less than $500 million in mortgage loan holdings were accounted for by the FHLBanks participating in MPP. MPP is in its infancy compared to MPF. Its products share some of the basic characteristics of MPF. A notable difference between MPP and MPF is that, to date, MPP participants have only been larger member institutions. Another difference is that the FHLBank MPP participants generally do not expect to enter into participations with the other MPP FHLBanks, even though the program parameters allow for such participations. 
Without joint participation among the three FHLBanks on individual mortgage pools, geographic diversification of mortgage assets might be limited if small member institutions, which are not diversified geographically, provide a large share of MPP activity. Two major FHFB regulatory requirements limit the risks of MPF and MPP: (1) the member institution is to assume the first-loss position in the transaction as defined by FHFB, and (2) each loan pool is to receive an investment grade rating based on FHFB-approved rating criteria, and loan pools with ratings below AA (i.e., double-A) must be supported by additional retained earnings or reserves. FHFB regulations require member institutions to be in the first-loss position (i.e., after primary mortgage insurance). FHFB uses an economic definition of first-loss position in implementing its regulation. In an accounting sense, it may not be apparent that the member is in a first-loss position, because the account that takes the first loss might not be on the balance sheet of the member institution. However, the member institution is at risk because defaults reduce payments from the first-loss account to the member institution. These payments represent a fee paid to members for assuming credit risk. When losses from defaults occur, the account covers the losses and payments to the member are subsequently reduced. This structure should therefore help provide incentives to member institutions, through the sharing of credit risks, for sound underwriting and loan servicing practices. The second requirement, that each loan pool receive an investment grade rating based on FHFB-approved rating criteria and that loan pools rated below double-A be supported by additional retained earnings or reserves, also limits the risks of MPF and MPP. FHFB has approved the rating criteria contained in the computer package LEVELS, a product of the rating agency Standard & Poor’s. 
To date, participating FHLBanks have required a double-A rating, the second highest rating attainable. LEVELS considers credit risk characteristics for loans in a mortgage pool, such as loan-to-value ratio, mortgage insurance coverage, economic conditions and expected house price changes in the metropolitan area where the residence is located, and borrower credit history. Based on these characteristics, LEVELS calculates the credit support necessary from the first-loss account and, when applicable, supplemental insurance to achieve the double-A rating. Standard & Poor’s officials we interviewed stated that LEVELS provides a comprehensive credit analysis of a mortgage pool. They also told us that LEVELS does not consider some factors that could affect FHLBank risk exposure, such as the capacity of the member institution and the first-loss account to meet continuing obligations. FHFB’s required investment grade rating, especially if participating FHLBanks require a double-A rating from LEVELS, should help to limit the credit risk faced by the FHLBanks based on a thorough credit analysis of each mortgage pool. Participating FHLBanks can further limit credit risk, and thereby improve the performance of their acquired mortgage portfolios above what the LEVELS model predicts, by achieving regional diversification of their portfolios. In addition, LEVELS does not consider factors such as concentrations of FHLBank credit risk with individual member institutions that may have limited capacity to meet their continuing obligations. Due in part to strategies to limit credit risk that can be implemented by participating FHLBanks and to risk factors not considered by LEVELS, capital supervision of direct mortgage acquisitions by FHFB is important to ensure the safety and soundness of the System. 
FHFB published a risk-based capital regulation on January 30, 2001, that, if properly implemented, can establish a capital structure with the potential to address the increased risks of new activities. The capital regulation establishes classes of capital with varying degrees of permanence, leverage requirements, and risk-based capital requirements to be implemented. Each FHLBank is expected to hold capital commensurate with its credit, interest rate, and operations risk. FHFB’s risk-based capital regulation requires credit risk to be calculated using four broad categories based on an evaluation of the credit risk associated with different types of assets and positions. This evaluation is based in part on the loss history of relevant assets with particular ratings and maturities. FHFB directed each FHLBank to develop its own internal risk-based model to estimate interest rate exposures and calculate risk-based capital requirements for interest rate risk. These internal models are to be approved by FHFB in connection with the approval of each FHLBank’s capital plan, which is to be submitted to FHFB by October 29, 2001. The internal models must meet FHFB’s technical restrictions and use interest rate scenarios approved by FHFB. FHFB’s regulation includes a risk-based capital requirement to cover operations risk. FHFB’s minimum leverage requirement establishes two activity-based minimum capital ratios; both ratios must be met. The simplest measure is total capital equal to 4 percent of assets. The second measure is total capital equal to 5 percent of assets when permanent capital is weighted by 1.5 and other capital is weighted by 1. Only permanent capital is included in the capital definition for the risk-based capital component of the minimum capital standards. 
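The two minimum leverage tests described above can be expressed directly: total capital must be at least 4 percent of assets, and weighted capital (permanent capital weighted by 1.5, other capital by 1.0) must be at least 5 percent of assets. The following sketch applies both tests to a hypothetical balance sheet; the figures are illustrative only.

```python
# Sketch of FHFB's two minimum leverage tests (both must be met).
# All balance-sheet figures are hypothetical.

def meets_leverage_requirements(assets, permanent_capital, other_capital):
    """True only if both the 4 percent total-capital test and the
    5 percent weighted-capital test are satisfied."""
    total = permanent_capital + other_capital
    weighted = 1.5 * permanent_capital + 1.0 * other_capital
    return total >= 0.04 * assets and weighted >= 0.05 * assets

# $100 of assets with $2.5 of permanent and $2.0 of other capital:
# total = 4.5 (>= 4.0) and weighted = 5.75 (>= 5.0), so both tests pass.
ok = meets_leverage_requirements(100.0, 2.5, 2.0)
```

Note how the 1.5 weighting rewards permanence: a bank with $1.0 permanent and $3.0 other capital has the same $4.0 total but a weighted measure of only 4.5, failing the second test.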
FHFB’s capital regulation included capital requirements for the credit risk of assets in two categories: advances and rated mortgage assets. According to the published regulation, the credit risk capital requirement for advances was based on the highest estimated (proportional) loss by rating category and maturity class observed over a 2-year period of actual corporate bond data from the interval 1970 to 1999. FHFB also used its judgment to establish capital requirements. FHFB officials told us that the numeric capital requirements are subject to refinement based on FHFB’s ongoing research. FHFB’s risk-based capital requirement for advances assumes little credit risk exists. Although FHLBanks have never incurred credit losses on advances backed by traditional mortgage collateral or securities, FHFB decided to impose capital requirements on advances. The capital requirement on long-term advances is higher than on short-term advances. FHFB used its judgment to set a capital requirement on all advances that includes some credit risk. This capital requirement is intended to reflect the potential credit risks created by new types of collateral. FHFB oversight of collateral policies and other aspects of FHLBank risk management of new collateral will be important, because all advances are included in the same category, and the new collateral entails greater credit risk than traditional advance collateral. Credit risk percentage requirements for residential mortgage assets are based on FHFB’s analysis of residential MBS and their ratings. In developing the capital requirements for mortgage assets, FHFB also took into account the requirements set by other regulators. In general, the risk-based capital requirements for mortgage assets, such as mortgages on both single-family and multifamily units or MBS, vary with the creditworthiness of the assets. 
FHFB’s capital regulation, with its rating-based approach, allows capital requirements to vary based on the credit risk of the mortgage assets. In the case of MPF and MPP, participating FHLBanks have required a credit rating of double-A on each mortgage pool acquired. As stated earlier in this report, the double-A rating is to be based on a thorough credit analysis of each mortgage pool acquired. MPF and MPP assets are expected to become an increasing part of the assets held by the FHLBanks. FHFB directed each FHLBank to create its own internal risk-based model to estimate interest rate risk exposures and calculate risk-based capital requirements for interest rate risk. The exposure to interest rate risk in each model is to depend on the level of stress from interest rate movements taking into account any hedges that affect the actual exposure to interest rate movements. These internal models must meet FHFB’s technical requirements and use interest rate scenarios approved by FHFB. FHFB’s regulation contains a stated preference that the internal models created by the FHLBanks be based on a value at risk approach. Using this approach, the loss is estimated based on several possible interest rate patterns in the future. FHFB must approve the interest rate scenarios used in the internal models and has placed some technical requirements on the models themselves. Each FHLBank is required to have sufficient permanent capital to meet the value at risk level established by FHFB, as well as other capital requirements. FHFB’s regulation requires that the FHLBanks maintain sufficient risk- based capital to cover operations risk, although GLBA did not stipulate such a requirement. 
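The value at risk idea described above can be illustrated with a deliberately simplified scenario-based sketch: losses in market value of equity are computed under a set of interest rate scenarios, and the capital requirement covers the loss at a chosen percentile. The scenario losses and percentile below are hypothetical; actual FHLBank internal models and FHFB-approved scenarios are far more elaborate.

```python
# Simplified scenario-based value-at-risk sketch (illustrative only).
# Each entry is a hypothetical loss in equity market value, in millions,
# under one interest rate scenario, net of any hedges.

def value_at_risk(scenario_losses, percentile=0.99):
    """Loss not exceeded in `percentile` of scenarios (simple ordering)."""
    ordered = sorted(scenario_losses)
    index = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[index]

losses = [0.2, 0.5, 1.1, 0.8, 0.3, 1.4, 0.9, 0.6, 0.4, 1.0]
var_99 = value_at_risk(losses, 0.99)
```

Under the regulation, a FHLBank must hold sufficient permanent capital to cover the loss estimate its approved internal model produces at the level FHFB establishes, in addition to the other capital requirements.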
FHFB’s capital requirement for operations risk is 30 percent of the total capital required to cover interest rate and credit risk, but it may be reduced to no lower than 10 percent if a FHLBank can demonstrate to the satisfaction of FHFB that it has insurance or some other means to justify the reduction. Appendix IV contains further discussion of FHFB’s capital regulation. As alternatives to holding mortgages on their own balance sheets, depository institutions have a number of ways to obtain GSE funding for mortgage assets and thereby transfer some or all of the related risks. How these assets are funded and how the risks are transferred or shared has important implications for regulatory capital treatment at both the depository institution and the GSE. For example, when mortgages along with all the attendant risks are sold outright to a GSE, the only relevant capital requirement would be at the GSE level. Alternatively, when a depository institution purchases an MBS issued by a GSE, there is a capital charge imposed at the depository level that is to reflect the credit risk of GSE obligations, as well as a capital charge at the GSE level. For those funding arrangements in which credit risk is maintained, in whole or in part, at the depository institution level, the capital treatments by the depository institution regulators and the GSE regulators interact. From an integrated perspective, it is important that risks and capital requirements be in proper relation to one another. Otherwise, certain arrangements can be disadvantaged if capital charges are too high or advantaged if they are too low. Supervision is therefore particularly important. Depository institutions that engage in secondary market transactions with GSEs must hold capital based on (1) the amount of GSE obligations in their portfolios, (2) their capital investment in the GSEs, and (3) the risks retained when selling or transferring mortgage assets to a GSE. 
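The operations risk add-on described above is a straightforward percentage of the capital required for interest rate and credit risk, with a 10 percent floor when FHFB approves a reduction. The function and capital amounts below are a hypothetical sketch of that rule, not FHFB's implementing text.

```python
# Sketch of FHFB's operations risk add-on: 30 percent of the capital
# required for interest rate and credit risk, reducible to no lower than
# 10 percent with FHFB-approved insurance or other mitigants.
# Capital amounts are hypothetical.

def operations_risk_capital(credit_and_rate_capital, approved_fraction=0.30):
    """Apply the add-on, enforcing the 10 percent regulatory floor."""
    fraction = max(approved_fraction, 0.10)
    return fraction * credit_and_rate_capital

base = 200.0  # hypothetical capital for credit plus interest rate risk
standard = operations_risk_capital(base)        # default 30 percent add-on
reduced = operations_risk_capital(base, 0.10)   # minimum with mitigants
```

Even a request below the floor (say, 5 percent) would be bounded at 10 percent by the `max` in the sketch, mirroring the "no lower than 10 percent" limit in the regulation.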
The depository regulators we interviewed told us that they generally assign relatively low credit risk weights to depository institution holdings of GSE obligations because they take into account the perception of implied federal backing of GSE obligations. The regulators told us that depository institution holdings of FHLBank debt, enterprise debt, and enterprise MBS are in the 20 percent risk category. Thus, rather than the general requirement of $8 in capital for each $100 of assets in the 100 percent risk category, such as unsecured loan assets, $1.60 of capital is required (that is, 8 percent of $20). Therefore, depository institutions that sold mortgages in the secondary market and purchased an equivalent amount of GSE-backed MBS would lower their credit risk and their capital requirements. In fact, the combined capital requirement, including the capital requirement at the GSE level, would be lower, possibly reflecting the GSEs’ ability to reduce overall credit risk through geographic diversification. Currently, depository institutions are required to hold $4 in capital for each $100 in mortgage loan holdings and $1.60 of capital for each $100 in enterprise MBS holdings, and the enterprises are required to hold capital equal to 0.45 percent of MBS issued and held by outside investors. Thus, the transfer can result in $2.05 of total capital required rather than the $4 of capital required without the transfer of assets. The depository institution regulators have also established capital requirements for the risk associated with depository institution investments in GSE equity. The regulators told us that currently FHLBank capital is in the 20 percent risk category, although they are actively reviewing this capital treatment and considering the new capital structure being established for the FHLBank System. 
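The capital arithmetic described above can be worked through explicitly. The sketch below reproduces the report's own figures: $4 per $100 of whole mortgage loans at the depository (a 50 percent risk weight against the 8 percent base ratio), versus $1.60 per $100 of GSE MBS at the depository plus $0.45 at the enterprise, for $2.05 combined. The helper function is an illustrative simplification of the risk-weight framework, not any regulator's rule text.

```python
# Worked version of the capital comparison described above (illustrative).
# Depository risk-based capital = exposure * risk weight * 8 percent.

def depository_capital(exposure, risk_weight, base_ratio=0.08):
    """Risk-based capital charge at the depository institution."""
    return exposure * risk_weight * base_ratio

exposure = 100.0

# Holding whole mortgage loans: 50 percent risk weight -> $4 of capital.
capital_whole_loans = depository_capital(exposure, 0.50)

# Selling the loans and holding GSE MBS instead:
capital_mbs = depository_capital(exposure, 0.20)   # 20 percent risk weight
capital_enterprise = exposure * 0.0045             # 0.45 percent at the GSE
combined = capital_mbs + capital_enterprise        # $2.05 in total
```

The gap ($4.00 versus $2.05) is what the report attributes, in part, to the GSEs' ability to reduce overall credit risk through geographic diversification.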
In addition, they told us that enterprise equity is generally in the 100 percent risk category; the one exception is the Office of the Comptroller of the Currency, the regulator of national banks, which places enterprise equity in the 20 percent risk weight category. The regulators stated that their supervision activities address concentrations of pledged assets and risks at individual depository institutions that could result from heavy reliance on FHLBanks as a funding source. The depository institution regulators have provided guidance on the risk-based capital treatment of only one MPF program, MPF 100. Under this program, the member institution acts as agent for the FHLBank, underwriting, servicing, and providing a credit enhancement for residential mortgage pools. The member receives fees for the credit enhancement that it provides. The FHLBank provides a first-dollar loss protection cushion equal to 100 basis points of the total mortgage pool’s unpaid balance. As the FHLBank incurs credit losses allocable to this protection, the credit enhancement fees paid by the FHLBank to the member are reduced. However, the credit enhancement fees are not recorded on the balance sheet of the member institution until received. The second-loss credit enhancement provided by the member institution is sized so that the senior piece held by the FHLBank has credit quality equivalent to a double-A rating. The depository institution regulators determined that because the expected receipt of the credit enhancement fees by the member institution is not a balance sheet asset, and because the member institution is under no obligation to pay anything to the FHLBank, there is no risk of loss to the member’s capital. The only consequence to the member institution in the case of credit losses is the receipt of a lower level of credit enhancement fees.
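The MPF 100 loss-sharing structure described above can be sketched as a simple loss waterfall. All dollar amounts below are invented for illustration; the only program parameter taken from the text is the 100-basis-point first-loss cushion.

```python
# Hypothetical sketch of the MPF 100 loss allocation described in the text.
# Dollar amounts are invented; only the 100-basis-point cushion comes from
# the program description above.

def allocate_losses(pool_losses, pool_balance, member_enhancement,
                    first_loss_bps=100):
    """Allocate pool credit losses across three layers: the FHLBank's
    first-loss cushion (recouped by reducing the credit enhancement fees
    paid to the member), the member's second-loss credit enhancement, and
    the senior position retained by the FHLBank."""
    cushion = pool_balance * first_loss_bps / 10_000
    first = min(pool_losses, cushion)
    second = min(pool_losses - first, member_enhancement)
    senior = pool_losses - first - second
    return first, second, senior

# A $100 million pool with a $3 million member enhancement and $5 million
# of credit losses: the cushion absorbs $1 million, the member's layer
# absorbs $3 million, and the FHLBank bears the remaining $1 million.
first, second, senior = allocate_losses(5_000_000, 100_000_000, 3_000_000)
```

Because losses hitting the cushion are recouped through reduced fees to the member, the accounting first loss and the economic first loss fall on different parties, which is the crux of the differing regulatory interpretations discussed in the text.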
Because expected credit losses would not affect the member’s balance sheet, the depository institution regulators determined that the FHLBank is in the first-loss position. They determined that the member institution’s capital requirement would be based on the face value of the second-loss credit enhancement. In contrast, the FHFB analysis of the credit enhancement structure of MPF 100 leads to a different result. According to the FHFB analysis, because the member institution’s credit enhancement fees are reduced if the FHLBank incurs losses from the first-loss cushion due to mortgage defaults, the member’s fees are contingent upon the performance of the mortgage pools. FHFB determined that because the member bears the economic responsibility of the expected credit losses from the first dollar of loss, the member is effectively in the first-loss position. While the depository institution regulators have provided guidance on one particular mortgage participation product, they have yet to opine on others. It is also likely that new products could arise with various combinations of credit risk sharing arrangements. The depository institution regulators have issued proposed regulations that would change the risk-based capital treatment of credit enhancements. The rules currently in effect provide for differing capital treatment for credit enhancements that have the same economic effect, depending on whether the credit enhancement is retained in a sale of assets or acquired in some other way. The regulators have proposed a more consistent treatment of economically equivalent credit enhancements. The cost of regulatory capital associated with credit enhancements could change based on the content of final regulations. During the course of this assignment, enterprise officials we interviewed raised questions about the adequacy of the capital structure of the FHLBank System as it relates to the risks posed by the direct acquisition of mortgages. 
In particular, it was suggested that if one views the FHLBank System, including its membership, as if it were a holding company, then the System, in certain cases, could be viewed as engaging in “double leveraging.” According to this view, the member financial institutions use their own capital to directly support their own activities but finance their purchases of FHLBank stock with deposits, debt, or other instruments not acceptable as regulatory capital. Figure 2 provides more information on double leveraging. Enterprise officials told us that FHLBank capital is not adequate to support the risks of direct mortgage acquisition because the debt issued to finance the investment is a liability on the balance sheet of the FHLBank, the investment is an asset on the balance sheet of the FHLBank, and the capital of the FHLBank consists of noncapital proceeds downstreamed from member institutions. This approach appears to be an analogy based on accounting flows resulting from on-balance sheet investments by the FHLBanks. Based on our analysis, there appear to be countervailing factors that lessen the applicability of the analogy as a way of analyzing the ability of capital to address the risks of FHLBank mortgage acquisitions. First, the approach focuses on leverage directly rather than on the relationship between capital and risks. The present risk-sharing arrangements, which include the requirement that the member institution be in the first-loss position, limit credit risk to the FHLBank. At the member institution level, depository institution regulators rely on supervisory tools to limit exposure to potential risks resulting from FHLBank mortgage acquisitions. As a second countervailing factor, MPF and MPP are not the only secondary mortgage market programs that reduce total capital requirements.
When the enterprises purchase mortgages from depository institutions, capital requirements for the depository institutions are reduced without a corresponding increase in enterprise capital requirements. As stated above, depository institution regulators generally assign relatively low credit risk weights to depository institution holdings of GSE obligations. One reason why the capital requirements at the enterprise level are lower is that the enterprises can reduce credit risk through geographic diversification of their mortgage servicing portfolios. However, the relationship between the credit risk reduction from geographic diversification and the reduction in total capital required has not been established. As in the case of FHLBank mortgage acquisitions, depository institution regulators rely on regulatory oversight. As a third countervailing factor, even if capital should be consolidated between FHLBanks and member institutions in some manner, the holding company analogy is insufficient as a method of analysis. Consolidation of balance sheets has the most merit when a parent holding company funds a closely controlled subsidiary with instruments not acceptable as regulatory capital and in turn uses the subsidiary as an investment vehicle. In the case of the FHLBank System, no parent holding company exists. The FHLBank System is a cooperative in which member institutions provide System capital, but no one member appears to hold a controlling interest in the corporate governance decisions of any one FHLBank. In addition, joint and several liability, combined with all-voluntary membership, motivates the FHLBanks to monitor each other’s financial activities. While we have treated the concept of double leveraging as a distinct issue, the more fundamental concern raised by the enterprises appears to be associated with the nature of FHLBank capital.
While we agree that the capital is not perpetual equity capital, it will become more permanent. However, we have not addressed the issue of whether 5-year capital combined with statutory and regulatory restrictions on withdrawal of capital will result in the optimal level of permanence. MPF and MPP, while structured differently from the secondary market products offered by the enterprises, can generate increased competition in the secondary mortgage market. In a 1996 report, we addressed the implications of authorizing another GSE to compete with the enterprises. In that report, we assumed that the newly authorized GSE would have a similar charter and be subject to the same regulatory requirements to compete with the enterprises. Therefore, the GSE would also operate in a similar manner to the enterprises. We indicated that such authorization could increase the overall amount of GSE activity in the mortgage market and, as a result, raise the potential amount at risk in case of a government bailout; increase the level of GSE risk, because entities operating in new markets often have greater managerial and operations risk than those operating in established markets; increase credit risk if the new entity attempted to establish market share by lowering underwriting standards; and increase competition and thereby reduce mortgage interest rates to borrowers. Risks in the FHLBank System will increase from its direct mortgage acquisition activity. The acquisition activity could also generate benefits to borrowers and potential risks for the enterprises. The degree to which increased competition could affect risk-taking by the FHLBanks and the enterprises is among the unknowns in this competitive process. However, such developments also create potential risks for taxpayers and therefore challenges for both FHFB and OFHEO.
The introduction of the mortgage acquisition programs by the FHLBank System has implications for competition between the System and the enterprises, as well as for the regulatory oversight of both. The mortgage acquisition programs of the FHLBank System increase competition between the System and the enterprises. In past reports we have recommended, and we still support, combining the GSE regulators into one agency and authorizing the agency to oversee both the safety and soundness and mission compliance of the FHLBanks, Fannie Mae, and Freddie Mac. We have pointed out the advantages of combining oversight responsibilities in one agency. Such an agency could be more independent and objective than the separate regulatory bodies and could be more prominent than either one alone. Although the GSEs operate differently, the risks they manage and their missions are similar. The regulators’ expertise in evaluating GSE risk management could be shared more easily within one agency. In addition, a single regulator would be better positioned to be cognizant of specific mission requirements, such as special housing goals and new programs or initiatives any of the GSEs might undertake; it should also be better able to assess the competitive effect on all three housing GSEs and better ensure consistency of regulation for GSEs that operate in similar markets. Having all staff in one regulatory agency should also facilitate coordination and sharing of expertise among staff responsible for safety and soundness and mission compliance. Given the introduction of mortgage acquisition programs by the FHLBanks, the ability of a single regulator to assess competitive effects among the three housing GSEs and to ensure consistency of regulation for the housing GSEs becomes relatively more important.
FHFB and OFHEO risk-based capital regulations are meant to ensure that the FHLBanks and enterprises maintain sufficient capital to weather stressful economic conditions and address credit, interest rate, and operations risks. However, we are unable to assess the relative stringency of each regulator’s approach to risk-based capital, for two reasons. First, the final specifications of the risk models for both OFHEO and FHFB are not yet available. Second, even if the final specifications were available, differences in the assets and liabilities held by the FHLBanks and the enterprises create different risk patterns. These differences, in turn, led to different modeling approaches, making comparisons difficult. Although we cannot provide an overall assessment of the stringency of each regulator’s approach, we can compare certain attributes of the modeling approaches and their strategies and procedures for estimating credit, interest rate, and operations risk. We also provide a comparison of the effects of the leverage requirement on the FHLBanks and the enterprises. GLBA gave FHFB discretion to establish credit and interest rate scenarios to be covered by permanent capital. In implementing GLBA, FHFB decided to require FHLBanks to hold capital for operations risk. The amount of permanent capital required under the risk-based capital regulation is the sum of capital for credit risk, interest rate risk, and operations risk. Figure 3 is a simplified illustration of FHFB’s approach to risk modeling and calculating capital. The Federal Housing Enterprises Financial Safety and Soundness Act of 1992 (the 1992 act) established OFHEO as an independent regulator within the Department of Housing and Urban Development (HUD). OFHEO’s mission is to ensure the enterprises’ safety and soundness. The 1992 act also authorized OFHEO to develop a risk-based capital regulation that addresses credit, interest rate and operations risks. 
OFHEO began developing its regulation upon its creation in 1993. OFHEO has developed its own cash flow model to estimate risks and calculate the total capital needed to cover credit and interest rate risk. The 1992 act specified the stresses that the model must address. The risk-based capital regulation also requires capital for operations risk. For risk-based capital, total capital is the sum of a general allowance for foreclosure losses, common stock, perpetual noncumulative preferred stock, paid-in capital, and retained earnings. OFHEO did its own modeling of the risks for both enterprises so that the enterprises would face identical analytical measures of their risks based on their own assets, liabilities, and off-balance sheet positions. However, the model this approach uses does not reflect any business strategies that are unique to either enterprise. Figure 4 is a simplified illustration of OFHEO’s approach to risk modeling and risk-based capital calculation. OFHEO runs a single model in which the capital calculations for credit risk and interest rate risk are based on the model’s estimates of how much capital each enterprise needs. This approach ensures that both credit risk, which is based on benchmark losses, and interest rate risk are integrated in a cash flow model. Appendix IV provides a more detailed description of FHFB’s and OFHEO’s risk-based capital requirements. Generally, FHFB has not directly modeled risks in its risk-based capital regulation. For credit risk, FHFB has depended on data on historic losses, the loss history of relevant assets with particular ratings and maturities, and its own judgments to determine appropriate levels of risk-based capital. For interest rate risk, FHFB decided to establish a framework that each FHLBank must adhere to when it models its own interest rate risk. This approach made it possible for FHFB to publish its regulation within 15 months of GLBA.
However, we have not been able to evaluate the interest rate risk models that have yet to be developed by each FHLBank and subsequently approved by FHFB. In contrast, OFHEO used a complex modeling approach to determine risks and calculate required capital. This approach permitted OFHEO to fine-tune feedbacks between interest rate risk and credit risk and explicitly model the factors that created losses associated with particular assets. However, this approach was difficult to implement and created delays in the actual implementation of risk-based capital regulations for the enterprises. Under the 1992 act, Congress set criteria for OFHEO to use in establishing the stress test for credit, interest rate, and operations risk in risk-based capital regulation. In contrast, GLBA required FHFB to create risk-based capital requirements for the FHLBanks taking due consideration of any risk-based capital test established by OFHEO pursuant to the 1992 act. GLBA allowed FHFB to choose the economic scenarios used in modeling credit and interest rate risks. On its own initiative, FHFB added operations risk to its version of risk-based capital regulation. FHFB developed capital calculations based on balance sheet data, the market value of the portfolio for interest rate risk, and expected losses for credit risk. OFHEO developed capital calculations that begin with initial balance sheet positions but then use a 10-year cash flow stress test based on specified interest rate scenarios and credit stresses over the 10-year period. In the 1992 act, OFHEO was directed to run its model assuming that no new business would occur during the 10-year stress period except for already committed business of the enterprises. Therefore, enterprise assets, liabilities, and off-balance sheet positions decline over time in OFHEO’s model.
FHFB’s balance sheet approach estimates the market value of the FHLBank’s portfolio at risk under the financial stress scenarios and thus does not require an assumption about new business. FHFB’s test is to be applied monthly, while OFHEO’s test is to be applied quarterly. FHFB and OFHEO have different strategies for calculating the capital needed to cover risks. FHFB requires that the FHLBanks calculate the capital needed to cover credit risk and interest rate risk separately. OFHEO jointly calculates capital needed for credit risk and interest rate risk. FHFB stated that in periods of stress, a positive correlation exists between interest rate risk and credit risk. Given this positive correlation, FHFB stated that a separate calculation of interest rate risk and credit risk is a conservative approach to calculating required capital. In contrast, OFHEO officials stated that their single calculation of the capital needed to cover credit and interest rate risk permits the model to deal with real-world feedbacks between interest rate movements and credit losses. FHFB and OFHEO also calculate capital required for operations risk based on the amount of capital required for credit and interest rate risk, although FHFB may reduce the amount required if a FHLBank demonstrates that it qualifies for a lower requirement. FHFB’s and OFHEO’s actual procedures for estimating credit stresses and calculating the capital required to cover credit risk differ. FHFB uses asset and position credit risk categories and assigns credit risk capital requirements for assets and positions in each category. In making these determinations, FHFB uses its own judgment and available information on factors such as default losses, credit ratings, and capital regulations for other regulated firms. For mortgage assets acquired from members with credit risk-sharing arrangements, FHFB depends on the results of a model from a credit rating agency to estimate and limit credit risk.
In contrast, OFHEO uses a more granular approach based on detailed econometric modeling. This approach allows the agency to address the effects of numerous variables on credit losses directly in its own model. FHFB’s and OFHEO’s approaches to calculating the capital required to cover interest rate risk differ. FHFB uses a value-at-risk model that estimates changes in the value of capital based on hundreds of historical interest rate scenarios that represent possible stresses on the FHLBanks. The scenarios are to be applied to each FHLBank’s balance sheet and should represent periods of significant economic stress. The interest rate scenarios are based on actual interest rate changes during periods that last 120 business days and cover historical interest rate movements since 1978. The test requires each FHLBank to hold capital sufficient to cover all but the worst 1 percent of potential losses. In contrast, OFHEO uses a 10-year cash flow model and two interest rate scenarios: one for rising rates and the other for falling rates. In each OFHEO interest rate scenario, the interest rate adjusts during the first year and then remains at the new level for the remainder of the 10-year period. According to OFHEO officials, both interest rate changes are greater than what has been observed historically over any 1-year period. The amount of capital required to cover interest rate risk is the amount needed to cover the worse of the two mandated interest rate scenarios. Although GLBA did not require FHFB to establish a risk-based capital requirement to cover operations risk, FHFB decided such a requirement was needed. FHFB’s capital requirement for operations risk is 30 percent of the total capital required to cover interest rate and credit risk but may be reduced to no lower than 10 percent if a FHLBank can demonstrate to the satisfaction of FHFB that it has insurance or some other means to justify the reduction.
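The two FHFB charges described above, the value-at-risk style interest rate charge (capital covering all but the worst 1 percent of scenario losses) and the operations risk add-on (30 percent of credit plus interest rate capital, reducible to no lower than 10 percent), can be sketched in stylized form. This is not FHFB’s actual methodology: the scenario losses are invented, and the percentile calculation is a simplified stand-in for the regulation’s test.

```python
# Stylized sketch of FHFB's scenario-based interest rate charge and
# operations risk add-on. Only the 99 percent coverage level and the
# 30/10 percent operations risk figures come from the text; the scenario
# losses and credit capital amount are invented.

def interest_rate_capital(scenario_losses, coverage=0.99):
    """Simplified percentile: the loss level that covers all but the
    worst (1 - coverage) share of scenarios."""
    ordered = sorted(scenario_losses)
    return ordered[round(coverage * len(ordered)) - 1]

def operations_risk_capital(credit_capital, rate_risk_capital, share=0.30):
    """Operations risk charge; the share is clamped at the 10 percent
    floor that applies even with an approved reduction."""
    return max(share, 0.10) * (credit_capital + rate_risk_capital)

# 200 hypothetical scenario losses, $0.5M through $100M in $0.5M steps:
losses = [i * 0.5 for i in range(1, 201)]
ir_capital = interest_rate_capital(losses)   # covers all but the worst 2 of 200
credit_capital = 12.0                        # assumed credit risk charge
total = credit_capital + ir_capital + operations_risk_capital(
    credit_capital, ir_capital)
```

The sum of the three charges mirrors the structure of FHFB’s permanent capital requirement as described earlier: credit risk plus interest rate risk plus an operations risk percentage of the two.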
In contrast, the 1992 act that directed OFHEO to establish risk-based capital requirements for operations risk specified that capital for operations risk be equal to 30 percent of the total capital required for credit and interest rate risks. Minimum leverage requirements establish minimum capital levels a firm must hold irrespective of the level of risk it assumes. The leverage ratios required by statute differ for the FHLBanks and enterprises. The minimum leverage ratio for FHLBanks is measured in two ways; both ratios must be met. The simplest measure sets total capital at 4 percent of assets. The second measure sets total capital at 5 percent of assets, with permanent capital weighted by 1.5 and other capital weighted by 1. For the enterprises, the minimum leverage requirement is based on both the on- balance sheet and off-balance sheet positions. Off-balance sheet positions are generally guaranteed mortgage-backed securities held by investors but managed by the enterprises. Thus, the OFHEO rule includes more than just the assets held by the enterprises. The required leverage ratio for on- balance sheet assets is 250 basis points (2.5 percent), while the ratio for off-balance sheet positions is generally 45 basis points (.45 percent). FHFB and OFHEO also define capital for the leverage ratios differently. OFHEO uses core capital in the minimum leverage requirement. Core capital is the sum of outstanding common stock, outstanding perpetual noncumulative preferred stock, paid-in capital, and retained earnings. FHFB’s total capital for the leverage ratio includes shorter-term Class A stock, longer-term Class B stock, and retained earnings. FHFB’s alternative 5-percent leverage ratio reflects the longer-term nature of Class B stock and retained earnings by valuing Class B stock and retained earnings at 150 percent of par value when calculating capital for the 5- percent leverage ratio. 
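The two sets of minimum leverage requirements described above can be compared with a short sketch. The percentages and the 1.5 weighting on permanent capital are those stated in the text; the balance sheet figures in the example are invented, and the function names are hypothetical.

```python
# Sketch of the minimum leverage tests described in the text. Percentages
# and the 1.5 permanent-capital weight come from the text; the example
# balance sheet figures are invented.

def fhlbank_meets_leverage(assets, permanent_capital, other_capital):
    """Both FHLBank tests must pass: total capital of at least 4 percent
    of assets, and weighted capital (permanent x 1.5, other x 1.0) of at
    least 5 percent of assets."""
    total = permanent_capital + other_capital
    weighted = 1.5 * permanent_capital + other_capital
    return total >= 0.04 * assets and weighted >= 0.05 * assets

def enterprise_minimum_capital(on_balance_assets, off_balance_positions):
    """Enterprise minimum leverage: 250 basis points of on-balance sheet
    assets plus 45 basis points of off-balance sheet positions."""
    return 0.025 * on_balance_assets + 0.0045 * off_balance_positions

# A FHLBank with $100 billion in assets, $3.5 billion of permanent capital,
# and $0.6 billion of other capital passes both tests:
# total $4.1B >= $4.0B, and weighted $5.85B >= $5.0B.
passes = fhlbank_meets_leverage(100e9, 3.5e9, 0.6e9)
```

Because permanent capital carries the 1.5 weight, a FHLBank can satisfy the 5-percent test with less total capital when more of its capital is permanent, which is consistent with the valuation of Class B stock and retained earnings at 150 percent of par described above.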
To the extent that FHLBanks develop a capital structure based on Class B stock, they will be using more permanent capital. In contrast, enterprise capital is never redeemable. FHFB officials said they anticipate that when the capital plans are implemented, the risk-based capital requirement for all FHLBanks will be below the minimum leverage requirements under GLBA. This will be the case, in part, because FHLBanks are expected to establish an exclusive Class B or a mixed Class A and B capital structure. FHFB officials told us that based on seven draft capital plans submitted to FHFB, six of the FHLBanks indicated that they expect to establish an exclusively Class B structure initially because of the adverse tax consequences associated with a multiple class structure. However, three of these FHLBanks indicated that they anticipate issuing Class A stock in the future. Over time, issuing Class A stock and increasing mortgage acquisitions could cause a FHLBank’s risk-based capital requirement to exceed its leverage requirement. However, FHFB’s risk-based capital requirement is unlikely to constrain operations initially, given the current business of the FHLBanks. OFHEO’s risk-based capital requirement may limit the enterprises more than the leverage requirement does. In the Second Notice of Proposed Rulemaking, OFHEO estimated that Fannie Mae would not have had sufficient capital to meet its risk-based capital requirement on either September 30, 1996, or June 30, 1997, although Freddie Mac would have been in compliance with its risk-based capital requirement. However, both Fannie Mae and Freddie Mac had sufficient capital to meet the leverage requirement. In FHFB’s risk-based capital regulation, the capital structure plan of each FHLBank is to specify the date on which the plan shall take effect and may provide for a transition period of up to 3 years to allow the FHLBank to come into compliance.
During the transition period, the FHLBanks are expected to remain in compliance with the preexisting leverage-based requirement. FHFB officials told us that the implementation of the risk-based capital requirements depends on the submission of capital plans, including internal models for interest rate risk, from all FHLBanks by October 29, 2001. In addition, FHFB must approve the plans, including any transition plans needed to ensure that the FHLBanks attain compliance with risk-based capital requirements. For the enterprises, the risk-based requirement becomes effective when the final rule is published in the Federal Register and can be enforced 1 year after it is published. The rule for the capital requirement was cleared by the Office of Management and Budget on July 16, 2001. The FHLBank System is currently establishing a new capital structure that, if properly implemented, is likely to be an improvement over the historic structure. Capital will become more permanent, and new risk-based and leverage capital requirements will also be implemented. The new capital structure has the potential to address the risks associated with advances as well as the direct acquisition of mortgages. However, it is too early to assess the overall adequacy of the structure, because the capital plans and risk management practices to be implemented by the FHLBanks and the capital supervision practices to be followed by FHFB are not yet known. Based on activity to date, direct acquisition appears to provide regional diversification of mortgage acquisitions and incentives to member institutions for sound mortgage underwriting and servicing through the sharing of credit risks. However, risks could be affected if changes are made in the level of mortgage acquisition activity and in the risk-sharing agreements currently in place between the FHLBanks and their member institutions.
Such changes might also increase the importance of risk-based capital requirements compared to FHFB leverage requirements. Going forward, risks in the FHLBank System will increase due to expanded collateral provisions in GLBA and direct mortgage acquisition activity. Effective mitigation of that risk will depend on risk management by the FHLBanks, the adequacy of the capital structure, and oversight by FHFB. In addition to the FHLBanks, the acquisition activity could also generate additional risks for the enterprises. Although currently the FHLBank System and the enterprises primarily engage in different business activities, these differences may decrease if direct mortgage acquisition activity grows dramatically. Having one housing GSE regulator for safety and soundness and mission compliance would provide greater independence and objectivity, greater prominence, improved ability to assess the competitive impact of new initiatives on all housing GSEs, and improved ability to ensure consistency of regulation of GSEs that operate in similar markets. This report does not contain any new recommendations. The Chairman of FHFB provided written comments on a draft of this report, and these comments are reprinted in appendix V. FHFB and OFHEO provided technical comments on a draft of this report. The FHLBanks, enterprises, and depository institution regulators also provided technical comments on draft excerpts of this report that we shared with them. We incorporated technical comments into this report where appropriate. The Chairman of FHFB stated that we did a commendable job of analyzing important and complex FHLBank System issues. His letter drew attention to some of our findings related to the potential of the new capital structure for the FHLBanks to address risks and the MPP and MPF programs. His letter also stated that our past recommendations, with regard to regulatory oversight, have been well received with many having been implemented. 
FHLBank of Chicago officials wanted us to characterize the MPF first-loss account as an account established by the FHLBank, rather than as a lender-provided credit enhancement. Our characterization is based on the FHFB requirement that the member institution bear the economic cost of expected credit losses. For example, the MPF arrangement in which the FHLBank is reimbursed by the member institution when defaults occur through the reduction of fees paid to the member is a mechanism in which the lender’s credit enhancement is used to improve the rating of the mortgage pool acquired by the FHLBank. A Freddie Mac official provided comments addressing Freddie Mac’s concern about “double leveraging.” He stated that in addition to the risks posed by the direct acquisition of mortgages, Freddie Mac also has a broader concern that relates to the overall fragility of the FHLBank System. He stated that the risk of member institutions withdrawing their capital in response to FHLBank losses is a direct result of the nonpermanent nature of FHLBank System capital stock even after the GLBA reforms. He specifically referred to the potential for a run on the FHLBank System if member institutions had advance knowledge of potential future financial losses. We have addressed the question of capital adequacy directly by analyzing the relationship between capital and risks. We have treated the concept of double leveraging as a separate issue. In our discussion of the double leveraging concept, we made revisions to reflect the concern about the nature of FHLBank capital. We will send copies of this report to the Chairman of the Board of FHFB, the Director of OFHEO, the Presidents of the FHLBanks, the Chief Executive Officer of Fannie Mae, and the Chief Executive Officer of Freddie Mac. We will also make copies available to others upon request. Please contact me or William B. Shear at (202) 512-8678 if you or your staff have any questions concerning this report.
Key contributors to this report were Rachel DeMarcus, Kristi A. Peterson, and Mitchell B. Rachlis. To describe the capital structure of the Federal Home Loan Bank (FHLBank) System, we reviewed Federal Housing Finance Board (FHFB) capital standards and regulations; conducted research on the role of capital in government-sponsored enterprises (GSEs) with cooperative structures; reviewed our prior work addressing risk-based capital and the FHLBank System; and interviewed officials of financial institution regulatory bodies and the GSEs. To analyze the adequacy of the capital structure of the FHLBanks, we also reviewed relevant literature on interest rate, credit, and operations risks; analyzed FHLBank proposals for the use of expanded collateral provisions and permissible uses of advances under the Gramm-Leach-Bliley Act (GLBA) of 1999; and analyzed FHLBank applications to FHFB and other information on FHLBank direct mortgage acquisition programs. During the course of this assignment, officials from Fannie Mae and Freddie Mac made presentations to us and provided extensive information reflecting their perspectives on the adequacy of the capital structure of the FHLBank System. On May 17, 2001, Freddie Mac provided us a consultant’s report addressing the adequacy of the capital structure of the FHLBank System. We considered the information provided by the enterprises in conducting our work. We analyzed information the FHLBanks considered to be proprietary. Therefore, we did not report specific details of the various FHLBank products. For example, due to this limitation, we did not report data on the Mortgage Purchase Program and provided only general information on Mortgage Partnership Finance.
To compare and contrast the risk-based capital standards proposed by FHFB to the standard proposed by OFHEO, we analyzed the standards; reviewed information provided by and interviewed officials from the enterprises, the FHLBanks, FHFB, and OFHEO; and reviewed comments on the proposed standards. The FHLBanks have yet to complete their capital plans implementing their new capital structures, which limited the scope of our analysis. In addition, although we made observations of some elements of risk management that appear to be present at the FHLBanks, we did not analyze the risk management procedures employed by the FHLBanks, FHFB's oversight of risk management, or the risks associated with FHLBank investments. Furthermore, we did not verify the accuracy of data provided by FHFB and the FHLBanks. We also did not analyze the risks of activities that have been or might be undertaken by either Fannie Mae or Freddie Mac. We conducted our work in Washington, D.C., between February 2001 and June 2001, in accordance with generally accepted government auditing standards. Written comments on a draft of this report from FHFB appear in appendix V. We also obtained technical comments from the FHLBanks, the enterprises, depository institution regulators, FHFB, and OFHEO that have been incorporated where appropriate. The FHLBank System is a GSE consisting of 12 federally chartered FHLBanks and the System's Office of Finance that are privately and cooperatively owned by member institutions. The FHLBanks are located in Boston, MA; New York, NY; Pittsburgh, PA; Atlanta, GA; Cincinnati, OH; Indianapolis, IN; Chicago, IL; Des Moines, IA; Dallas, TX; Topeka, KS; San Francisco, CA; and Seattle, WA; with each FHLBank serving a defined geographic region of the country. The FHLBanks raise funds by issuing consolidated debt securities in the capital markets.
The System was set up in 1932 to extend mortgage credit by making loans, called advances, to its member institutions, which in turn make mortgage loans to home buyers. Advances are secured by home mortgage loans and other collateral. These advances help member institutions, originally limited to thrifts and insurance companies, by enhancing liquidity and providing access to national capital markets. In 1989, as part of the Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA), Congress opened membership to nonthrift federally insured depository institutions that offer residential mortgage loans. Thrifts with federal charters remained in the System as mandatory members, while nonthrift institutions were voluntary members. GLBA made membership voluntary for all institutions and expanded the purposes of System advances, with a corresponding expansion in eligible collateral for community financial institutions. As of December 31, 2000, the FHLBanks held about $438 billion in advances to members; $186 billion in investments; $16 billion in directly acquired mortgage assets; and $31 billion in capital, of which $728 million was in the form of retained earnings. In addition, the System had 7,777 members, which included 5,681 commercial banks, 1,547 thrifts, and 549 credit unions and insurance companies. Additional financial information on the FHLBanks is presented in appendix III. Congress chartered Fannie Mae and Freddie Mac as government-sponsored, privately owned and operated corporations to enhance the availability of mortgage credit across the nation during both good and bad economic times. Fannie Mae's headquarters is located in Washington, D.C., and Freddie Mac's is in McLean, Virginia. The enterprises are to accomplish this mission by purchasing mortgages from lenders (banks, thrifts, and mortgage bankers), which can then use the proceeds to make additional mortgage loans to home buyers. The enterprises issue debt to finance mortgage assets that they retain in their portfolios.
A majority of purchased mortgages, however, are pooled to create mortgage-backed securities (MBS) that are sold to investors. The enterprises collect fees for guaranteeing the timely payment of principal and interest on MBS held by investors. At year-end 2000, the enterprises had combined debt obligations of about $1.1 trillion and combined MBS obligations to investors of about $1.3 trillion (a total of about $2.4 trillion). Additional financial information on the enterprises is presented in appendix III. FIRREA created FHFB as an independent agency within the executive branch, with a five-member board of directors. FHFB is organized into six offices and had about 95 permanent employees as of December 31, 2000. FHFB's annual budget is about $24 million, which is financed with assessments on the FHLBanks. The functions of three offices are most relevant to capital supervision of the FHLBanks. The primary responsibility of the Office of Supervision is to ensure the safety and soundness and mission compliance of the FHLBanks; it conducts the federally mandated annual examinations of all FHLBanks. The Office of Policy and the Office of General Counsel provide assistance to and share oversight responsibility with the Office of Supervision. These three offices have about 54 employees, of whom 14 are in the Office of Supervision. The Federal Housing Enterprises Financial Safety and Soundness Act of 1992 (the 1992 act) established OFHEO as an independent regulator within the Department of Housing and Urban Development (HUD) whose mission is to help ensure the enterprises' safety and soundness. Under the 1992 act, OFHEO's director has independent authority pertaining to matters of safety and soundness. OFHEO's primary means for fulfilling its mission are establishing capital standards for the enterprises and conducting on-site examinations to assess their management practices and financial condition.
OFHEO has about 87 full-time equivalent employees and an annual budget of about $20 million. OFHEO's expenses are funded with assessments on the enterprises. However, unlike FHFB, OFHEO is subject to the annual appropriations process. This appendix provides basic financial information on the FHLBank System, Fannie Mae, and Freddie Mac. Table 2 is a consolidated summary balance sheet of the FHLBank System. Table 3 presents information on the advances and total assets of each FHLBank as of December 31, 2000. Tables 4 and 5 provide selected financial highlights for Fannie Mae and Freddie Mac. As indicated in table 2, the FHLBank System has grown substantially over the past 5 years. Total assets in the FHLBank System increased 124 percent between December 31, 1996, and December 31, 2000, and advances increased 171 percent over the same period. At the end of 2000, the assets in the FHLBank System totaled nearly $654 billion. In comparison, the assets of Fannie Mae and Freddie Mac totaled $675 billion and $459 billion, respectively. (See tables 4 and 5.) Table 3 presents the level of advances and total assets at each FHLBank at the end of 2000. The FHLBanks vary significantly in size. Total assets ranged from about $27 billion at the FHLBank of Topeka to $140 billion at the FHLBank of San Francisco. The amount of advances outstanding ranged from about $18 billion to $110 billion at the same FHLBanks. The percentage of total assets made up of advances also varied among the FHLBanks. At the FHLBank of Chicago, advances made up only 52 percent of total assets, while at the FHLBank of San Francisco, advances made up 78 percent of assets. Other assets at FHLBanks may include cash or investments such as U.S. government-agency securities or high-quality, short-term investments like federal funds sold, certificates of deposit, and commercial paper. As shown in tables 4 and 5, Fannie Mae and Freddie Mac have also grown substantially over the past 5 years.
Fannie Mae's total assets increased 92 percent between December 31, 1996, and December 31, 2000, while Freddie Mac's assets increased 164 percent over the same period. Their off-balance sheet obligations also increased. For example, Fannie Mae's outstanding net MBSs increased 29 percent, from $548 billion in 1996 to $706 billion at the end of 2000. Freddie Mac's participation certificates (PC) increased 22 percent, from $473 billion to $576 billion. This appendix summarizes FHFB's and OFHEO's risk-based capital requirements for the FHLBanks and the enterprises, respectively. FHFB's risk-based capital requirements are meant to ensure that the FHLBanks maintain sufficient capital to weather stressful economic conditions. The requirements address credit, interest rate, and operations risks. FHFB's capital requirements separate FHLBank assets and positions into four credit risk categories and establish capital levels within these categories. The four categories are (1) advances, (2) rated mortgage assets, (3) rated assets and positions other than advances or mortgages, and (4) unrated assets. For the first three categories, maturity and/or a credit rating from a nationally recognized credit rating agency are the factors determining the capital charge for an asset or position. Longer terms to maturity and lower credit ratings increase the capital requirement because they tend to increase credit risk. All unrated items have an 8-percent capital requirement, except for cash, which has a zero capital requirement. The capital requirements extend to off-balance sheet items; also, credit enhancements such as guarantees can reduce the capital requirements if the providers have credit ratings superior to that of the FHLBank asset or position. Although FHLBanks have never incurred credit losses on advances backed by traditional mortgage collateral or securities, FHFB decided to impose capital requirements on all advances, including short-term advances.
FHFB's requirement assumes that advances will exhibit the same losses as the highest investment grade (triple-A) corporate bonds and that advances would have a recovery rate of 90 percent. FHFB stated this recovery rate is consistent with the overcollateralization and other protections afforded advances. Additionally, longer term advances have higher capital requirements because risks tend to increase with terms to maturity. Even though traditional advances have little credit risk, FHFB recognized that the new expanded collateral available to support advances may have greater credit risk. As a result, it set a capital requirement for advances that includes some credit risk. The expanded collateral includes real estate related collateral, such as commercial mortgages and home equity lines of credit, as well as nonmortgage agricultural loans and small business loans. Because of the unknown risk created by new types of collateral, FHFB used its judgment to set the capital requirement on all advances. For example, advances with less than 4 years to maturity have a 7 basis point capital requirement even though FHFB had calculated the appropriate capital requirement to be 0 basis points. This imposition of 7 basis points reflects, in part, concerns about potential credit risks in the new types of collateral. In contrast, when the term to maturity on advances exceeds 10 years, the capital requirement is 35 basis points. To ensure that sufficient collateral protection is available for advances, the extent of overcollateralization varies for different assets. Overcollateralization is the extent to which the book value of collateral exceeds the book value of the advances it secures. Overcollateralization increases for riskier assets. FHFB expects the FHLBanks to determine the appropriate level of overcollateralization to be imposed on nontraditional collateral permitted by GLBA.
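The overcollateralization measure defined above, the extent to which the book value of collateral exceeds the book value of the advances it secures, can be expressed as a simple percentage. The following is a minimal sketch; the function name and dollar figures are hypothetical, not taken from the report:

```python
def overcollateralization_pct(collateral_book_value, advance_book_value):
    """Percentage by which the book value of pledged collateral exceeds
    the book value of the advances it secures."""
    excess = collateral_book_value - advance_book_value
    return excess * 100.0 / advance_book_value

# Hypothetical example: $115 million of collateral securing a $100 million
# advance is 15 percent overcollateralized.
print(overcollateralization_pct(115.0, 100.0))  # 15.0
```

Under the regulation, riskier collateral types would simply require a higher percentage before an advance is considered adequately secured.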
During the regular examinations of FHLBanks, FHFB will examine the amount of overcollateralization required by the FHLBanks for different assets if they permit nontraditional collateral to back advances. Based on FHFB's supervision and examination approach to collateral policies, the risk-based capital regulation assumes that credit risk is equalized across all advances. The credit risk requirements for residential mortgage assets were based on credit ratings by major credit rating agencies. When developing the capital requirements for mortgage assets, FHFB also took into account the requirements set by other regulators. In general, the risk-based capital requirements for mortgage assets, such as mortgages on both single-family and multifamily units or MBSs, vary with the creditworthiness of the assets. The final rule is based on the assumption that the collateral underlying the residential mortgage assets will typically consist of conforming, prime quality loans with loan-to-value ratios below 80 percent as well as loans with higher loan-to-value ratios that have appropriate mortgage insurance. FHFB also assumes that the performance of any credit enhancement is reasonably ensured in all relevant economic stress scenarios, that the FHLBanks' portfolios of residential mortgage assets will have appropriate diversification, and that credit enhancements will take account of any geographic or other concentrations that increase credit risk. Based on the above constraints, FHFB assigned credit risk requirements. For example, unsubordinated residential mortgage assets in the highest investment grade—triple-A—have a 37 basis point capital requirement; unsubordinated mortgage assets in the second investment grade—double-A—have a 60 basis point capital requirement; and unsubordinated mortgage assets in the fourth highest investment grade—triple-B—have a 120 basis point capital requirement.
In contrast, subordinated residential mortgage assets with ratings below triple-A can have higher capital requirements. For example, subordinated residential mortgage assets with a triple-B rating have a 445 basis point capital requirement. Risk-based capital requirements are also set on residential mortgage assets acquired by a FHLBank where the FHLBank and the member selling the mortgage asset share credit risk, as is the case in MPF and MPP. To date, participating FHLBanks have required the equivalent of a double-A rating on each residential mortgage asset acquired, based on a model created by S&P. These mortgage assets have a 60 basis point capital requirement—the requirement for any double-A rated residential mortgage asset. Mortgage assets where credit risk is shared with members are expected to become an increasing part of the assets held by the FHLBanks. FHFB has also established risk-based capital requirements for rated assets other than advances or mortgages. Risk-based capital requirements for such assets increase with decreasing creditworthiness and increasing terms to maturity. For example, U.S. securities of any maturity have a 0 basis point capital requirement, while for triple-A rated corporate assets the requirement ranges from 15 basis points to 220 basis points, with the requirement increasing with term to maturity. Lower-rated assets carry a 100-percent capital requirement. Capital requirements for unrated assets are set according to the type of asset. This category includes cash, premises and equipment, and investment assets that have not received ratings from the major rating agencies. Cash has a zero capital requirement, while premises and equipment have an 8-percent capital requirement. FHFB has assigned an 8-percent capital requirement to all investment assets that are unrated.
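The rating-based charges for residential mortgage assets can be read as a simple lookup. The sketch below is illustrative only: it includes just the basis-point figures cited in the text (37, 60, and 120 basis points for unsubordinated triple-A, double-A, and triple-B assets; 445 basis points for subordinated triple-B assets), and the function name and the treatment of uncited ratings are assumptions:

```python
# Basis-point capital charges cited in the text; charges for other
# rating categories are omitted here.
UNSUBORDINATED_BP = {"AAA": 37, "AA": 60, "BBB": 120}
SUBORDINATED_BP = {"BBB": 445}

def mortgage_capital_charge(asset_value, rating, subordinated=False):
    """Credit-risk capital charge for a residential mortgage asset.
    Raises KeyError for ratings whose charge is not cited in the report."""
    table = SUBORDINATED_BP if subordinated else UNSUBORDINATED_BP
    basis_points = table[rating]
    return asset_value * basis_points / 10_000  # 100 basis points = 1 percent

# $100 million of double-A rated, unsubordinated mortgage assets
# carry a $600,000 capital charge (60 basis points).
print(mortgage_capital_charge(100_000_000, "AA"))  # 600000.0
```

Unrated investment assets, by contrast, carry the flat 8-percent charge noted above regardless of maturity.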
The 8-percent requirement for unrated investment assets is the same as the charge the Basel Committee on Banking Supervision assigns to unrated assets in its proposed revision of the bank capital standards. Risk-based capital requirements are also established for off-balance sheet items such as commitments to purchase loans and standby letters of credit. The risk-based capital rule establishes credit conversion factors that convert off-balance sheet positions into asset equivalents. Each position is multiplied by its credit conversion factor, measured as a percent, to obtain the nominal value used to determine the credit risk capital requirement. Risk-based capital requirements for derivatives are based on their current and potential risks and vary by type of derivative and term to maturity. Potential future risk exposures can be determined from a table in the regulation or from an FHFB-approved internal model. For example, in the table, interest rate derivative contracts with a term of less than 1 year have a conversion factor of 0 percent, while for equities the conversion factor is 6 percent. When the term exceeds 5 years, the conversion factor for interest rate derivative contracts is 1.5 percent, and the conversion factor for equities is 10 percent. The final regulation also establishes procedures to address the effects of multiple derivatives between two parties. FHFB's capital requirements can also reflect credit enhancements, such as third-party guarantees of an asset held by a FHLBank. If the credit enhancement or its provider has a rating from a major rating agency, the capital requirement will accord with the enhancement if the FHLBank asset is lower rated or unrated. The risk-based capital regulation requires each FHLBank to hold capital for interest rate risk equal to the sum of two calculations. One calculation estimates the potential losses in the FHLBank's portfolio under parameters specified by FHFB.
The other calculation is the amount by which the market value of total capital falls short of the adjusted book value of capital, in the event that the market value of capital is below this accounting benchmark. FHFB prefers that the internal models be based on a value-at-risk approach, which estimates the level of capital that will prove sufficient to absorb losses in all but the worst 1 percent of cases. In a value-at-risk approach, the loss is estimated based on alternative possible interest rate patterns over the chosen time period. However, if approved by FHFB, a cash flow model can be used by a FHLBank as an alternative to a value-at-risk approach. When estimating interest rate risk and calculating the capital required, each FHLBank is required to have sufficient permanent capital to meet the value-at-risk level established by FHFB. The exposure to interest rate risk in each model is to depend on the level of stress from interest rate movements and any hedges used that affect the actual exposure to interest rate movements. These internal models must meet FHFB's technical restrictions and use interest rate stress scenarios approved by FHFB. Additionally, added permanent capital will be required if the FHLBank's current market value of total capital, based on the estimated market value of assets minus the market value of liabilities at the time of the capital requirement analysis, is less than 85 percent of the FHLBank's book value of capital. The added capital will be the difference between the market value of the capital and 85 percent of the book value of the FHLBank's capital. This requirement was implemented because FHFB was concerned that the book value of capital might not adequately reflect the economic value of capital in some cases. This requirement forces the capital available to cover interest rate risk to have a market or economic value of at least 85 percent of the book capital value.
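The 85-percent floor works as a simple shortfall calculation: if the market value of an FHLBank's capital falls below 85 percent of its book value, the difference must be made up with added permanent capital. The following is a minimal sketch with hypothetical dollar amounts; the function name is an assumption:

```python
def added_capital_required(market_value_capital, book_value_capital):
    """Added permanent capital required when the market value of capital
    falls below 85 percent of its book value; zero otherwise."""
    floor = book_value_capital * 85.0 / 100.0
    return max(0.0, floor - market_value_capital)

# Hypothetical: market value of $800 million against book value of
# $1,000 million; the floor is $850 million, so $50 million is required.
print(added_capital_required(800.0, 1000.0))  # 50.0
print(added_capital_required(900.0, 1000.0))  # 0.0
```

When the market value meets or exceeds the 85-percent floor, this requirement adds nothing.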
The 85-percent requirement is consistent with a value-at-risk approach, which calculates the market value of capital available under different economic stresses. FHFB also established technical restrictions in its risk-based capital regulation on how the internal value-at-risk model is to be designed. FHFB required that the probability of a loss greater than the estimated market value of the bank's portfolio at risk not exceed 1 percent. Thus, the estimated net market value of the portfolio will cover estimated losses 99 percent of the time. In the regulation, FHFB directed each FHLBank to assume a stress period of 120 business days, based on historic interest rates from 1978 to 1 month before the capital requirement is calculated. FHFB stated that the periods chosen should be representative of the periods of greatest potential stress in the market given the FHLBank's portfolio. FHFB officials told us that the 120-day periods will overlap: a new period starts at the first of each month since 1978. This provides about 270 periods for the analysis. In a value-at-risk analysis with a 1 percent tail, this means the capital required for interest rate risk will be sufficient to cover estimated losses in 267 of the 270 stress periods. FHFB directed each FHLBank to develop a model that is comprehensive given the FHLBank's capabilities. In addition, FHFB stated that the internal models may incorporate empirical correlations among interest rates or other market prices. Lastly, FHFB required that the model be independently validated and satisfactory to FHFB. Although GLBA did not require FHFB to establish a risk-based capital requirement to cover operations risk, FHFB decided such a requirement was needed.
FHFB's capital requirement for operations risk is 30 percent of the total capital required to cover interest rate and credit risk, but it may be reduced to no lower than 10 percent if a FHLBank can demonstrate to the satisfaction of FHFB that it has insurance or some other means to justify the reduction. OFHEO's risk-based capital requirements are meant to ensure that the enterprises maintain sufficient capital to weather stressful economic conditions. These requirements also address credit, interest rate, and operations risks. OFHEO has developed its own cash flow model to estimate risks and calculate the total capital needed to cover credit and interest rate risk. OFHEO runs a single model in which the capital calculations for credit risk and interest rate risk are based on the model's calculation of how much capital is needed by each enterprise. To determine credit risk, the model must include information on housing prices, vacancies, and credit enhancements, as well as other variables that affect credit risk. To determine interest rate risk, the model must include information on interest rates, interest rate hedges, and other variables that affect interest rate risk. The purpose of OFHEO's stress test is to calculate whether sufficient capital was set aside at the beginning of the 10-year stress test period to cover all benchmark losses and interest rate stress losses and to leave the enterprise with a positive capital amount in each accounting period and at the end of the stress period. Once the capital needed for credit and interest rate risk is calculated in the stress test, total required capital is the sum of capital for interest rate risk and credit risk plus 30 percent of this sum to cover operations risk. The intent in integrating the stresses for credit risk and interest rate risk is to permit the OFHEO model to better deal with feedbacks between interest rate movements and losses due to credit risk.
For example, when interest rates fall, prepayments accelerate, and this leads to a decline in the value of mortgages on the balance sheets of each enterprise. At the same time, the level of credit risk in the remaining mortgages may increase if borrowers with poorer credit ratings cannot prepay. In addition, other factors such as the recent history of interest rates and the number of mortgages at different interest rates may interact with declining rates to affect prepayments. Consequently, the cash flow model can calculate credit risk changes due to prepayments only if the values of all variables that affect prepayments and credit risk are fully specified in the model. To fully understand how interest rate risk and credit risk interact, a modeler would have to test different mixes of input variables, including interest rate changes. However, the accuracy of any feedbacks found in the model would depend on the quality of the model and how well it specified the underlying economic relationships that create losses due to interest rate movements and defaults. The credit stress during the stress period is specified in the 1992 act. The benchmark loss for credit risk is the "worst cumulative credit losses for 2 consecutive years in contiguous states encompassing at least 5 percent of the U.S. population." The actual area chosen by OFHEO to create benchmark credit losses consists of Arkansas, Louisiana, Mississippi, and Oklahoma in 1983 and 1984. OFHEO determined the factors, or input variables, that affected losses and prepayments due to credit stress. To identify the input variables, it reviewed the available literature on defaults and modeled defaults separately for single-family and multifamily mortgages as well as other assets held by the housing enterprises.
To actually estimate potential losses due to credit risk, OFHEO created numerous asset classifications based on such factors as single-family or multifamily; loan-to-value ratio; retained in portfolio or in MBS; type of recourse available; fixed or variable rate mortgage; conventional, FHA, or VA mortgage; interest rate at origination; and origination date. Given these characteristics, each loan is placed in a loan group, which determines its expected default loss. Credit enhancements can affect default losses in OFHEO's model, but the credit risk of the credit enhancer is also taken into account. Similar classification schemes are developed for other assets. Given this level of detail, OFHEO was able to create a finely grained picture of what creates losses and what credit losses would occur during the stress test. The 1992 act, which created OFHEO, established criteria for the size of the interest rate shocks the enterprises are required to withstand over the 10-year stress period. The criteria were based on a 10-year stress period for both an increasing rate and a decreasing rate environment that could affect losses for an enterprise. In both environments, the rates move during the first year and stay constant for the rest of the 10-year period. The act specifies that capital must be sufficient to cover the more stressful of the two interest rate environments. (See fig. 5 for a detailed enumeration of the interest rate environments that the 1992 act required OFHEO to use.) According to the 1992 act, OFHEO must assume that the enterprises acquire no new mortgages other than those deliverable under existing commitments at the beginning of the 10-year stress period. This approach focuses on the risks embedded in the book of business that existed at the beginning of the stress test period. This restriction on new business forces the model to act as if the enterprises are winding down their business during the stress period.
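The total-capital arithmetic of OFHEO's stress test described earlier (the sum of capital for credit and interest rate risk, plus 30 percent of that sum for operations risk) reduces to a single multiplication. A sketch with hypothetical amounts; the function name is an assumption:

```python
def ofheo_total_required_capital(credit_risk_capital, interest_rate_risk_capital):
    """Total risk-based capital: the sum of credit and interest rate risk
    capital plus a 30-percent operations-risk add-on, i.e., 130 percent
    of the sum."""
    base = credit_risk_capital + interest_rate_risk_capital
    return base * 130.0 / 100.0

# Hypothetical amounts in millions: $400M credit risk, $600M interest
# rate risk; total required capital is 130 percent of $1,000M.
print(ofheo_total_required_capital(400.0, 600.0))  # 1300.0
```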
Operations risk is also specified in the 1992 act and is equal to 30 percent of the sum of capital for interest rate risk and credit risk. Consequently, the total risk-based capital requirement for the enterprises is always equal to 130 percent of the sum of capital needed to cover interest rate and credit risk.

Comparison of Financial Institution Regulators' Enforcement and Prompt Corrective Action Authorities (GAO-01-322R, Jan. 31, 2001).
Capital Structure of the Federal Home Loan Bank System (GAO/GGD-99-177R, Aug. 31, 1999).
Farmer Mac: Revised Charter Enhances Secondary Market Activity, but Growth Depends on Various Factors (GAO/GGD-99-85, May 21, 1999).
Federal Housing Finance Board: Actions Needed to Improve Regulatory Oversight (GAO/GGD-98-203, Sept. 18, 1998).
Federal Housing Enterprises: HUD's Mission Oversight Needs to Be Strengthened (GAO/GGD-98-173, July 28, 1998).
Government-Sponsored Enterprises: Federal Oversight Needed for Nonmortgage Investments (GAO/GGD-98-48, Mar. 11, 1998).
Federal Housing Enterprises: OFHEO Faces Challenges in Implementing a Comprehensive Oversight Program (GAO/GGD-98-6, Oct. 22, 1997).
Government-Sponsored Enterprises: Advantages and Disadvantages of Creating a Single Housing GSE Regulator (GAO/GGD-97-139, July 9, 1997).
Housing Enterprises: Investment, Authority, Policies, and Practices (GAO/GGD-97-137R, June 27, 1997).
Comments on "The Enterprise Resource Bank Act of 1996" (GAO/GGD-96-140R, June 27, 1996).
Housing Enterprises: Potential Impacts of Severing Government Sponsorship (GAO/GGD-96-120, May 13, 1996).
Letter from James L. Bothwell, Director, Financial Institutions and Markets Issues, GAO, to the Honorable James A. Leach, Chairman, Committee on Banking and Financial Services, U.S. House of Representatives, Re: GAO's views on the "Federal Home Loan Bank System Modernization Act of 1995" (B-260498, Oct. 11, 1995).
FHLBank System: Reforms Needed to Promote Its Safety, Soundness, and Effectiveness (GAO/T-GGD-95-244, Sept. 27, 1995).
Housing Finance: Improving the Federal Home Loan Bank System's Affordable Housing Program (GAO/RCED-95-82, June 9, 1995).
Government-Sponsored Enterprises: Development of the Federal Housing Enterprise Financial Regulator (GAO/GGD-95-123, May 30, 1995).
Farm Credit System: Repayment of Federal Assistance and Competitive Position (GAO/GGD-94-39, Mar. 10, 1994).
Farm Credit System: Farm Credit Administration Effectively Addresses Identified Problems (GAO/GGD-94-14, Jan. 7, 1994).
Federal Home Loan Bank System: Reforms Needed to Promote Its Safety, Soundness, and Effectiveness (GAO/GGD-94-38, Dec. 8, 1993).
Improved Regulatory Structure and Minimum Capital Standards Are Needed for Government-Sponsored Enterprises (GAO/T-GGD-91-41, June 11, 1991).
Government-Sponsored Enterprises: A Framework for Limiting the Government's Exposure to Risks (GAO/GGD-91-90, May 22, 1991).
Government-Sponsored Enterprises: The Government's Exposure to Risks (GAO/GGD-90-97, Aug. 15, 1990).

The Federal Home Loan Bank (FHLBank) System is establishing a new capital structure that, if properly implemented, is likely to be an improvement over the historic structure. Capital will become more permanent, and new risk-based and leverage capital requirements will also be implemented. The new capital structure has the potential to address the risks associated with advances as well as the direct acquisition of mortgages. However, it is too early to assess the overall adequacy of the structure. So far, direct acquisition appears to provide regional diversification of mortgage acquisitions and incentives to member institutions for sound mortgage underwriting and servicing through the sharing of credit risks. However, risks could be affected if changes are made in the level of mortgage acquisition activity and in the risk-sharing agreements between the FHLBanks and their member institutions.
Such changes might also increase the importance of risk-based capital requirements compared to the leverage requirements of the Federal Housing Finance Board (FHFB). Risks in the FHLBank System will increase because of the expanded collateral provisions in the Gramm-Leach-Bliley Act and direct mortgage acquisition activity. Mitigation of that risk will depend on risk management by the FHLBanks, the adequacy of the capital structure, and oversight by FHFB. In addition to the FHLBanks, the acquisition activity could also generate additional risks for the enterprises. Although the FHLBank System and the enterprises primarily engage in different business activities, these differences may decrease if direct mortgage acquisition activity grows dramatically. Having one housing government-sponsored enterprise (GSE) regulator for safety and soundness and mission compliance would provide greater independence and objectivity, greater prominence, improved ability to assess the competitive impact of new initiatives on all housing GSEs, and improved ability to ensure consistency of regulation of GSEs that operate in similar markets.
DOD spends billions of dollars annually to maintain complex weapon systems, including aircraft, ships, ground-based systems, missiles, communications equipment, and other types of electronic equipment that require regular and emergency maintenance to support national security goals. Maintenance of this equipment is divided into three levels corresponding to the extent and complexity of the repairs: depot-level, intermediate, and organizational. DOD defines depot maintenance as the highest level of maintenance; it generally refers to major maintenance and repairs, such as overhauling, upgrading, or rebuilding parts, assemblies, or subassemblies. This level of maintenance can consist of repair to entire weapon systems, the major assemblies that comprise a system, or the components that make up those assemblies. Depot maintenance also includes installation of system modifications that extend the operational life of weapon systems. Such repairs and overhauls have long been provided by DOD maintenance personnel, private contractors, or a mixture of the two through public-private partnerships performed at government-owned and private facilities. Intermediate maintenance consists of repair capabilities possessed by operating units and in-theater sustainment organizations, including remove-and-replace operations for subcomponents, local manufacture, and other repair capabilities. Organizational maintenance consists of the tasks necessary for day-to-day operation, including inspection and servicing. The department's overarching acquisition guidance, DOD Directive 5000.01, states that the program manager shall be the single point of accountability for accomplishing program objectives for total life-cycle systems management, including sustainment.
DOD Instruction 5000.02, which provides additional DOD guidance for managing and overseeing defense acquisition programs, requires that program managers perform a core logistics analysis to support major acquisition milestone reviews after the technology or system development phase. Such logistics considerations, including those related to maintenance, are contained within the life-cycle sustainment plan that was, until recently, reviewed as part of the acquisition strategy for major weapon system programs. In April 2011, DOD directed that the life-cycle sustainment plan be reviewed separately from the acquisition strategy and, in September 2011, directed that those sustainment plans associated with certain major weapon systems be approved by the Assistant Secretary of Defense for Logistics and Materiel Readiness at all milestone decision points during weapon system development and at the full-rate production decision. DOD has established a new template for the plans’ content, including the extent to which contractor services will support maintenance. DOD has issued instructions that provide guidance to the military departments and program offices on defining maintenance requirements and approaches. For example, DOD Directive 4151.18 requires that the source of depot-level repair for major weapon systems be determined during the weapon system’s development. It also provides instruction on determining whether depot-level maintenance for a weapon system will be performed at a government-owned and government-operated (hereinafter referred to as “organic”) depot, by a private-sector contractor, or by some combination of the two. Section 2466 of Title 10 of the U.S. Code, however, places limitations on contracted depot-level maintenance of materiel. The statute provides that not more than 50 percent of the funds made available in a fiscal year for depot-level maintenance and repair may be used for contracted services. This is known as the 50/50 requirement.
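The 50/50 test described above is simple arithmetic: contracted depot-level maintenance funds may not exceed half of the total. The sketch below is illustrative only; the function names and dollar amounts are invented and are not drawn from DOD data systems.

```python
# Hypothetical illustration of the "50/50" depot maintenance funding test
# described in 10 U.S.C. 2466: no more than 50 percent of a fiscal year's
# depot-level maintenance funds may be used for contracted services.
# The dollar figures below are invented for illustration only.

def contracted_share(contract_dollars: float, organic_dollars: float) -> float:
    """Return the fraction of total depot maintenance funds spent on contracts."""
    total = contract_dollars + organic_dollars
    if total == 0:
        raise ValueError("no depot maintenance funds reported")
    return contract_dollars / total

def complies_with_50_50(contract_dollars: float, organic_dollars: float) -> bool:
    """True if contracted work is at or below the 50 percent statutory ceiling."""
    return contracted_share(contract_dollars, organic_dollars) <= 0.50

# Example: $2.6B contracted vs. $3.0B organic -> about 46% contracted, compliant.
print(complies_with_50_50(2.6e9, 3.0e9))  # True
```

Note that the statute applies per military department per fiscal year; a real compliance check would aggregate obligations accordingly.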
DOD is also required to report annually on past and projected workload allocations. DOD Directive 4151.18 requires that USD(AT&L) monitor compliance with the directive and review the adequacy of DOD maintenance programs and resources. It also requires that DOD components develop tools and management procedures to implement the content of the directive. Additionally, DOD Instruction 4151.20 (Depot Maintenance Core Capabilities Determination Process, January 5, 2007) provides instruction for determining “core” maintenance requirements as defined in Section 2464 of Title 10 of the U.S. Code. Core logistics capabilities are considered essential to the national defense, and the statute requires that DOD maintain a logistics capability that is government-owned and government-operated to ensure DOD can effectively respond to a mobilization, national defense contingency situations, and other emergency requirements in a timely manner. To ensure that life-cycle sustainment planning is done early in a weapon system’s development phase, the National Defense Authorization Act for Fiscal Year 2012 revised the assessment of core maintenance requirements and directed DOD to identify such requirements at acquisition milestones. Congress has also acted to preserve competition; for example, it passed the Weapon System Acquisition Reform Act of 2009, requiring DOD to ensure competition or the option of competition throughout a weapon system program’s life cycle, in part by requiring DOD to consider the purchase of complete technical data packages when cost-effective. In May 2011, however, we reported that DOD continues to face challenges that could undermine competition of maintenance contracts, including shortcomings in how programs determine the technical data rights requirements necessary for competition.
We recommended, and DOD agreed, that the department should update its acquisition and procurement policies to clarify requirements for documenting technical data requirements and to issue instructions for program managers to use when conducting the analyses that determine technical data rights needs for a weapon program. In addition, DOD’s September 2010 Better Buying Power guidance (Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Better Buying Power: Guidance for Obtaining Greater Efficiency and Productivity in Defense Spending, September 14, 2010) directs programs to (1) conduct a business-case analysis that outlines the technical data rights the government will pursue to ensure competition and (2) include the results of this analysis in acquisition strategies at a program’s entrance into the engineering and manufacturing development phase of the acquisition. At the departmental level, neither DOD nor the individual military departments know the extent to which weapon system programs rely on long-term maintenance contracts, including the most basic information—how many such contracts are currently in use. DOD does not collect or maintain such information during its reviews of acquisition strategies or life-cycle sustainment plans, nor do existing data collection systems provide the type of information needed to do so. Consequently, we worked with the military departments to identify a number of long-term maintenance contracts and selected 10 contracts supporting seven major weapon systems for detailed review. We found that these contracts varied widely in terms of breadth of requirements, potential period of performance, and value. For example, our work found that these contracts could extend up to 22 years if the contractor meets performance criteria and earns award terms. These contracts also constituted a significant investment for the government. Program offices reported obligations of over $18.4 billion on these 10 contracts through the end of fiscal year 2011.
In that fiscal year alone, programs obligated nearly $1.7 billion on the 10 contracts we reviewed. DOD was unable to provide us a list of ongoing long-term maintenance contracts. Further, DOD officials noted that existing reports and data collection systems do not provide the department information on the use of long-term maintenance contracts. For example, USD(AT&L) reports to Congress annually on the percentage of funds expended during the preceding fiscal year for public and private maintenance and repair activities and on projected funding requirements for the current and ensuing fiscal year. However, USD(AT&L) is not required to include information in these reports on the distribution of these contracts among the department’s weapon system programs, the total number of contracts used, or the length of performance of these contracts. Similarly, USD(AT&L) officials noted that while they have used FPDS-NG to perform contract spend analysis for various categories of services, including maintenance services, FPDS-NG does not record the potential period of performance for all contracts, including those that use incentives that may extend the life of the contract. Additionally, while some contract actions associated with maintenance are coded as such in FPDS-NG, our analysis found that other maintenance-related activities may be reported as management support, logistics support, and system engineering services. Further, we found that the Defense Acquisition Management Information Retrieval System, DOD’s web-based data system that tracks programmatic information on major defense acquisition programs, did not contain accurate information on which major weapon systems were currently fielded and being maintained. DOD’s limited visibility over long-term maintenance contracts reflects broader DOD challenges with managing services acquisition.
Over the past decade, our work has identified the need for DOD to obtain better data on its contracted services to enable it to make more strategic decisions. For example, in 2006, we reported that DOD’s approach to managing services acquisition tended to be reactive and had not fully addressed the key factors for success at either a strategic or a transactional level. The strategic level is where the enterprise sets a direction for what it needs, captures knowledge to make informed management decisions, ensures departmentwide goals and objectives are achieved, and assesses the resources it has to achieve desired outcomes. The strategic level sets the context for the transactional level, where the focus is on making sound decisions on individual service acquisitions using valid and well-defined requirements, appropriate business arrangements, and adequate management of contractor performance. GAO, Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes, GAO-07-20 (Washington, D.C.: Nov. 9, 2006). Our prior work has shown, however, that while DOD obtains insight into individual programs through various program reviews, DOD does not collect or maintain that information to inform strategic decisions. For example: In response to congressional direction, DOD and the military departments have established procedures for reviewing, approving, and monitoring services acquisitions, including those for maintenance. Further, since 2006, all proposed services acquisitions with a value estimated at more than $1 billion or designated as “special interest” have been reviewed by USD(AT&L), while military department or other defense component officials review acquisition strategies for those below this threshold. Contract requirements, risks, and business arrangements are among the items included in reviewed acquisition strategies.
Though these reviews take place, DOD does not collect or aggregate the information they produce to provide department-wide insight into the use of long-term maintenance contracts. Additionally, to improve DOD’s services acquisition process, USD(AT&L) implemented an independent management review, or peer review, process for its service contracts in 2008. Occurring after approval of the acquisition strategy, these peer reviews are conducted prior to and after award of services contracts, and their results are published to facilitate cross-sharing of best practices and lessons learned on various contracting issues, including the use of competition, contract structure and type, definition of contract requirements, and cost or pricing methods. Each of these reviews provides for the discussion of issues related to contracting strategy, but DOD officials noted that they do not collect or maintain information on what type of contracting approach was used to acquire all services that support DOD weapon systems. Further, while DOD collects and makes available lessons learned from these reviews in areas such as source selection and use of incentives, DOD officials stated that the process has not resulted in lessons learned or best practices specific to the use of long-term maintenance contracts. Similarly, DOD policy and guidance require that USD(AT&L) and military department senior acquisition executives approve acquisition strategies and life-cycle sustainment plans during program milestone reviews. Each of these documents is to include information on the proposed acquisition approach, including the use of contractor support. Our discussions with USD(AT&L) and the military department offices responsible for reviewing these plans found that these offices do not maintain information on the extent to which long-term maintenance contracts are used by weapon system programs.
In the absence of department-wide data on the use of long-term maintenance contracts, we selected 10 long-term maintenance contracts that supported seven major weapon systems. We found that these contracts varied widely in terms of breadth of requirements, potential period of performance, and value. For example, the contracts we reviewed ranged from those supporting maintenance of an entire weapon system platform, such as the Air Force’s Joint Surveillance Target Attack Radar System (JSTARS), to those providing more specific depot-level maintenance support for system components, such as the Navy’s T-45 engine contract. Table 1 shows selected characteristics of the 10 contracts we reviewed. In addition to maintenance activities, the contracts we reviewed also provide supply chain management, technical data management, training, equipment configuration management, and engineering support, among other requirements. Further, we found that long-term maintenance contracts could extend up to 22 years if the contractor meets performance criteria and earns award terms. Lastly, we found that these contracts constituted a significant investment for the government, as program offices reported obligations of over $18.4 billion on these contracts through the end of fiscal year 2011. In that fiscal year alone, programs obligated nearly $1.7 billion on the 10 contracts we reviewed. DOD officials noted that although long-term contracts can encourage contractors to invest in new facilities, equipment, and processes to support depot-level maintenance, such contracts may hinder the government’s ability to appropriately incentivize the contractor’s performance and control costs. DOD officials added that the department is pursuing a number of initiatives that could potentially improve DOD’s insight into long-term maintenance contracts and their management.
For example, USD(AT&L) officials pointed out that the department is creating a standalone instruction for service acquisitions, based on DOD Instruction 5000.02. Although the instruction is in the early stages of development, USD(AT&L) officials said that it will provide more detailed guidance for the acquisition of specific services and reflect issues such as duration that have been raised in recent DOD guidance. USD(AT&L) officials said that the department is currently considering expanding or updating the Defense Acquisition Management Information Retrieval system to retain contract information for major service contracts, such as contractors’ performance histories, contract lengths, contract types, and incentives used for these services. Decisions made early in the acquisition process can limit DOD’s ability to select alternative maintenance providers over the life cycle of a weapon system program. Program officials believed they could select an alternative service provider in the future for 5 of the 10 contracts we reviewed, but the degree to which the government obtained access to technical data would be an obstacle in doing so for the other half. DOD has updated its policies to emphasize determining technical data needs earlier in the acquisition life cycle. Information we collected on eight weapon system programs in development or early stages of production that were reviewed by USD(AT&L) between October 2010 and October 2011 indicated that at least half have acquired or plan to acquire sufficient technical data to compete maintenance services or to perform maintenance with organic depot personnel should the need arise. The programs, however, had yet to determine the extent to which they will acquire these data or the cost to do so. 
DOD program officials said that decisions made early in the acquisition cycle, especially with regard to acquiring technical data, may hinder the department’s ability to change maintenance service providers for depot-level activities. As we reported in May 2011, technical data can enable the government to complete maintenance work in-house, as well as to competitively award contracts for the acquisition and sustainment of a weapon system. More recently, we reported that for contracts pertaining to DOD weapon programs, which can involve products as well as support services, the lack of access to proprietary technical data and a heavy reliance on specific contractors for expertise limits or even precludes the possibility of competition. Even when access to technical data is not an issue, the government may have little choice other than to rely on the contractors that were the original equipment manufacturers and that, in some cases, designed and developed the weapon system. Of the 10 contracts we reviewed, only three were competitively awarded. Table 2 summarizes the impact of technical data access on DOD programs’ ability to select alternate service providers for maintenance on the contracts we reviewed. DOD acquired technical data sufficient to potentially select an alternative service provider—either by transitioning contracted maintenance work to an organic depot or by recompeting maintenance contracts—for 5 of the 10 maintenance contracts we reviewed. Three of these programs had sufficient access to technical data to perform maintenance services organically. For example, depot maintenance for the AH-64 and CH-47 helicopter airframe components was already performed organically at the Corpus Christi Army Depot prior to the use of contractor support. However, the program determined that contractor support could improve its maintenance practices and the availability of components.
While government personnel continue to do all maintenance work on airframe components, since 2004 the Army has used a contractor to provide parts integration, technical engineering, and logistics support, which has significantly increased system availability. As a result of a 1995 Base Realignment and Closure decision, the military depot that maintained the T56 engines for the C-130 program was closed. To mitigate the impact of the closing on the local community and employees, the maintenance workload was performed by the private sector at the same location. The Air Force used a public-private competition—an opportunity for public and private offerors to compete for the work—to determine the most cost-effective source of repair, and the T56 engine maintenance is now provided by a contractor. Two other programs reported they are able to recompete maintenance services contracts even though neither program purchased complete technical data associated with the weapon system. According to program officials, they could compete contracts for maintenance services either because they acquired sufficient technical data for specific portions of the aircraft or because there was a competitive environment for maintenance services for commercially derived systems. The latter are weapon systems that were adapted for military use from a commercial item, as opposed to weapon systems developed for the military. For example, the Navy’s T-45 trainer aircraft program was designated to be maintained by contractors for the life of the program, as it is not a core asset and there was a competitive environment with multiple vendors able to provide maintenance support for this commercially available aircraft. During development, the Navy purchased technical data for DOD-specific aspects of the plane’s airframe design, allowing the program office to recompete maintenance contracts throughout the life cycle of the system.
Specifically, after the program split its system-level maintenance contract into separate engine and airframe contracts, it was able to leverage its access to technical data to competitively award the airframe contract. When the airframe contract was recompeted in 2007, five vendors submitted capability statements. Program officials told us that they expect a similar industry response when the contract is recompeted again this year. Similarly, the KC-10 aircraft is based on a commercial design and uses contractor logistics support for maintenance services. The Air Force has competitively awarded five maintenance contracts since the KC-10 was acquired in 1978. The last competition, in 2010, drew two proposals and resulted in the selection of a new contractor. For 5 of the 10 contracts, however, programs reported they could neither transition contracted maintenance services to an organic depot nor recompete the contract due to insufficient access to technical data, as well as, in some cases, insufficient funding, staffing, and expertise. For example, according to JSTARS program officials, the Air Force currently cannot convert contracted maintenance work to an organic depot or recompete the work because it has insufficient access to technical data for the aircraft’s unique systems and equipment. Prior to awarding the current contract, the JSTARS program utilized 17 sustainment contracts, with the government managing these contracts and performing some portions of maintenance organically. However, in September 2000, the Air Force noncompetitively awarded a contract so that a single contractor would be responsible for sustainment activities that were previously performed under those contracts or by government personnel. Program officials said that when the Air Force took on the more limited role of overseeing the prime contractor, program staffing and expertise were reduced significantly.
They added that, as a result, the program office currently lacks the engineers, equipment specialists, inventory managers, and other staff and skills needed to manage all sustainment activities if the requirements included in the current contract were to be performed by multiple service providers. Though previous models of the Air Force’s C-130 fleet are maintained organically, contractors developed the C-130J model (both the airframe and engine) as a commercial item, and it was acquired by the Air Force without related technical data. As a result, the program office must acquire maintenance services for all components unique to this model of the aircraft from the original equipment manufacturers through contracts. Program officials noted that there is a requirement to eventually bring the aircraft maintenance to organic depots, but said that even if it were able to acquire the necessary technical data, the program office would need substantial funding to develop capabilities at the organic depots. Recent acquisition reforms such as the Weapon System Acquisition Reform Act of 2009 and DOD’s recent initiatives seeking greater efficiency and cost savings in acquisitions have put greater emphasis on obtaining technical data rights and on maintaining competition throughout the life cycle of weapon systems. For example, Congress has required that DOD issue comprehensive guidance on life-cycle management, develop and implement product support strategies, and appoint product support managers for major weapon systems, while DOD’s September 2010 efficiency initiatives memorandum includes a requirement that each military department set rules for acquisition of technical data rights as part of a plan to improve competition. DOD has taken a number of actions, including revising its acquisition policy to ensure that technical data requirements are considered during the acquisition process at key milestones.
More recently, DOD has drafted guidance for developing open systems architecture contracts. This guidance will provide additional information to program managers regarding the purchase of technical data and planning for an open systems architecture that may allow for increased flexibility in maintenance and in the purchase of such data. Data we collected on eight DOD weapon systems currently in development or early stages of production that were reviewed by USD(AT&L) between October 2010 and October 2011 as part of an acquisition review indicate that the programs have considered maintenance and other sustainment issues when making decisions regarding technical data needs. Table 3 summarizes these eight programs’ plans to acquire access to technical data rights. Of the eight programs we reviewed, at least four have acquired or plan to acquire sufficient data to compete maintenance services or to perform maintenance with organic depot personnel, while the others had yet to determine the extent to which they will acquire these data or the cost to do so. For example: The Navy acquired government purpose rights and unlimited technical data rights for over 95 percent of major components for the Littoral Combat Ship, according to program officials. They said that most of the depot-level maintenance on the Littoral Combat Ship is expected to be performed by the private sector, and the Navy reports that this competitive environment should enhance its ability to control life-cycle sustainment costs. The Air Force has begun to analyze components on the MQ-9 aircraft to determine what technical data is required to maintain the aircraft, according to program officials. They told us they are performing a business case analysis that will determine whether technical data should be acquired for approximately 600 aircraft parts and major airframe components, but only a small percentage of these components have been assessed through this process to date.
The Army will assess the technical data needed to maintain specific system components of the MQ-1C Gray Eagle as a means of retaining flexibility in maintenance options during sustainment. According to Army officials, the sustainment plan calls for the current contracting arrangement to transition to a public-private partnership in the future. We previously reported that DOD program managers often opt to spend limited acquisition dollars on increased weapon system capability rather than on acquiring the rights to technical data, thus limiting their flexibility to perform maintenance work in-house or to support the development of an alternative source should contractual arrangements fail. Unless DOD assesses and secures its rights for the use of technical data early in the weapon system acquisition process, when it has the greatest leverage to negotiate, DOD may face later challenges in developing sustainment plans or changing these plans as necessary over the life cycle of its weapon systems. Delaying action in acquiring technical data rights can make these data cost-prohibitive or difficult to obtain later in a weapon system’s life cycle. Once the decision is made to use long-term contracts, DOD faces choices on how to best incentivize contractor performance and manage costs. Of the 10 contracts we reviewed, we found that DOD programs that used contracts extending longer than 5 years made frequent use of incentives to motivate performance and of tools that provide insight into and control of costs. Program officials acknowledged, however, that in some instances incentive structures needed to be periodically revised to better incentivize contractor performance, and that they may not have sufficient insight into contractor costs.
Program offices using contracts lasting 5 years, on the other hand, made less use of incentives and generally did not have the ability to renegotiate contract prices, but believed that the shorter-term nature of the contracts mitigated some of their risks. Further, program offices now obtain incurred cost data for two contracts, which they expect will help them negotiate better contract prices. The various contract lengths, incentives, and cost-control tools across the programs we reviewed reflect the differences of each acquisition and the mission-specific maintenance approaches taken to support each weapon system, but the department has not collected information on their effectiveness in long-term maintenance contracts. Of the programs we reviewed, we found that the Air Force awarded five relatively longer-term contracts—between 9 and 22 years—that incentivized contractor performance and attempted to gain insight into and control costs in various ways. All five of these contracts used some combination of monetary or contract term incentives to encourage contractor performance. These programs varied, however, in terms of the approaches used to gain insight into the contractors’ costs. For example, the JSTARS program used cost-based incentive metrics, scheduled specific opportunities to renegotiate the contract’s price, and received incurred cost data. In contrast, the contract to maintain the C-130’s T56 engine did not use any of these approaches to gain cost insight. Table 4 summarizes the incentives and tools used to gain cost insight and cost control. Program offices can use incentives to motivate contractors to provide exceptional levels of contract performance. Three longer-term contracts we reviewed include monetary incentives in the form of an award fee or an incentive fee, while three contracts use contract term incentives in which a point system is used to award additional contract years.
Program officials acknowledged that the incentives needed to be adjusted at times. For example, the JSTARS program uses an award fee incentive to motivate short-term contractor performance and an award term incentive to motivate the contractor’s long-term performance. Over the course of the JSTARS contract, the contractor has earned nearly all the available award fee and award term years despite some serious performance issues in 2009. In this case, the Air Force identified several serious maintenance failures, including the presence of foreign objects in engine filters and aircraft structural damage resulting from maintenance errors, that were caused by the JSTARS contractor and which could have resulted in serious personal injury and loss of aircraft. Because the incentive structure encompasses the broad range of responsibilities assigned to the contractor, the contractor still earned most of that evaluation period’s available fee and enough award term points to earn another year of contractor performance. The fee-determining official noted that if it were possible, he would have given the contractor a much lower award fee and rating. While the failures were reflected in the award fee evaluation under three performance metrics, the contractor’s aggregate performance against the remaining metrics allowed it to earn 90 percent of the eligible fee for this 2009 evaluation period. The JSTARS program subsequently amended its award fee plan to make the contractor ineligible for 40 percent of the award fee if its performance caused or contributed to a major accident. The contractor has earned at least 95 percent of the available award fee for every other evaluation period since the contract was awarded in 2000. Program offices structured contract term incentives differently, which provided DOD different degrees of flexibility to award additional years of performance. 
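The fee arithmetic described above can be illustrated with a small sketch. The metric names, weights, and scores below are hypothetical and do not reproduce the actual JSTARS fee plan; they simply show how a weighted roll-up can yield 90 percent of the eligible fee even when a few metrics score poorly.

```python
# Hypothetical sketch of a weighted award-fee roll-up, illustrating how strong
# aggregate scores can mask failures on a few metrics. All metric names,
# weights, and scores are invented for illustration only.

def award_fee_percent(metrics: dict[str, tuple[float, float]]) -> float:
    """metrics maps name -> (weight, score in 0..1); weights should sum to 1.
    Returns the percentage of the eligible fee earned."""
    return 100 * sum(weight * score for weight, score in metrics.values())

plan = {
    "aircraft availability": (0.25, 1.00),
    "supply support":        (0.25, 1.00),
    "cost performance":      (0.20, 1.00),
    # three metrics dinged for serious maintenance failures:
    "maintenance quality":   (0.10, 0.70),
    "safety":                (0.10, 0.60),
    "structural integrity":  (0.10, 0.70),
}

print(award_fee_percent(plan))  # 90.0 — most of the fee, despite the failures
```

A plan amendment like the one the JSTARS program adopted would instead gate the calculation, for example by zeroing out a fixed share of the fee whenever performance contributed to a major accident.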
For example, the award term plans for the JSTARS and C-130 T56 engine contracts we reviewed guarantee additional years of work if contractors meet or exceed incentive metrics. Both the JSTARS and C-130 T56 engine contractors have earned the maximum number of possible award term years. Conversely, the current incentive option offered by the KC-10 program differs from the award terms used by the JSTARS or C-130 T56 contracts in key respects. The KC-10 program’s incentive includes “must-meet” metrics and a high degree of government discretion in awarding the additional incentive year. For example, even if the contractor meets all incentive metrics and earns the maximum available number of points needed to be considered for an additional incentive year, the program office can still decline to award the additional year. Additionally, if the contractor does not meet the standard set for any “must-meet” metric, it will not receive an incentive year. By structuring the incentive in this way, the program office mitigates the risk of the contractor earning incentives despite unsatisfactory performance, as in the previous JSTARS example. According to KC-10 officials, the contractor would not earn its first available incentive year, with an approximate contract value of $450 million, because it failed to provide continuous support for the initiation of global tanker support activities, a “must-meet” metric, among other performance shortcomings. Some of the programs that use longer-term contracts adjusted incentive metrics to influence contractor performance in areas needing improvement. For example, C-130J program officials said that since awarding the airframe maintenance contract in 2006, they have gradually added more incentive metrics to the airframe contract’s award fee plan to incentivize contractor performance in other areas.
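The gating logic of a must-meet incentive structure like the one described above can be sketched as follows; the metric names, point threshold, and values are hypothetical, not taken from the KC-10 contract.

```python
# Hypothetical sketch of a "must-meet" incentive structure: a contractor earns
# an extra incentive year only if it (1) meets every must-meet metric,
# (2) earns enough points, and (3) the government, at its discretion, chooses
# to award the year. All names and thresholds below are invented.

def earns_incentive_year(must_meet_results: dict[str, bool],
                         points: int,
                         threshold: int,
                         government_approves: bool) -> bool:
    if not all(must_meet_results.values()):
        return False          # any failed must-meet metric is disqualifying
    if points < threshold:
        return False          # insufficient points
    return government_approves  # the award remains discretionary

results = {"continuous tanker support": False, "mission capable rate": True}
print(earns_incentive_year(results, points=95, threshold=90,
                           government_approves=True))  # False: failed a must-meet metric
```

Contrast this with a pure point-based award term plan, where a high enough score alone guarantees the additional year regardless of failures on individual metrics.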
After the contractor improved performance in providing engineering services, the program office added an incentive metric to improve the contractor's performance for supply chain management. The programs using the five longer-term contracts we reviewed also use, to varying degrees, different tools to gain insight into and control costs over the term of the contracts, as illustrated by the following examples. The JSTARS cost-type contract was awarded on a non-competitive basis to the system's original equipment manufacturer, and the contractor bears little risk under this long-term arrangement, but the program office has taken measures to obtain insights into and control the contractor's costs. The program office used the incurred cost data it receives under the cost-type contract to help renegotiate contract prices during triennial reviews. Additionally, the program uses cost-based incentive metrics to evaluate performance for award fee and award term determinations. For example, under the terms of the program's January 2012 award fee plan, 10 percent of the contractor's award fee is determined by tracking cost performance against contract estimates. This same metric is used to represent 10 percent of award term determinations. In addition, cost containment is also evaluated as part of a weapon system improvement metric that accounts for 37 percent of award term determinations. The C-130J program structured its potentially 10-year airframe and 9-year engine maintenance firm-fixed-price contracts so that prices would be renegotiated at certain points during the contracts' durations. For example, the program office receives incurred cost data for the airframe contract, and has renegotiated prices three times since the contract was awarded in 2006, with another renegotiation scheduled for January 2014. 
Program officials said that receiving incurred cost data helped them negotiate a 13 percent reduction in total contract costs during the last scheduled price renegotiation in January 2012. Program officials told us they can also gain insight into cost baselines through regular contractor performance monitoring and evaluation. For example, according to officials, the contractor supporting airframe maintenance used a new system to track parts that allowed for better utilization of spare parts and led to a decrease in hours needed to perform the contract requirement. Program officials were able to negotiate a lower price for that contract requirement during the next scheduled price negotiation. In contrast, neither the C-130 T56 engine program nor the KC-10 program scheduled price renegotiations, despite establishing firm fixed prices for the entire potential 15- and 9-year lengths of their respective contracts. For example, the C-130 T56 engine contract has prices fixed for the entire 15-year potential term of the contract, with adjustments made for changes in best estimated quantities and for economic adjustments. Program officials expressed concern over their lack of insight into the contractor's incurred costs and added that having such information, along with scheduled price renegotiations at the 5-year and 10-year points in the contract, would likely have been helpful in controlling maintenance costs. While KC-10 program officials cited the benefit of competition to drive down prices for maintenance services, USD(AT&L) officials indicated that proposed contracts reflecting a similar approach, where prices for the entire duration of a long-term contract are set at award, would be reviewed carefully to ensure that the government's interests were adequately protected. 
The Army and Navy programs we reviewed used contracts with a maximum length of five years and generally did not make as frequent use of incentives or cost-control tools as programs using longer contracts. Army and Navy program officials indicated that they would prefer to use longer contracts in the future to enable contractors to invest in support infrastructure and improvements. Table summarizes the incentives and tools used for cost insight and cost control. Across the five contracts with a maximum length of five years, three used monetary incentives and none used incentives that lengthen the contract's term. The T-45 program office uses a performance bonus incentive, which allows the program to withhold monthly performance bonuses when contractor performance does not meet or exceed thresholds on both incentive metrics. The program measures ready-for-training availability and the maintenance cancellation rate. The contractor must meet or exceed performance thresholds for these metrics at all three locations where the aircraft are based to receive an overall bonus. As a result, the contractor could lose as much as 65 percent of the available bonus by not meeting requirements at a single location. According to program officials, this incentivizes the contractor to perform optimally at all three locations. Performance records show that the contractor has earned most of the available bonus since the contract was awarded in 2008. Similarly, the contract for CH-47 engine maintenance support includes a clause that allows the contractor to earn an incentive fee for reducing engine repair turn-around time. Since the contract began in 2011, there has been one evaluation period; the contractor did not meet the incentive metric and did not earn any incentive fee. On the other hand, MH-60 program officials told us that incentives in the form of additional payments are not necessary for their program's maintenance support contract. 
They added that the contractor is self-incentivized to maximize its profit in this firm-fixed-price contracting arrangement, which can be achieved through realizing efficiencies. Furthermore, they questioned the value of paying a contractor to provide services above and beyond what the program requires. Instead, contract provisions allow the government to reduce the contractor's payment if the contractor's work does not meet minimum thresholds. MH-60 program officials reported that they have not had to make any downward price adjustments because the contractor is exceeding contract requirements. Programs are now receiving incurred cost data to control maintenance costs for two five-year, firm-fixed-price contracts, though this approach was not used in previous contracts for the same services. Since 2009, the MH-60 program office has required the contractor to submit incurred cost data semiannually. Program officials said that they were directed by the Office of the Assistant Secretary of the Navy to request the contractor's incurred cost data and were supported by USD(AT&L) in negotiating for it. By comparing incurred costs and contract prices, program officials said that they were able to negotiate more favorable prices for the 2011 follow-on maintenance support contract. During the previous contract, the contractor was able to realize efficiencies that drove down its incurred costs. With access to this information, the MH-60 program was able to re-baseline contract costs and negotiate lower prices to reflect these efficiencies. The AH-64 and CH-47 programs also receive incurred cost data. A May 2011 DOD Inspector General audit found that the AH-64 and CH-47 programs were paying above fair and reasonable prices for parts supplied through their 5-year maintenance support contract. 
The Inspector General reviewed costs for 24 high-dollar parts and calculated that the contractor charged the Army about $13 million more than the fair and reasonable prices for 18 of the parts. Based on this finding, these programs began reviewing incurred costs for the highest-value parts supplied through this contract. The incurred cost review is being performed in parallel with a major update of total parts pricing on the contract, and program officials expect that there will be many downward price changes as a result. The program office plans to perform this review annually over the term of the contract. DOD has not collected information concerning the effectiveness of the various incentives or cost-control tools used on long-term maintenance contracts, but it has recognized efforts made by individual programs to improve acquisitions of such services. For example, during a December 2010 peer review of the MH-60 airframe contract, USD(AT&L) officials noted that the use of incurred cost data allowed the program to negotiate lower prices for certain services. Program officials told us that it was difficult to negotiate for incurred cost data for fixed-price contracts because contractors are generally reluctant to share their actual costs and seek to protect business-sensitive information. USD(AT&L) and military department officials told us that they are encouraging program officials to be more aggressive when asking for incurred costs, especially in situations where the government does not have the benefit of leveraging competition. DOD does not collect data on the extent to which long-term contracts are currently used and managed, but our assessment of 10 contracts shows the value of having such information. Decisions made early in the acquisition cycle, in particular whether DOD will buy the rights to technical data, are critical to preserving DOD's choices later in a program's life cycle. 
However, in the early stages, programs are often confronted with the choice between allocating scarce resources to enhance capability or maintaining future flexibility in terms of maintaining the system. Once the decision to forgo buying technical data is made, DOD's leverage in terms of being able to compete maintenance support or to provide it in house is largely lost. Programs must then rely on other, less powerful tools to assure good performance and good prices. The data we collected on eight programs that are in the process of making decisions related to securing access to technical data indicate that DOD is considering its future needs, but final decisions have yet to be made in several cases. The department also does not have information on the approaches used by various programs with long-term maintenance contracts to incentivize contractor performance and gain insight into contractor costs to help ensure that the government is getting the best value for its investment. DOD is considering several policy and data-related initiatives that could improve its insight on these contracts, but these efforts are in the early stages of development. Gaining insight into the department's use of long-term maintenance contracts as well as identifying lessons learned on what approaches work best to incentivize performance and control costs would help inform future acquisition strategies and reduce risk. 
To help inform DOD’s use of long-term maintenance contracts, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in coordination with cognizant offices within each of the military departments, to take the following two actions: Collect and analyze information on the use of long-term maintenance contracts by major weapon system programs; and Collect and disseminate lessons learned or best practices regarding the use of incentives and cost-control tools that can maximize the government’s leverage when considering the future use of such contracts. DOD provided written comments on a draft of this report, stating that it concurred with both recommendations. DOD stated that it planned to develop methodologies to collect the needed information and disseminate best practices and lessons learned, but did not provide timeframes for doing so. We recognize that weighing options will take some time, but encourage the department to do so in a timely fashion. DOD’s written response is reprinted in appendix II. DOD also provided technical comments that were incorporated as appropriate. We are sending copies of this report to the Secretary of Defense and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Belva M. Martin at (202) 512-4841 or [email protected] or Cary Russell at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To gain insight into how long-term maintenance contracts are managed by the Department of Defense (DOD), we assessed (1) the extent to which DOD uses long-term maintenance contracts to support major weapon system programs, (2) DOD's ability to select alternative maintenance services providers for its major weapon system programs, and (3) how long-term maintenance contracts have been structured to incentivize contractors' performance and manage contractor costs. After consulting with DOD acquisition and logistics officials, for the purposes of this report we defined long-term maintenance contracts as those with a total potential period of performance of at least 5 years that provide depot-level maintenance services or support performance of maintenance functions. Additionally, we limited the scope of our review to include those long-term contracts that support major defense acquisition programs. The Federal Procurement Data System-Next Generation is the federal government's current system for tracking information on contracting actions. We also collected information on the use of long-term maintenance contracts from program offices and program executive offices in each of the military departments. However, due to data reliability issues and incomplete responses, we determined that we could not use the information collected with reasonable assurance of accuracy for department-wide analysis of long-term maintenance contracting use and management. Based on further discussions with military department officials, we reviewed 10 long-term contracts supporting seven major defense acquisition programs. We selected these contracts to represent each of the military departments and to illustrate different maintenance approaches. 
The programs we selected included the following:
- C-130 Hercules transport aircraft
- KC-10 Extender refueling tanker aircraft
- Joint Surveillance Target Attack Radar System (JSTARS)
- AH-64 Apache helicopter
- CH-47 Chinook helicopter
- MH-60 Seahawk helicopter
- T-45 Goshawk training aircraft

To determine the extent to which DOD has the ability to select alternative maintenance services providers for its major weapon system programs, we reviewed DOD and military department policy and interviewed senior officials in the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and military department officials to determine how maintenance options are considered and what factors contribute to retaining program flexibility for sourcing depot-level maintenance. For the programs we reviewed, we examined acquisition plans to determine how the government decided upon a contract-based approach to maintenance. We interviewed cognizant program officials to determine the factors that impact the government's ability to change maintenance providers, focusing on the ability to transition contracted maintenance work to a government-owned and government-operated depot and the ability to recompete maintenance contracts. We also requested a list of major defense acquisition programs that recently went through an acquisition review and preliminary information on provisions for acquiring technical data rights. For the eight programs DOD identified as having such a review between October 2010 and October 2011, we interviewed program officials and reviewed acquisition documents, such as acquisition strategies and life-cycle sustainment plans, which described the rationale for the program's plans to acquire technical data rights. 
To assess how long-term maintenance contracts were structured to incentivize contractors’ performance and manage contractor costs, we reviewed acquisition plans, contractual information, including pricing data and price negotiation memorandums, and interviewed cognizant acquisition and logistics officials to understand the incentives and tools used by program offices to motivate contractor performance and provide visibility into contractor costs. For the 10 contracts we selected, we reviewed programs’ use of monetary incentives such as award and incentive fees, performance bonuses, and downward price adjustments. Additionally, we reviewed programs’ use of contract term incentives, specifically award terms and incentive options, which can extend a contract’s period of performance. We analyzed incentive plans and contractor performance data to determine how performance was assessed, recorded, and resulted in the award of fee or additional years of contracted work. We also interviewed program officials on the use of incentives and compared prior versions of incentive plans to determine how incentive metrics changed over time. For the 10 contracts we reviewed, we identified the extent to which programs used incurred cost data, price renegotiations, and cost-based incentive metrics as a means to gain insight into contractor costs. We also interviewed officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, the military departments, and program offices on the benefits and risks associated with long-term contracts. We conducted this performance audit from February 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Carleen Bennett, Assistant Director; Timothy DiNapoli, Assistant Director; Steven Banovac; Lee Cooper; Julia Kennon; John Krump; Wiktor Niewiadomski; Bob Swierczek; and Tom Twambly made key contributions to this report.

DOD spends billions annually to maintain its weapon systems and, at times, uses long-term maintenance contracts with a potential period of performance of 5 years or more. These contracts can encourage contractors to invest in new facilities, equipment, and processes, but may hinder DOD's ability to incentivize contractors' performance and control costs, especially in the absence of a competitive environment or if DOD does not acquire access to technical data that can enable DOD to select an alternative maintenance provider. GAO was asked to evaluate (1) the extent to which DOD uses long-term maintenance contracts, (2) DOD's ability to select alternative maintenance providers, and (3) how these contracts have been structured to incentivize performance and manage cost. GAO reviewed a nongeneralizable sample of 10 long-term contracts to illustrate different maintenance approaches. GAO interviewed program officials and reviewed contract documentation. GAO also reviewed information on eight programs recently reviewed by DOD to determine how these programs addressed technical data needs. At the departmental level, neither the Department of Defense (DOD) nor the individual military departments know the extent to which weapon system programs rely on long-term maintenance contracts. DOD policy requires DOD and the military departments to approve acquisition strategies and life-cycle sustainment plans, which include information on contractor support, but DOD officials reported that they do not collect information on the use of long-term contracts. 
DOD's limited visibility over long-term maintenance contracts reflects broader DOD challenges with managing services acquisition. GAO's past work has identified the need for DOD to obtain better data on its contracted services to enable it to make more strategic decisions. DOD is considering a number of policy- and data-related initiatives that could improve its knowledge of these contracts, but these efforts are in the early stages of development. Decisions made early in the acquisition process can limit DOD's ability to select alternative maintenance providers over the life cycle of a weapon system program. Program officials believed that DOD had the ability to select alternative service providers for half of the contracts GAO reviewed, as DOD either had sufficient technical data or there was an existing competitive environment. DOD officials believed the lack of technical data, funding, or expertise would hinder them from selecting alternative service providers on the other contracts GAO reviewed. Recent legislation and DOD's 2010 efficiency initiatives emphasize the importance of technical data considerations. GAO found that eight weapon systems that underwent DOD acquisition-related reviews between October 2010 and October 2011 considered technical data issues, but not all have determined the extent to which they will acquire these data or the cost to do so. Once the decision is made to use long-term contracts, DOD faces choices on how to best incentivize contractor performance and manage costs. GAO found that the 10 long-term maintenance contracts reviewed varied in terms of the incentives employed and tools used to gain insight into contractor costs. For example, GAO found that all 5 contracts with the longest durations, potentially ranging from 9 to 22 years, used monetary incentives such as award or incentive fees, or contract term incentives that can extend the life of the contract by several years. 
However, DOD and program officials expressed some concerns about the lack of insight on contractors' costs. In two cases, program offices established fixed prices for the entire potential length of the 9- and 15-year contracts without the ability to renegotiate prices or obtain incurred cost data. In comparison to the contracts with the longest durations, the five contracts GAO reviewed with maximum lengths of 5 years made less use of incentives or cost-control tools and generally did not have the ability to renegotiate contract prices, but program officials believed that the shorter-term nature of the contracts mitigated some of their risks. DOD does not collect information concerning the effectiveness of the various incentives or cost-control tools used on long-term maintenance contracts, but it has identified efforts made by individual programs to improve acquisition of maintenance services. Developing lessons learned on what incentives and cost-control tools work best would help inform future acquisition strategies and reduce risk. GAO recommends that DOD collect information on the extent to which DOD uses long-term maintenance contracts and develop lessons learned regarding the use of incentives and cost-control tools. DOD concurred with each of the recommendations and indicated that it would develop methodologies to implement them. 
The credibility of USDA’s efforts to correct long-standing problems in resolving customer and employee discrimination complaints has been undermined by faulty reporting of complaint data, including disparities we found when comparing various ASCR sources of data. When ASCR was created in 2003, there was an existing backlog of complaints that had not been adjudicated. In response, the Assistant Secretary for Civil Rights at that time called for a concerted 12-month effort to reduce this backlog and to put lasting improvements in place to prevent future complaint backlogs. In July 2007, ASCR reported that it had reduced its backlog of 690 complaints and held the complaint inventory to manageable levels through fiscal year 2005. However, the data ASCR reported lack credibility because they were inconsistent with other complaint data it reported a month earlier to a congressional subcommittee. The backlog later surged to 885 complaints, according to ASCR data. Furthermore, the Assistant Secretary’s letter transmitting these data stated that while they were the best available, they were incomplete and unreliable. In addition, GAO and USDA’s OIG have identified other problems with ASCR’s data, including the need for better management controls over the entry and validation of these data. In addition, some steps that ASCR took to speed up its investigations and decisions on complaints in 2004 may have adversely affected the quality of its work. ASCR’s plan called for USDA’s investigators and adjudicators, who prepare agency decisions, to nearly double their normal pace of casework for about 12 months. ASCR’s former Director, Office of Adjudication and Compliance, stated that this increased pace led to many “summary” decisions on employees’ complaints that did not resolve questions of fact, with the result that many decisions were appealed to the Equal Employment Opportunity Commission. 
This official also said these summary decisions “could call into question the integrity of the process because important issues were being overlooked.” In addition, inadequate working relationships and communications within ASCR, as well as fear of retaliation for reporting management-related problems, complicated ASCR’s efforts to produce quality work products. In August 2008, ASCR officials stated they would develop standard operating procedures for the Office of Adjudication and Compliance and had provided USDA staff training on communication and conflict management, among other things. While these are positive steps, they do not directly respond to whether USDA is adequately investigating complaints, developing thorough complaint decisions, and addressing the problems that gave rise to discrimination complaints within ASCR. The Food, Conservation, and Energy Act of 2008 (2008 Farm Bill), enacted in June 2008, states that it is the sense of Congress that all pending claims and class actions brought against USDA by socially disadvantaged farmers and ranchers should be resolved in an expeditious and just manner. In addition, the 2008 Farm Bill requires USDA to report annually on, among other things, the number of customer and employee discrimination complaints filed against each USDA agency, and the length of time the agency took to process each complaint. In October 2008, we recommended that the Secretary of Agriculture take the following actions related to resolving discrimination complaints: Prepare and implement an improvement plan for resolving discrimination complaints that sets time frame goals and provides management controls for resolving complaints from beginning to end. Develop and implement a plan to ensure the accuracy, completeness and reliability of ASCR’s databases on customer and employee complaints, and that provides for independent validation of ASCR’s data quality. 
Obtain an expert, independent, and objective legal examination of the basis, quality, and adequacy of a sample of USDA’s prior investigations and decisions on civil rights complaints, along with suggestions for improvement. USDA agreed with the first two recommendations, but initially disagreed with the third, asserting that its internal system of legal sufficiency addresses our concerns, works well, and is timely and effective. Given the substantial evidence of civil rights case delays and questions about the integrity of USDA’s civil rights casework, we believe this recommendation remains valid and necessary to restore confidence in USDA’s civil rights decisions. In April 2009, ASCR officials said that USDA now agrees with all three of the recommendations and that the department is taking steps to implement them. These steps include hiring a consultant to assist ASCR with setting timeframe goals and establishing proper management controls; a contractor to help move data from ASCR’s three complaint databases into one; and a firm to provide ASCR with independent legal advice on developing standards on what constitutes a program complaint and actions needed to adjudicate those complaints. As required by the 2002 farm bill, ASCR has published three annual reports on the participation rate of socially disadvantaged farmers and ranchers in USDA programs. The reports are to provide statistical data on program participants by race and ethnicity, among other things. However, much of these data are unreliable because USDA lacks a uniform method of reporting and tabulating race and ethnicity data among its component agencies. According to USDA, to collect standardized demographic data directly from participants in many of its programs, it must first obtain OMB’s approval. In the meantime, most of USDA’s demographic data are gathered by visual observation of program applicants, a method that is inherently unreliable and subjective, especially for determining ethnicity. 
To address this problem, ASCR published a notice in the Federal Register in 2004 seeking public comment on its plan to collect standardized data on race, ethnicity, gender, national origin, and age for all its programs. However, while it received some comments, ASCR has not moved forward to finalize this rulemaking and obtain OMB's approval to collect these data. The 2008 Farm Bill contains several provisions related to reporting on minority farmers' participation in USDA programs. First, it requires USDA to annually compile program application and participation rate data for each program serving those farmers. These reports are to include the raw numbers and participation rates for the entire United States and for each state and county. Second, it requires USDA to ensure, to the maximum extent practicable, that the Census of Agriculture and studies by USDA's Economic Research Service accurately document the number, location, and economic contributions of minority farmers in agricultural production. In October 2008, to address the underlying data reliability issues discussed above and to identify potential steps USDA could take to facilitate data analysis by users, we recommended that the Secretary of Agriculture work expeditiously to obtain OMB's approval to collect the demographic data necessary for reliable reporting on race and ethnicity by USDA program. USDA agreed with the recommendation. In April 2009, ASCR officials indicated that a draft Federal Register notice requesting OMB's approval to collect these data for Farm Service Agency, Natural Resources Conservation Service, and Rural Development programs is being reviewed within USDA. These officials said they hoped this notice, which they considered an initial step toward implementing our recommendation, would be published and implemented in time for USDA's field offices to begin collecting these data by October 1, 2009. 
According to these officials, USDA also plans to seek, at a later time, authority to collect such data on participants in all USDA programs. In light of USDA’s history of civil rights problems, better strategic planning is vital. Results-oriented strategic planning provides a road map that clearly describes what an organization is attempting to achieve and, over time, it can serve as a focal point for communication with Congress and the public about what has been accomplished. Results-oriented organizations follow three key steps in their strategic planning: (1) they define a clear mission and desired outcomes, (2) they measure performance to gauge progress, and (3) they use performance information for identifying performance gaps and making program improvements. ASCR has started to develop a results-oriented approach as illustrated in its first strategic plan, Assistant Secretary for Civil Rights: Strategic Plan, Fiscal Years 2005-2010, and its ASCR Priorities for Fiscal Years 2007 and 2008. However, ASCR’s plans do not include fundamental elements required for effective strategic planning. In particular, we found that the interests of ASCR’s stakeholders—including representatives of community-based organizations and minority interest groups—are not explicitly reflected in its strategic plan. For example, we found that ASCR’s stakeholders are interested in improvements in (1) USDA’s methods of delivering farm programs to facilitate access by underserved producers; (2) the county committee system, so that stakeholders are better represented in local decisions; and (3) the diversity of USDA employees who work with minority producers. A more complete list of these interests is included in the appendix. In addition, ASCR’s strategic plan does not link to the plans of other USDA agencies or the department and does not discuss the potential for linkages to be developed. 
ASCR could also better measure performance to gauge progress, and it has not yet started to use performance information for identifying USDA performance gaps. For example, ASCR measures USDA efforts to ensure USDA customers have equal and timely access to programs by reporting on the numbers of participants at USDA workshops rather than measuring the results of its outreach efforts on access to benefits and services. Moreover, the strategic plan does not make linkages between levels of funding and ASCR’s anticipated results; without such a discussion, it is not possible to determine whether ASCR has the resources needed to achieve its strategic goal of, for example, strengthening partnerships with historically black land-grant universities through scholarships provided by USDA. To help ensure access to and equitable participation in USDA’s programs and services, the 2008 Farm Bill provided for establishing the Office of Advocacy and Outreach and charged it with, among other things, establishing and monitoring USDA’s goals and objectives to increase participation in USDA programs by small, beginning, and socially disadvantaged farmers and ranchers. As of April 2009, ASCR officials indicated that the Secretary of Agriculture plans to establish this office but has not yet done so. In October 2008, we recommended that USDA develop a results-oriented department-level strategic plan for civil rights that unifies USDA’s departmental approach with that of ASCR and the newly created Office of Advocacy and Outreach and that is transparent about USDA’s efforts to address stakeholder concerns. USDA agreed with this recommendation. In April 2009, ASCR officials said they plan to implement this recommendation during the next department-wide strategic planning process, which occurs every 5 years. Noting that the current plan runs through 2010, these officials said that work on the new plan would likely start in the next few months. 
Our past work in addressing the problems of high-risk, underperforming federal agencies, as well as our reporting on results-oriented management, suggests three options that could benefit USDA’s civil rights performance. These options were selected based on our judgment that they (1) can help address recognized and long-standing problems in USDA’s performance, (2) have been used previously by Congress to improve aspects of agency performance, (3) have contributed to improved agency performance, and (4) will result in greater transparency over USDA’s civil rights performance. These options include (1) making USDA’s Assistant Secretary for Civil Rights subject to a statutory performance agreement, (2) establishing an agriculture civil rights oversight board, and (3) creating an ombudsman for agriculture civil rights matters. Our prior assessment of performance agreements used at several agencies has shown that these agreements have potential benefits that could help improve the performance of ASCR. Potential benefits that performance agreements could provide USDA include (1) helping to define accountability for specific goals and align daily operations with results- oriented programmatic goals, (2) fostering collaboration across organizational boundaries, (3) enhancing use of performance information to make program improvements, (4) providing a results-oriented basis for individual accountability, and (5) helping to maintain continuity of program goals during leadership transitions. Congress has required performance agreements in other federal offices and the results have been positive. For example, in 1998, Congress established the Department of Education’s Office of Federal Student Aid as the government’s first performance-based organization. This office had experienced long-standing financial and management weaknesses and we had listed the Student Aid program as high-risk since 1990. 
Congress required the office’s Chief Operating Officer to have a performance agreement with the Secretary of Education that was transmitted to congressional committees and made publicly available. In addition, the office was required to report to Congress annually on its performance, including the extent to which it met its performance goals. In 2005, because of the sustained improvements made by the office in its financial management and internal controls, we removed this program from our high-risk list. More recently, Congress has required statutory performance agreements for other federal executives, including for the Commissioners of the U.S. Patent and Trademark Office and the Under Secretary for Management of the Department of Homeland Security. A statutory performance agreement could benefit ASCR. The responsibilities assigned to USDA’s Assistant Secretary for Civil Rights were stated in general terms in both the 2002 Farm Bill and the Secretary’s memorandum establishing this position within USDA. The Secretary’s memorandum stated that the Assistant Secretary reports directly to the Secretary and is responsible for (1) ensuring USDA’s compliance with all civil rights laws and related laws, (2) coordinating administration of civil rights laws within USDA, and (3) ensuring that civil rights components are incorporated in USDA strategic planning initiatives. This set of responsibilities is broad in scope, and it does not identify specific performance expectations for the Assistant Secretary. A statutory performance agreement could assist in achieving specific expectations by providing additional incentives and mandatory public reporting. In October 2008, we suggested that Congress consider the option of making USDA’s Assistant Secretary for Civil Rights subject to a statutory performance agreement. USDA initially disagreed with this suggestion, in part stating that the Assistant Secretary’s responsibilities are spelled out in the 2002 and 2008 farm bills. 
In response, we noted, in part, that a statutory performance agreement would go beyond the existing legislation by requiring measurable organizational and individual goals in key performance areas. In April 2009, ASCR officials indicated that the department no longer disagrees with this suggestion. However, these officials expressed the hope that the actions they are taking or planning to improve the management of civil rights at USDA, such as obtaining an independent external analysis of program delivery, will preclude the need for this mechanism. Congress could also authorize a USDA civil rights oversight board to independently monitor, evaluate, approve, and report on USDA’s administration of civil rights activities, as it has for other federal activities. Oversight boards have often been used by the federal government—such as for oversight of public accounting, intelligence matters, civil liberties, and drug safety—to provide assurance that important activities are well done, to identify weaknesses that may need to be addressed, and to provide for transparency. For example, Congress established the Internal Revenue Service (IRS) Oversight Board in 1998 to oversee IRS’s administration of internal revenue laws and ensure that its organization and operation allow it to carry out its mission. At that time, IRS was considered to be an agency that was not effectively serving the public or meeting taxpayer needs. The board operates much like a corporate board of directors, tailored to fit the public sector. The board provides independent oversight of IRS administration, management, conduct, and the direction and supervision of the application of the internal revenue code. We have previously noted the work of the Internal Revenue Service Oversight Board—including, for example, the board’s independent analysis of IRS business systems modernization. Currently, there is no comparable independent oversight of USDA civil rights activities. 
In October 2008, we suggested that Congress consider the option of establishing a USDA civil rights oversight board to independently monitor, evaluate, approve, and report on USDA’s administration of civil rights activities. Such a board could provide additional assurance that ASCR management functions effectively and efficiently. USDA initially disagreed with this suggestion, stating that it would be unnecessarily bureaucratic and delay progress. In response, we noted that a well-operated oversight board could be the source of timely and wise counsel to help raise USDA’s civil rights performance. In April 2009, ASCR officials said that the department no longer disagrees with this suggestion. However, these officials expressed the hope that the actions they are taking or planning to address our recommendations to improve the management of civil rights at USDA will preclude the need for this mechanism. An ombudsman for USDA civil rights matters could be created to address the concerns of USDA customers and employees. Many other agencies have created ombudsman offices for addressing employees’ concerns, as authorized by the Administrative Dispute Resolution Act. However, an ombudsman is not merely an alternative means of resolving employees’ disputes; rather, the ombudsman is a neutral party who uses a variety of procedures, including alternative dispute resolution techniques, to deal with complaints, concerns, and questions. Ombudsmen who handle concerns and inquiries from the public—external ombudsmen—help agencies be more responsive to the public through impartial and independent investigation of citizens’ complaints, including those of people who believe their concerns have not been dealt with fairly and fully through normal channels. For example, we reported that ombudsmen at the Environmental Protection Agency serve as points of contact for members of the public who have concerns about certain hazardous waste cleanup activities. 
We also identified the Transportation Security Administration ombudsman as one who serves external customers and is responsible for recommending and influencing systemic change where necessary to improve administration operations and customer service. Within the federal workplace, ombudsmen provide an informal alternative to existing and more formal processes to deal with employees’ workplace conflicts and other organizational climate issues. USDA faces concerns of fairness and equity from both customers and employees—a range of issues that an ombudsman could potentially assist in addressing. A USDA ombudsman who is independent, impartial, fully capable of conducting meaningful investigations and who can maintain confidentiality could assist in resolving these civil rights concerns. As of April 2007, 12 federal departments and 9 independent agencies reported having 43 ombudsmen. In October 2008, we recommended that USDA explore the potential for an ombudsman office to contribute to addressing the civil rights concerns of USDA customers and employees, including seeking legislative authority, as appropriate, to establish such an office and to ensure its effectiveness, and advise USDA’s congressional oversight committees of the results. USDA agreed with this recommendation. In April 2009, ASCR officials indicated that the Assistant Secretary for Civil Rights has convened a team to study the ombudsman concept and to make recommendations by September 30, 2009, to the Secretary of Agriculture for establishing an ombudsman office. USDA has been addressing allegations of discrimination for decades and receiving recommendations for improving its civil rights functions without achieving fundamental improvements. One lawsuit has cost taxpayers about a billion dollars in payouts to date, and several other groups are seeking redress for similar alleged discrimination. 
While ASCR’s established policy is to fairly and efficiently respond to complaints of discrimination, its efforts to establish the management system necessary to implement the policy have fallen short, and significant deficiencies remain. Unless USDA addresses several fundamental concerns about resolving discrimination complaints—including the lack of credible data on the numbers, status, and management of complaints; the lack of specified time frames and management controls for resolving complaints; questions about the quality of complaint investigations; and concerns about the integrity of final decision preparation—the credibility of USDA efforts to resolve discrimination complaints will be in doubt. In addition, unless USDA obtains accurate data on minority participation in USDA programs, its reports on improving minority participation in USDA programs will not be reliable or useful. Furthermore, without better strategic planning and meaningful performance measures, it appears unlikely that USDA management will be fully effective in achieving its civil rights mission. Given the new Administration’s commitment to giving priority attention to USDA’s civil rights problems, various options may provide a road map to correcting long-standing management deficiencies that have given rise to these problems. Specifically, raising the public profile for transparency and accountability through means such as a statutory performance agreement between the Secretary of Agriculture and the Assistant Secretary for Civil Rights, a civil rights oversight board, and an ombudsman for addressing customers’ and employees’ civil rights concerns would appear to be helpful steps because they have proven to be effective in raising the performance of other federal agencies. These options could lay a foundation for clarity about the expectations USDA must meet to restore confidence in its civil rights performance. Mr. Chairman, this concludes my prepared statement. 
I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames, Director, Natural Resources and Environment, (202) 512-2649 or [email protected]. Key contributors to this statement were James R. Jones, Jr., Assistant Director; Kevin S. Bray; Nancy Crothers; Nico Sloss; and Alex M. Winograd.

USDA outreach programs for underserved producers could be much better.
Systematic data on minority participation in USDA programs are not available.
The 10708 Report and Minority Farm Register have been ineffective.
Partnerships with community-based organizations could be better used.
Methods of USDA program delivery need to better facilitate the participation of underserved producers and address their needs.
USDA could do more to provide assistance in accessing markets and programs.
USDA could better address cultural and language differences for providing services.
Some USDA program rules and features hinder participation by underserved producers.
Some USDA employees have little incentive to work with small and minority producers.
County offices working with underserved producers continue to lack diversity, and some have poor customer service or display discriminatory behaviors toward underserved producers.
USDA lacks a program that addresses farmworker needs.
There continue to be reports of cases where USDA has not processed loans for underserved producers.
Some Hmong poultry farmers with guaranteed loans facilitated by USDA are experiencing foreclosures.
The county committee system does not represent minority producers well.
Minority advisers are ineffective because they have no voting power.
USDA has not done enough to make underserved producers fully aware of county committee elections, and underserved producers have difficulties winning elections.
There is a lack of USDA investment in research and extension services that would determine the extent of minority needs.
The Census of Agriculture needs to better count minority producers.
USDA may continue to be foreclosing on farms belonging to producers who are awaiting decisions on discrimination complaints.
ASCR needs authority to exercise leadership for making changes at USDA.
USDA and ASCR need additional resources to carry out civil rights functions.
Greater diversity among USDA employees would facilitate USDA’s work with minority producers.
Producers must still access services through some USDA employees who discriminated against them.
The Office of Adjudication and Compliance needs better management structure and function.
Backlogs of discrimination complaints need to be addressed.
Alternative dispute resolution techniques to resolve informal employee complaints should be used consistently and documented.
Civil rights compliance reviews of USDA agencies are behind schedule and should be conducted.
USDA’s Office of General Counsel continues to be involved in complaint cases.

U.S. Department of Agriculture: Recommendations and Options to Address Management Deficiencies in the Office of the Assistant Secretary for Civil Rights. GAO-09-62. Washington, D.C.: October 22, 2008.
U.S. Department of Agriculture: Management of Civil Rights Efforts Continues to Be Deficient Despite Years of Attention. GAO-08-755T. Washington, D.C.: May 14, 2008.
Pigford Settlement: The Role of the Court-Appointed Monitor. GAO-06-469R. Washington, D.C.: March 17, 2006.
Department of Agriculture: Hispanic and Other Minority Farmers Would Benefit from Improvements in the Operations of the Civil Rights Program. GAO-02-1124T. Washington, D.C.: September 25, 2002.
Department of Agriculture: Improvements in the Operations of the Civil Rights Program Would Benefit Hispanic and Other Minority Farmers. GAO-02-942. Washington, D.C.: September 20, 2002.
U.S. Department of Agriculture: Resolution of Discrimination Complaints Involving Farm Credit and Payment Programs. GAO-01-521R. Washington, D.C.: April 12, 2001.
U.S. Department of Agriculture: Problems in Processing Discrimination Complaints. T-RCED-00-286. Washington, D.C.: September 12, 2000.

For decades, there have been allegations of discrimination in the U.S. Department of Agriculture (USDA) programs and workforce. Reports and congressional testimony by the U.S. Commission on Civil Rights, the U.S. Equal Employment Opportunity Commission, a former Secretary of Agriculture, USDA's Office of Inspector General, GAO, and others have described weaknesses in USDA's programs--in particular, in resolving complaints of discrimination and in providing minorities access to programs. The Farm Security and Rural Investment Act of 2002 authorized the creation of the position of Assistant Secretary for Civil Rights (ASCR), giving USDA an executive who could provide leadership for resolving these long-standing problems. This testimony focuses on USDA's efforts to (1) resolve discrimination complaints, (2) report on minority participation in USDA programs, and (3) strategically plan its efforts. This testimony is based on new and prior work, including analysis of ASCR's strategic plan; discrimination complaint management; and about 120 interviews with officials of USDA and other federal agencies, as well as 20 USDA stakeholder groups. USDA officials reviewed the facts upon which this statement is based, and we incorporated their additions and clarifications as appropriate. GAO plans a future report with recommendations. ASCR's difficulties in resolving discrimination complaints persist--ASCR has not achieved its goal of preventing future backlogs of complaints. 
At a basic level, the credibility of USDA's efforts has been and continues to be undermined by ASCR's faulty reporting of data on discrimination complaints and disparities in ASCR's data. Even such basic information as the number of complaints is subject to wide variation in ASCR's reports to the public and the Congress. Moreover, ASCR's public claim in July 2007 that it had successfully reduced a backlog of about 690 discrimination complaints in fiscal year 2004 and held its caseload to manageable levels drew a questionable portrait of progress. By July 2007, ASCR officials were well aware they had not succeeded in preventing future backlogs--they had another backlog on hand, and this time the backlog had surged to an even higher level of 885 complaints. In fact, ASCR officials were in the midst of planning to hire additional attorneys to address that backlog, including some complaints dating from the early 2000s that ASCR had not resolved. In addition, some steps ASCR had taken may have actually been counterproductive and affected the quality of its work. For example, an ASCR official stated that some employees' complaints had been addressed without resolving basic questions of fact, raising concerns about the integrity of the practice. Importantly, ASCR does not have a plan to correct these many problems. USDA has published three annual reports--for fiscal years 2003, 2004, and 2005--on the participation of minority farmers and ranchers in USDA programs, as required by law. USDA's reports are intended to reveal the gains or losses that these farmers have experienced in their participation in USDA programs. However, USDA considers the data it has reported to be unreliable because they are based on USDA employees' visual observations about participants' race and ethnicity, which may or may not be correct, especially for ethnicity. USDA needs the approval of the Office of Management and Budget (OMB) to collect more reliable data. 
ASCR started to seek OMB's approval in 2004 but, as of May 2008, had not followed through to obtain it. ASCR staff planned to meet again on this matter in May 2008. GAO found that ASCR's strategic planning is limited and does not address key steps needed to achieve the Office's mission of ensuring that USDA provides fair and equitable services to all customers and upholds the civil rights of its employees. For example, a key step in strategic planning is to discuss the perspectives of stakeholders. ASCR's strategic planning does not address the diversity of USDA's field staff even though ASCR's stakeholders told GAO that such diversity would facilitate interaction with minority and underserved farmers. Also, ASCR could better measure performance to gauge its progress in achieving its mission. For example, it counts the number of participants in training workshops as part of its outreach efforts rather than measuring access to farm program benefits and services. Finally, ASCR's strategic planning does not link levels of funding with anticipated results or discuss the potential for using performance information for identifying USDA's performance gaps.
Federal agencies increasingly rely on computerized information systems and electronic data to conduct operations and carry out their missions. Protecting federal computer systems has never been more important due to advances in the sophistication and effectiveness of attack technology and methods, the rapid growth of zero-day exploits and attacks, and the increasing number of security incidents occurring at organizations and federal agencies. Information security is especially important for federal agencies, which increasingly use information systems to deliver services to the public and to ensure the confidentiality, integrity, and availability of information and information systems. Without proper safeguards, there is risk of data theft, compromise, or loss by individuals and groups due to negligence or malicious intent within or outside of the organization. To fully understand the potential significance of information security weaknesses, it is necessary to link them to the risks they present to federal operations and assets. Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. The weaknesses place a broad array of federal operations and assets at risk. For example:

Resources, such as federal payments and collections, could be lost or stolen.
Computer resources could be used for unauthorized purposes or to launch attacks on other computer systems.
Sensitive information, such as taxpayer data, social security records, medical records, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of industrial espionage or other types of crime.
Critical operations, such as those supporting national defense and emergency services, could be disrupted.
Data could be modified or destroyed for purposes of fraud, identity theft, or disruption.
Agency missions could be undermined by embarrassing incidents that result in diminished confidence in the ability of federal organizations to conduct operations and fulfill their responsibilities.

Recognizing the importance of securing federal systems and data, Congress passed FISMA in 2002, which set forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA’s framework creates a cycle of risk management activities necessary for an effective security program, and these activities are similar to the principles noted in our study of the risk management activities of leading private sector organizations—assessing risk, establishing a central management focal point, implementing appropriate policies and procedures, promoting awareness, and monitoring and evaluating policy and control effectiveness. In order to ensure the implementation of this framework, the act assigns specific responsibilities to agency heads, chief information officers (CIO), IGs, and NIST (depicted in fig. 1). It also assigns responsibilities to OMB, which include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security and reviewing agency information security programs, at least annually, and approving or disapproving them. FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. 
Specifically, it requires information security programs that, among other things, include:

periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;
risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system;
subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate;
security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency;
periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems;
a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency;
procedures for detecting, reporting, and responding to security incidents; and
plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

In addition, agencies must produce an annually updated inventory of major information systems (including major national security systems) operated by the agency or under its control, which includes an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. 
FISMA also requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. In addition, agency heads are required to report annually the results of their independent evaluations to OMB, except to the extent that an evaluation pertains to a national security system; then only a summary and assessment of that portion of the evaluation needs to be reported to OMB. Under FISMA, the IG for each agency must perform an independent annual evaluation of the agency’s information security program and practices. The evaluation should include testing of the effectiveness of information security policies, procedures, and practices of a representative subset of agency systems. In addition, the evaluation must include an assessment of the compliance with the act and any related information security policies, procedures, standards, and guidelines. For agencies without an IG, evaluations of nonnational security systems must be performed by an independent external auditor. Evaluations related to national security systems are to be performed by an entity designated by the agency head. Under FISMA, NIST is tasked with developing, for systems other than national security systems, standards and guidelines that must include, at a minimum (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. 
NIST must also develop a definition of and guidelines for detection and handling of information security incidents as well as guidelines, developed in conjunction with the Department of Defense and the National Security Agency, for identifying an information system as a national security system. The law also assigns other information security functions to NIST, including:

providing technical assistance to agencies on such elements as compliance with the standards and guidelines and the detection and handling of information security incidents;
evaluating private-sector information security policies and practices and commercially available information technologies to assess potential application by agencies;
evaluating security policies and practices developed for national security systems to assess their potential application by agencies; and
conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security.

NIST is also required to prepare an annual public report on activities undertaken in the previous year and planned for the coming year. 
FISMA states that the Director of OMB shall oversee agency information security policies and practices, including:

developing and overseeing the implementation of policies, principles, standards, and guidelines on information security;
requiring agencies to identify and provide information security protections commensurate with the risk and magnitude of the harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of an agency, or of information systems used or operated by an agency, by a contractor of an agency, or by another organization on behalf of an agency;
coordinating information security policies and procedures with related information resource management policies and procedures;
overseeing agency compliance with FISMA to enforce accountability; and
reviewing at least annually, and approving or disapproving, agency information security programs.

In addition, the act requires that OMB report to Congress no later than March 1 of each year on agency compliance with FISMA. Significant control weaknesses in information security policies and practices threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of most federal agencies. These persistent weaknesses expose sensitive data to significant risk, as illustrated by recent reported incidents at various agencies. Further, our work and reviews by IGs note significant information security control deficiencies that place a broad array of federal operations and assets at risk. Since January 2006, federal agencies have reported a spate of security incidents that have put sensitive data at risk, including the theft, loss, or improper disclosure of personally identifiable information on millions of Americans, thereby exposing them to loss of privacy and potential harm associated with identity theft. 
Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following reported examples illustrate that a broad array of federal information and assets are at risk. The Department of Veterans Affairs (VA) announced that computer equipment containing personally identifiable information on approximately 26.5 million veterans and active duty members of the military was stolen from the home of a VA employee. Until the equipment was recovered, veterans did not know whether their information was likely to be misused. In June, VA sent notices to the affected individuals that explained the breach and offered advice on steps to take to reduce the risk of identity theft. The equipment was eventually recovered, and forensic analysts concluded that it was unlikely that the personal information contained therein was compromised. A Centers for Medicare and Medicaid Services contractor reported the theft of a contractor employee’s laptop computer from his office. The computer contained personal information including names, telephone numbers, medical record numbers, and dates of birth of 49,572 Medicare beneficiaries. The Department of Agriculture (USDA) was notified that it had posted personal information on a Web site. Analysis by USDA later determined that the posting had affected approximately 38,700 individuals, who had been awarded funds through the Farm Service Agency or USDA Rural Development program. That same day, all identification numbers associated with USDA funding were removed from the Web site. USDA is continuing its effort to identify and contact all persons who may have been affected. 
A contractor for USDA’s Farm Services Agency inadvertently released informational compact discs that contained Social Security numbers and tax identification data on approximately 350,000 tobacco producers/contract holders under the agency’s Tobacco Transition Payment Program. The Transportation Security Administration (TSA) announced a data security incident involving approximately 100,000 archived employment records of individuals employed by the agency from January 2002 until August 2005. An external hard drive containing personnel data, such as Social Security number, date of birth, payroll information, and bank account and routing information, was discovered missing from a controlled area at the TSA Headquarters Office of Human Capital. The Census Bureau reported 672 missing laptops, of which 246 contained some degree of personal data. Of the missing laptops containing personal information, almost half (104) were stolen, often from employees’ vehicles, and another 113 were not returned by former employees; the department did not report on the disposition of the remaining 29. Commerce reported that employees were not held accountable for failing to return their laptops. Officials at the Department of Commerce’s Bureau of Industry and Security discovered a security breach in July 2006. In investigating this incident, officials were able to review firewall logs for an 8-month period prior to the initial detection of the incident, but they were unable to clearly define the amount of time that perpetrators were inside the department’s computers or find any evidence to show that data was lost as a result. The Department of Defense (Navy) Marine Corps reported the loss of a thumb drive containing personally identifiable information—names, Social Security numbers, and other information—of 207,570 enlisted Marines serving on active duty from 2001 through 2005. The information was being used for a research project on retention of service personnel. 
Navy officials considered the risk from the breach to be greatly diminished since the thumb drive was lost on a government installation and the drive’s data were readable only through software that was password protected and considered in limited distribution. The Treasury Inspector General for Tax Administration reported that approximately 490 computers at the Internal Revenue Service (IRS) were lost or stolen between January 2003 and June 2006. Additionally, 111 incidents occurred within IRS facilities, suggesting that employees were not storing their laptop computers in a secured area while they were away from the office. The IG concluded that it was very likely that a large number of the lost or stolen computers contained unencrypted data and also found other computer devices, such as flash drives, CDs, and DVDs, on which sensitive data were not always encrypted. The Department of State experienced a security breach on its unclassified network, which processes about 750,000 e-mails and instant messages daily from more than 40,000 employees and contractors at 100 domestic and 260 overseas locations. The breach involved an e-mail containing what was thought to be an innocuous attachment. However, the e-mail contained code to exploit vulnerabilities in a well-known application for which no security patch existed at that time. Because the vendor was unable to expedite testing and deploy a new patch, the department developed its own temporary fix to protect systems from being exploited further. In addition, the department sanitized the infected computers and servers, rebuilt them, changed passwords, installed critical patches, and updated its antivirus software. Based on the experience of VA and other federal agencies in responding to data breaches, we identified numerous lessons learned regarding how and when to notify government officials, affected individuals, and the public. 
As discussed later in this report, OMB has issued guidance that largely addresses these lessons. As illustrated by recent security incidents, significant weaknesses continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of federal agencies. In their fiscal year 2006 financial statement audit reports, 21 of 24 major agencies indicated that deficient information security controls were either a reportable condition or a material weakness (see fig. 2). Our audits continue to identify similar weaknesses in nonfinancial systems. Similarly, in their annual reporting under 31 U.S.C. § 3512 (commonly referred to as the Federal Managers’ Financial Integrity Act of 1982), 17 of 24 agencies reported shortcomings in information security, including 7 that considered it a material weakness. IGs have also noted the seriousness of information security, with 21 of 24 including it as a “major management challenge.” According to our reports and those of IGs, persistent weaknesses appear in the five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide information security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. Most agencies continue to have weaknesses in each of these categories, as shown in figure 3. 
In our prior reports, we have made hundreds of specific recommendations to the agencies to mitigate the weaknesses identified. Similarly, the IGs have issued specific recommendations as part of their information security review work. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion of the data. Organizations accomplish this task by designing and implementing controls that are intended to prevent, limit, and detect access to computing resources (computers, networks, programs, and data), thereby protecting these resources from unauthorized use, modification, loss, and disclosure. Access controls can be both electronic and physical. Electronic access controls include those related to user identification and authentication, authorization, boundary protection, cryptography, and audit and monitoring. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which they are housed and enforcing usage restrictions and implementation guidance for portable and mobile devices. Twenty-two major agencies had access control weaknesses. Analysis of our recent reports has identified that the majority of information security control weaknesses pertained to access controls (see fig. 4). For example, agencies did not consistently (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (3) establish sufficient boundary protection mechanisms; (4) apply encryption to protect sensitive data on networks and portable devices; and (5) log, audit, and monitor security-relevant events. 
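Item (1), identification and authentication, is often the easiest of these controls to check automatically. The sketch below illustrates such a password-policy scan; the policy values and the list of vendor-default passwords are hypothetical examples, not drawn from any agency's actual configuration.

```python
# Minimal sketch of an automated password-policy check, illustrating the
# identification-and-authentication control category described in the report.
# Policy values and the known-defaults list are hypothetical.

KNOWN_VENDOR_DEFAULTS = {"admin", "password", "changeme"}
MIN_LENGTH = 12  # a zero here would reproduce the weakness auditors found

def password_findings(password: str) -> list[str]:
    """Return a list of policy violations for a candidate password."""
    findings = []
    if len(password) < MIN_LENGTH:
        findings.append("shorter than minimum length")
    if password.lower() in KNOWN_VENDOR_DEFAULTS:
        findings.append("matches a known vendor-default password")
    if password.isalpha() or password.isdigit():
        findings.append("uses only one character class")
    return findings

if __name__ == "__main__":
    for pw in ["admin", "Tr0ub4dor&3-horse-staple"]:
        print(pw, "->", password_findings(pw) or "ok")
```

A scan like this catches only configuration-level weaknesses; it does not substitute for the broader controls, such as account lockout and monitoring, that the report also describes.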
Agencies also lacked effective controls to restrict physical access to information assets. A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system also must establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. Several agencies have not adequately controlled user accounts and passwords to ensure that only authorized individuals are granted access to their systems and data. For example, several agencies did not always implement strong passwords—using vendor-default or easy-to-guess passwords, or having the minimum password length set to zero, which allows users to create short passwords that tend to be easier to guess or crack than longer ones. One agency’s staff shared logon accounts and passwords when accessing a database production server for the procurement system. By allowing users to share accounts and passwords, individual accountability for both authorized and unauthorized system activity could be lost. Without appropriate controls over identification and authentication, agencies are at increased risk of unauthorized access. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. A key component of granting or denying access rights is the concept of “least privilege.” Least privilege is a basic principle for securing computer resources and information. This principle means that users are granted only those access rights and permissions that they need to perform their official duties. 
To restrict legitimate users’ access to only those programs and files that they need to do their work, organizations establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally granting users access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. Several agencies continued to imprudently grant rights and permissions that allowed more access than users needed to perform their jobs. For example, one agency had granted users of a database system the access rights to create or change sensitive system files—even though they did not have a legitimate business need for this access. Further, the permissions for sensitive system files also inappropriately allowed all users to read, update, or execute them. These types of excessive privileges provide opportunities for individuals to circumvent security controls. In another instance, each user on one organization’s network was permitted to have access to sensitive Privacy Act-protected information, including names, addresses, and Social Security numbers of individuals. Once a Social Security number is obtained fraudulently, it can then be used to create a false identity for financial misuse, to assume another individual’s identity, or to obtain credit fraudulently. As a result, there is increased risk that sensitive data and personally identifiable information may be compromised. Boundary protection involves establishing a logical or physical boundary around a set of information resources and implementing measures to prevent unauthorized information exchange across the boundary in either direction. 
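Excessive grants of the kind described above can be surfaced by comparing the access actually configured against a documented need-to-know list. A minimal sketch follows; the file names, group names, and access matrix are hypothetical illustrations, not agency data.

```python
# Minimal sketch of a least-privilege audit: flag any principal granted
# access to a file without a documented need. All names are hypothetical.

NEEDS = {                       # who legitimately needs which file
    "sysfile.cfg": {"dba_group"},
    "payroll.dat": {"payroll_group"},
}

GRANTS = {                      # what access is actually configured
    "sysfile.cfg": {"dba_group", "all_users"},   # excessive, as in the report
    "payroll.dat": {"payroll_group"},
}

def excessive_grants(needs, grants):
    """Return {file: principals granted access without a documented need}."""
    return {f: grants[f] - needs.get(f, set())
            for f in grants if grants[f] - needs.get(f, set())}

if __name__ == "__main__":
    print(excessive_grants(NEEDS, GRANTS))   # flags "all_users" on sysfile.cfg
```

Running such a comparison periodically is one way to detect the drift between documented business need and configured permissions that the findings above describe.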
Organizations physically allocate publicly accessible information system components to separate subnetworks with separate physical network interfaces, and they prevent public access into their internal networks. Unnecessary connectivity to an organization’s network increases not only the number of access paths that must be managed and the complexity of the task, but also the risk of unauthorized access in a shared environment. Several agencies continue to demonstrate vulnerabilities in establishing required boundary protection mechanisms. For example, one agency did not configure a remote access application properly, which permitted simultaneous access to the Internet and the internal network. This could allow an attacker who compromised a remote user’s computer to remotely control the user’s secure session from the Internet. Another agency failed to ensure that its contractor adequately implemented controls used to protect its external and key internal boundaries. Specifically, certain network devices did not adequately restrict external communication traffic. As a result, an unauthorized individual could exploit these vulnerabilities to launch attacks against other sensitive network devices. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. The National Security Agency also recommends disabling protocols that do not encrypt information transmitted across the network, such as user identification and password combinations. Many agencies did not encrypt certain information traversing their networks, but instead used clear text protocols that make network traffic susceptible to eavesdropping. 
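The plaintext-to-ciphertext transformation just described can be made concrete with a deliberately simple repeating-key XOR. This is a teaching toy only, chosen to make the mechanics visible; it is not a secure algorithm, and production systems rely on vetted ciphers such as AES, typically applied through protocols like TLS rather than the clear-text protocols the findings fault.

```python
# Toy illustration of encrypt(plaintext, key) -> ciphertext. NOT secure:
# a repeating-key XOR is trivially broken; it only shows how a key and a
# mathematical process render data unreadable in transit.
from itertools import cycle

def xor_transform(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying it twice restores data."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

if __name__ == "__main__":
    secret = b"user=alice password=hunter2"
    key = b"k3y"
    ciphertext = xor_transform(secret, key)
    assert b"password" not in ciphertext          # no longer readable on the wire
    assert xor_transform(ciphertext, key) == secret  # the same key recovers it
```

The point of the sketch is the contrast with the clear-text protocols cited above: without even this minimal transformation, credentials cross the network exactly as typed.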
For example, at one agency’s field site, all information, including user identification and password information, was being sent across the network in clear text. At another agency, the contractor did not consistently apply encryption to protect network configuration data stored on network devices. These weaknesses could allow an attacker or malicious user to view information and use that knowledge to obtain sensitive financial and system data being transmitted over the network. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail, or logs of system activity, that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. Agencies did not sufficiently log and monitor key security- and audit-related events. For instance, agencies did not prepare key security reports such as failed login attempt reports. In other cases, logging was either disabled or configured to overwrite old records, or procedures for classifying and investigating security-related events had not been documented. As a result, unauthorized access could go undetected, and the ability to trace or recreate events in the event of a system modification or disruption could be diminished. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. 
These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed and by periodically reviewing the access granted, in order to ensure that access continues to be appropriate. Examples of physical security controls include perimeter fencing, surveillance cameras, security guards, and locks. Several agencies also lacked effective physical security controls. Consequently, critical information held by the federal government, such as Social Security numbers or other personal data, can be at acute risk of unnecessary or unauthorized access by individuals intent on perpetrating identity theft and committing financial crimes. For example, one agency granted over 400 individuals unrestricted access to an entire data center— including a sensitive area within the data center—although their job functions did not require them to have such access. In another case, one agency did not adequately protect the entrances to its facilities, as visitor screening procedures were inconsistently implemented and available tools were not being used properly or to their fullest capability. Many of the data losses that occurred at federal agencies over the past few years, discussed earlier in this report, were a result of physical thefts or improper safeguarding of systems, including laptops and other portable devices. Configuration management controls ensure that only authorized and fully tested software is placed in operation. These controls, which also limit and monitor access to powerful programs and sensitive files associated with computer operations, are important in providing reasonable assurance that access controls are not compromised and that the system will not be impaired. These policies, procedures, and techniques help ensure that all programs and program modifications are properly authorized, tested, and approved. 
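One common way to enforce the objective that only authorized and fully tested software is placed in operation is to compare cryptographic hashes of deployed files against an approved baseline. The sketch below illustrates the idea; the file names, contents, and baseline are hypothetical.

```python
# Minimal sketch of a configuration-management check: compare hashes of
# deployed files against an approved, tested baseline so that unauthorized
# software stands out. All names and contents are hypothetical.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Baseline recorded when the release was authorized, tested, and approved.
APPROVED = {"app.bin": sha256(b"release-1.4-approved")}

def unauthorized(deployed: dict) -> list:
    """Names of deployed files whose hash does not match the approved baseline."""
    return [name for name, content in deployed.items()
            if APPROVED.get(name) != sha256(content)]

if __name__ == "__main__":
    deployed = {"app.bin": b"release-1.4-approved",
                "patch.bin": b"untested-hotfix"}
    print(unauthorized(deployed))   # only the untested file is flagged
```

A check of this kind supports, but does not replace, the authorization, testing, and approval procedures the report describes; it only detects deviations from the approved state after the fact.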
Further, patch management is an important element in mitigating the risks associated with software vulnerabilities. Up-to-date patch installation could help mitigate vulnerabilities associated with flaws in software code that could be exploited to cause significant damage—including the loss of control of entire systems—thereby enabling malicious individuals to read, modify, or delete sensitive information or disrupt operations. At least 20 major agencies demonstrated weaknesses in configuration management controls. For example, many agencies did not consistently configure network devices and services to prevent unauthorized access and ensure system integrity, such as installing critical software patches in a timely manner. As a result, systems and devices were not updated and were left susceptible to denial-of-service attacks or to malicious users exploiting software vulnerabilities. In light of the recent surge in zero-day exploits, it is imperative for agencies to be prepared for the challenge of testing and deploying patches under a very compressed time frame. Additionally, certain agencies did not implement effective controls to ensure that system software changes were properly authorized, documented, tested, and monitored. Instances also existed where agencies did not maintain current documentation of major modifications to systems or significant changes in processing. Inadequate configuration management controls increase the risk that unauthorized programs or changes could be inadvertently or deliberately placed into operation. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and, thereby, conduct unauthorized actions or gain unauthorized access to assets or records. Proper segregation of duties is achieved by dividing responsibilities among two or more individuals or organizational groups. 
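Conflicts of the kind segregation of duties guards against, such as one person both initiating and approving a transaction, can be detected by scanning role assignments for incompatible combinations. A minimal sketch follows; the role names, conflict pairs, and assignments are hypothetical.

```python
# Minimal sketch of a segregation-of-duties check: flag any user whose
# combined roles let one person control incompatible steps of a process.
# Role names and assignments are hypothetical.

CONFLICTS = [{"initiate_payment", "approve_payment"},
             {"security_admin", "system_admin"}]

def sod_violations(assignments: dict) -> dict:
    """Map each offending user to the conflicting role combinations held."""
    out = {}
    for user, roles in assignments.items():
        hits = [" + ".join(sorted(c)) for c in CONFLICTS if c <= roles]
        if hits:
            out[user] = hits
    return out

if __name__ == "__main__":
    staff = {"alice": {"initiate_payment", "approve_payment"},
             "bob": {"initiate_payment"}}
    print(sod_violations(staff))   # only alice holds a conflicting pair
```

Automated scans like this complement, rather than replace, the organizational separation of responsibilities the report describes.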
Dividing duties among individuals or groups diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. At least 13 agencies did not appropriately segregate information technology duties. These agencies generally did not assign employee duties and responsibilities in a manner that segregated incompatible functions among individuals or groups of individuals. For instance, at one agency, users were allowed to both initiate and authorize the same transaction. At another agency, financial management staff members were permitted to perform both security and systems administration duties for the application, potentially allowing these staff members to conduct fraudulent activity without being detected. Without adequate segregation of duties, there is an increased risk that erroneous or fraudulent actions can occur, improper program changes can be implemented, and computer resources can be damaged or destroyed. An organization must take steps to ensure that it is adequately prepared to cope with the loss of operational capabilities due to an act of nature, fire, accident, sabotage, or any other disruption. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested continuity of operations plan. Such a plan should cover all key computer operations and should include planning for business continuity. This plan is essential for helping to ensure that critical information systems, operations, and data, such as financial processing and related records, can be properly restored if a disaster occurs. To ensure that the plan is complete and fully understood by all key staff, it should be tested—including surprise tests—and test plans and results documented to provide a basis for improvement. 
If continuity of operations controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete mission-critical information. Although agencies have reported advances in the number of systems for which contingency plans have been tested, at least 21 agencies still demonstrated shortcomings in their continuity of operations planning. For example, one agency did not have a plan that reflected its current operating environment. Another agency had 17 individual disaster recovery plans covering various segments of the organization, but it did not have an overall document that integrated the 17 separate plans and defined the roles and responsibilities for the disaster recovery teams. In another example, the agency had not established an alternate processing site for a key application, or tested the plan. Until agencies complete actions to address these weaknesses, they are at risk of not being able to appropriately recover in a timely manner from certain service disruptions. An underlying cause for information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented agencywide information security programs. An agencywide security program, required by FISMA, provides a framework and continuing cycle of activity for assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources. 
At least 18 of the 24 major federal agencies had not fully or effectively implemented agencywide information security programs. Results of our recent work illustrate that agencies often did not adequately design or effectively implement policies for elements key to an information security program. We identified weaknesses in information security program activities, such as agencies’ risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial action plans. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the adopted policies and controls in order to help ensure their intended operation. Our evaluations at agencies show that they have not fully implemented risk assessment processes. Furthermore, they did not always effectively evaluate potential risks for the systems we reviewed. For example, one agency had no documented process for conducting risk assessments, while another agency had outdated risk assessments. A third agency had assessed the risk levels for its systems, categorized them on the basis of risk, and maintained current risk assessments that documented residual risk and potential threats and recommended corrective actions for reducing or eliminating the vulnerabilities it identified. However, that agency did not identify many of the vulnerabilities we found and had not subsequently assessed the risks associated with them. As a result of these weaknesses, inadequate or inappropriate security controls may be implemented that do not address the systems’ true risk, and potential risks to these systems may remain unknown. 
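Risk assessments of the kind discussed above are often summarized qualitatively by scoring risk as likelihood times impact, in the general style of NIST SP 800-30, so that the highest-risk systems receive controls first. The sketch below illustrates such a ranking; the system inventory and ratings are hypothetical.

```python
# Minimal sketch of a qualitative risk assessment: score each system as
# likelihood x impact and rank from highest to lowest risk. The inventory
# and its ratings are hypothetical.

SCALE = {"low": 1, "moderate": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    return SCALE[likelihood] * SCALE[impact]

def ranked(systems):
    """Return (name, score) pairs sorted from highest to lowest risk."""
    scored = [(name, risk_score(like, imp)) for name, like, imp in systems]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    inventory = [("payroll", "moderate", "high"),
                 ("public_site", "high", "low"),
                 ("hr_records", "high", "high")]
    print(ranked(inventory))   # hr_records ranks first
```

The value of even this simple exercise is the one the report identifies: it makes residual risk visible so that control selection can be justified against it, rather than leaving potential risks unknown.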
Although agencies have developed and documented policies, standards, and guidelines for information security, they did not always provide specific guidance on how to guard against significant security weaknesses. For example, policies lacked guidance on how to correctly configure certain identifications used by operating systems and the powerful programs used to control processing. We also found weaknesses in policies regarding physical access, Privacy Act-protected data, wireless configurations, and business impact analyses. As a result, agencies have reduced assurance that their systems and the information they contain are sufficiently protected. Instances exist where security plans were incomplete or not up-to-date. For example, one agency had systems security plans that were missing required information, such as rules of behavior and controls for public access. At that same agency, one security plan did not identify its system owner. In another instance, requirements for applications were not integrated into the security plan for the general support system, and the interconnectivity of the current system environment was not completely addressed. As a result, agencies cannot ensure that appropriate controls are in place to protect key systems and critical information. People are one of the weakest links in attempts to secure systems and networks. Therefore, an important component of an information security program is providing required training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. However, we identified instances where agencies did not ensure that all information security employees and contractors, including those who have significant information security responsibilities, received sufficient training. Agencies’ policies and procedures for performing periodic testing and evaluation of information security controls were not always adequate. 
Our report on testing and evaluating security controls revealed that agencies had not adequately designed and effectively implemented policies for testing their security controls in accordance with OMB and NIST guidance. Agencies did not have policies that addressed how to determine the depth and breadth of testing according to risk. Further, agencies did not always address other important elements, such as the definition of roles and responsibilities of personnel performing tests, identification and testing of security controls common to multiple systems, and the frequency of periodic testing. In other cases, agencies had not tested controls for all of their systems. Without appropriate tests and evaluations, agencies have limited assurance that policies and controls are appropriate and working as intended. Additionally, increased risk exists that undetected vulnerabilities could be exploited to allow unauthorized access to sensitive information.

Remedial Action Processes and Plans

Our work uncovered weaknesses in agencies’ remediation processes and plans used to document remedial actions. For example, our report on security controls testing revealed that seven agencies did not have policies to describe a process for incorporating weaknesses identified during periodic security control testing into remedial actions. In our other reviews, agencies indicated that they had corrected or mitigated weaknesses; however, we found that those weaknesses still existed. In addition, we reviewed agencies’ system self-assessments and identified weaknesses not documented in their remedial action plans. These weaknesses pertained to system audit trails, approval and distribution of continuity of operations plans, and documenting emergency procedures. We also found that some deficiencies had not been corrected in a timely manner. 
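Remedial actions are typically tracked in plans of action and milestones, and timeliness lapses like those described can be flagged by a simple scan for items that remain unverified past their scheduled completion date. The sketch below illustrates this; the weaknesses, dates, and verification flags are hypothetical.

```python
# Minimal sketch of tracking remedial actions (a plan of action and
# milestones): flag items past their scheduled completion date that have
# not been verified as corrected. Entries are hypothetical.
from datetime import date

POAM = [
    {"weakness": "audit trail disabled", "due": date(2007, 1, 15), "verified": False},
    {"weakness": "COOP plan not approved", "due": date(2007, 9, 30), "verified": True},
]

def overdue(items, today):
    """Weaknesses still unverified after their scheduled completion date."""
    return [i["weakness"] for i in items if not i["verified"] and i["due"] < today]

if __name__ == "__main__":
    print(overdue(POAM, today=date(2007, 6, 1)))   # flags the unverified item
```

Requiring independent verification before an item can be closed addresses the finding above that agencies reported weaknesses as corrected when they still existed.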
Without a mature process and effective remediation plans, risk increases that vulnerabilities in agencies’ systems will not be mitigated in an effective and timely manner. Until agencies effectively and fully implement agencywide information security programs, federal data and systems will not be adequately safeguarded to prevent disruption, unauthorized use, disclosure, and modification. Further, until agencies implement our recommendations to correct specific information security control weaknesses, they remain at increased risk of attack or compromise. Recent reports by GAO and IGs show that while agencies have made some progress, persistent weaknesses continue to place critical federal operations and assets at risk. In our reports, we have made hundreds of recommendations to agencies to correct specific information security weaknesses. The following examples illustrate the effect of these weaknesses at various agencies and for critical systems. Independent external auditors identified over 130 information technology control weaknesses affecting the Department of Homeland Security’s (DHS) financial systems during the audit of the department’s fiscal year 2006 financial statements. Weaknesses existed in all key general controls and application controls. For example, systems were not certified and accredited in accordance with departmental policy; policies and procedures for incident response were inadequate; background investigations were not properly conducted; and security awareness training did not always comply with departmental requirements. Additionally, users had weak passwords on key servers that process and house DHS financial data, and workstations, servers, and network devices were configured without necessary security patches. 
Further, changes to sensitive operating system settings were not always documented; individuals were able to perform incompatible duties such as changing, testing, and implementing software; and service continuity plans were not consistently or adequately tested. As a result, material errors in DHS' financial data may not be detected in a timely manner. The Department of Health and Human Services (HHS) had not consistently implemented effective electronic access controls designed to prevent, limit, and detect unauthorized access to sensitive financial and medical information at its operating divisions and contractor-owned facilities. Numerous electronic access control vulnerabilities related to network management, user accounts and passwords, user rights and file permissions, and auditing and monitoring of security-related events existed in its computer networks and systems. In addition, weaknesses existed in controls designed to physically secure computer resources, conduct suitable background investigations, segregate duties appropriately, and prevent unauthorized changes to application software. These weaknesses increase the risk that unauthorized individuals can gain access to HHS information systems and inadvertently or deliberately disclose, modify, or destroy the sensitive medical and financial data that the department relies on to deliver its services. The Securities and Exchange Commission had made important progress addressing previously reported information security control weaknesses. However, we identified 15 new information security weaknesses pertaining to access controls and configuration management, in addition to 13 previously identified weaknesses that remained unresolved. For example, the Securities and Exchange Commission did not have current documentation on the privileges granted to users of a major application, did not securely configure certain system settings, and did not consistently install all patches to its systems.
In addition, the commission did not sufficiently test and evaluate the effectiveness of controls for a major system as required by its certification and accreditation process. IRS had made limited progress toward correcting previously reported information security weaknesses at two data processing sites. IRS had not consistently implemented effective access controls to prevent, limit, or detect unauthorized access to computing resources from within its internal network. These included controls related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. In addition, IRS faces risks to its financial and sensitive taxpayer information due to weaknesses in configuration management, segregation of duties, media destruction and disposal, and personnel security controls. The Federal Aviation Administration (FAA) had significant weaknesses in controls that are designed to prevent, limit, and detect access to its air traffic control systems. For example, for the systems reviewed, the agency was not adequately managing its networks, system patches, user accounts and passwords, or user privileges, and it was not always logging and auditing security-relevant events. In addition, FAA faces risks to its air traffic control systems due to weaknesses in physical security, background investigations, segregation of duties, and application change controls. As a result, it was at increased risk of unauthorized system access, possibly disrupting aviation operations. While acknowledging these weaknesses, agency officials stated that because portions of their systems are custom built and use older equipment with special-purpose operating systems, proprietary communication interfaces, and custom-built software, the possibilities for unauthorized access are limited.
Nevertheless, the proprietary features of these systems do not protect them from attack by disgruntled current or former employees, who understand these features, or from more sophisticated hackers. The Federal Reserve Board (FRB) had not effectively implemented information system controls to protect sensitive data and computing resources for the distributed-based systems and the supporting network environment relevant to Treasury auctions. Specifically, the FRB did not consistently (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (3) implement adequate boundary protections to limit connectivity to systems that process Bureau of the Public Debt (BPD) business; (4) apply strong encryption technologies to protect sensitive data in storage and on its networks; (5) log, audit, or monitor security-related events; and (6) maintain secure configurations on servers and workstations. As a result, auction information and computing resources for key distributed-based auction systems that the FRB maintains and operates on behalf of BPD are at an increased risk of unauthorized and possibly undetected use, modification, destruction, and disclosure. Furthermore, other FRB applications that share common network resources with the distributed-based systems may face similar risks. Although the Centers for Medicare and Medicaid Services had many information security controls in place that had been designed to safeguard the communication network, key information security controls were either missing or had not always been effectively implemented. For example, the network had control weaknesses in areas such as user identification and authentication, user authorization, system boundary protection, cryptography, and audit and monitoring of security-related events.
Taken collectively, these weaknesses place financial and personally identifiable medical information transmitted on the network at increased risk of unauthorized disclosure and could result in a disruption in service. Certain information security controls over a critical internal Federal Bureau of Investigation (FBI) network reviewed were ineffective in protecting the confidentiality, integrity, and availability of information and information resources. Specifically, FBI did not consistently (1) configure network devices and services to prevent unauthorized insider access and ensure system integrity; (2) identify and authenticate users to prevent unauthorized access; (3) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (4) apply strong encryption techniques to protect sensitive data on its networks; (5) log, audit, or monitor security-related events; (6) protect the physical security of its network; and (7) patch key servers and workstations in a timely manner. Collectively, these weaknesses place sensitive information transmitted on the network at risk of unauthorized disclosure or modification, and could result in a disruption of service, increasing the bureau’s vulnerability to insider threats. Federal agencies continue to report steady progress in implementing key information security requirements. Although agencies reported increases in OMB’s performance metrics, IGs identified various weaknesses in agencies’ implementation of FISMA requirements. Pursuant to its FISMA responsibilities, NIST has continued to issue standards and guidance. Also, agency IGs completed their annual evaluations, although scope and methodologies varied across agencies. Further, OMB expanded its guidance to agencies, with specific emphasis on personally identifiable information and reported to Congress as required. However, opportunities exist to improve reporting. 
For fiscal year 2006 reporting, governmentwide percentages increased for employees and contractors receiving security awareness training and employees with significant security responsibilities receiving specialized training. Percentages also increased for systems that had been tested and evaluated at least annually, systems with tested contingency plans, and systems that had been certified and accredited (see fig. 5). However, IGs at several agencies sometimes disagreed with the information reported by the agency and have identified weaknesses in the processes used to implement these and other security program activities. Federal agencies rely on their employees to protect the confidentiality, integrity, and availability of the information in their systems. It is critical for each system user to understand their security roles and responsibilities and be adequately trained to perform them. FISMA requires agencies to provide security awareness training to inform personnel—including contractors and other users of information systems that support the operations and assets of the agency—of information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. In addition, agencies are required to provide appropriate training on information security to personnel who have significant security responsibilities. OMB requires agencies to report on the following measures: (1) the number and percentage of employees and contractors who receive information security awareness training, (2) the number and percentage of employees who have significant security responsibilities and received specialized training, (3) whether peer-to-peer file sharing is addressed in security awareness training, and (4) the total amount of money spent on all security training for the fiscal year. 
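As a rough illustration of how the training measures above might be tallied, the following sketch computes the first, second, and fourth measures from hypothetical personnel records. The field names and sample figures are assumptions for illustration only, not OMB's actual reporting format.

```python
# Illustrative tally of three of the OMB training measures described in
# the text. Field names and the sample data are hypothetical; they are
# not OMB's actual reporting format.

def training_measures(records, total_spend):
    """Compute awareness-training and specialized-training percentages
    plus average training spend per person from personnel records."""
    total = len(records)
    aware = sum(r["awareness_trained"] for r in records)
    significant = [r for r in records if r["significant_responsibility"]]
    specialized = sum(r["specialized_trained"] for r in significant)
    return {
        "awareness_pct": round(100 * aware / total, 1) if total else 0.0,
        "specialized_pct": (round(100 * specialized / len(significant), 1)
                            if significant else 0.0),
        "spend_per_person": round(total_spend / total, 2) if total else 0.0,
    }

# Hypothetical workforce: 110 people, 100 awareness-trained, 100 with
# significant security responsibilities (86 of whom received specialized
# training), and $2,121 in total training spend.
staff = (
    [{"awareness_trained": True, "significant_responsibility": True,
      "specialized_trained": True}] * 86
    + [{"awareness_trained": True, "significant_responsibility": True,
        "specialized_trained": False}] * 14
    + [{"awareness_trained": False, "significant_responsibility": False,
        "specialized_trained": False}] * 10
)
print(training_measures(staff, total_spend=2121.0))
```

With these invented inputs, the sketch yields roughly the governmentwide figures the text reports (about 91 percent awareness-trained, 86 percent specialized-trained, and $19.28 per person), which is a coincidence of the chosen sample, not derived agency data.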
Agencies reported improvements in the governmentwide percentage of employees and contractors receiving security awareness training. According to agency reporting, more than 90 percent of total employees and contractors governmentwide received security awareness training in fiscal year 2006. This is an increase from our 2005 report, in which approximately 81 percent of employees governmentwide received security awareness training. In addition, all agencies reported that they explained policies regarding peer-to-peer file sharing in security awareness training, ethics training, or other agencywide training, all of which are specifically addressed in OMB guidance. Agencies also reported improvements in the number of employees who had significant security responsibilities and received specialized training. There has been a slight increase in the number of employees who have security responsibilities and received specialized security training since our last report—almost 86 percent of the selected employees had received specialized training in fiscal year 2006, compared with about 82 percent in fiscal year 2005. To achieve the goal of providing appropriate training to all employees, agencies reported spending an average of $19.28 per employee on security training. The amount of money spent by agencies on security training ranged from about $20,000 to more than $38 million. Although agencies have reported improvements in both the number of employees receiving security awareness training and the number of employees who have significant security responsibilities and received specialized training, several agencies exhibit training weaknesses. For example, according to agency IGs, five major agencies reported challenges in ensuring that contractors had received security awareness training. In addition, reports from IGs at two major agencies indicated that security training across components was inconsistent.
Five agencies also noted that weaknesses still exist in ensuring that all employees who have specialized responsibilities receive specialized training, as policies and procedures for this type of training are not always clear. Further, the majority of agency IGs disagreed with their agencies' reporting of individuals who have received security awareness training. Figure 6 shows a comparison between agency and IG reporting of the percentage of employees receiving security awareness training. If not all agency employees and contractors receive security awareness training, agencies risk security breaches resulting from user error or deliberate attack. Periodically evaluating the effectiveness of security policies and controls and acting to address any identified weaknesses are fundamental activities that allow an organization to manage its information security risks proactively, rather than reacting to individual problems ad hoc after a violation has been detected or an audit finding has been reported. Management control testing and evaluation as part of a program review is an additional source of information that can be considered along with controls testing and evaluation in IG and other independent audits to help provide a more complete picture of an agency's security posture. FISMA requires that federal agencies periodically test and evaluate the effectiveness of their information security policies, procedures, and practices as part of implementing an agencywide security program. This testing is to be performed with a frequency depending on risk, but no less than annually, and consists of testing management, operational, and technical controls for every system identified in the agency's required inventory of major information systems. For annual FISMA reporting, OMB requires that agencies report the number of agency and contractor systems for which security controls have been tested.
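The risk-based testing frequency just described, more often for higher-risk systems but never less than annually, can be sketched as follows. The specific intervals per risk level are assumptions for illustration, not values prescribed by FISMA or OMB.

```python
# Sketch of a risk-based testing schedule: higher-risk systems are
# tested more often, and no system goes longer than a year between
# tests. The per-level intervals are illustrative assumptions only.

from datetime import date, timedelta

ASSUMED_INTERVAL_DAYS = {"high": 90, "moderate": 180, "low": 365}

def next_test_due(last_tested, risk_level):
    # Unknown risk levels fall back to the annual ceiling; the min()
    # cap enforces "no less than annually" in all cases.
    days = min(ASSUMED_INTERVAL_DAYS.get(risk_level, 365), 365)
    return last_tested + timedelta(days=days)

print(next_test_due(date(2006, 1, 1), "high"))     # 2006-04-01
print(next_test_due(date(2006, 1, 1), "unknown"))  # 2007-01-01
```

The annual cap is the only element taken directly from the statute's language; everything else in the schedule is a policy choice an agency would make for itself.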
In 2006, federal agencies reported testing and evaluating security controls for 88 percent of their systems, up from 73 percent in 2005, including increases in testing high-risk systems. However, shortcomings exist in agencies’ testing and evaluation of security controls. For example, the number of agencies testing and evaluating 90 percent or more of their systems decreased from 18 in 2005 to 16 in 2006 reporting. IGs also reported that not all systems had been tested and evaluated at least annually, including some high impact systems, and that weaknesses existed in agencies’ monitoring of contractor systems or facilities. As a result, agencies may not have reasonable assurance that controls are implemented correctly, are operating as intended, and are producing the desired outcome with respect to meeting the security requirements of the agency. In addition, agencies may not be fully aware of the security control weaknesses in their systems, thereby leaving the agencies’ information and systems vulnerable to attack or compromise. Continuity of operations planning ensures that agencies will be able to perform essential functions during any emergency or situation that disrupts normal operations. It is important that these plans be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. In addition, testing contingency plans is essential to determining whether the plans will function as intended in an emergency situation. FISMA requires that agencywide information security programs include plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. To show the status of implementing contingency plans testing, OMB requires that agencies report the percentage of systems that have contingency plans that have been tested in accordance with policy and guidance. 
Federal agencies reported that 77 percent of total systems had contingency plans that had been tested, an increase from 61 percent. However, on average, high-risk systems had the smallest percentage of tested contingency plans—only 64 percent of high-risk systems had tested contingency plans. In contrast, agencies had tested contingency plans for 79 percent of moderate-risk systems, 80 percent of low-risk systems, and 70 percent of uncategorized systems. Several agencies had specific weaknesses in developing and testing contingency plans. For example, the IG of a major agency noted that contingency planning had not been completed for certain critical systems. Another major agency IG noted that the agency had weaknesses in three out of four tested contingency plans—the plans were inaccurate, incomplete, or outdated, did not meet department and federal requirements, and were not tested in accordance with department and federal government requirements. Without developing contingency plans and ensuring that they are tested, the agency increases its risk that it will not be able to effectively recover and continue operations when an emergency occurs. A complete and accurate inventory of major information systems is essential for managing information technology resources, including the security of those resources. The total number of agency systems is a key element in OMB’s performance measures, in that agency progress is indicated by the percentage of total systems that meet specific information security requirements such as testing systems annually, certifying and accrediting, and testing contingency plans. Thus, inaccurate or incomplete data on the total number of agency systems affects the percentage of systems shown as meeting the requirements. FISMA requires that agencies develop, maintain, and annually update an inventory of major information systems operated by the agency or under its control. 
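The effect described above, where an inaccurate system count distorts the reported percentages, can be shown with a minimal sketch. All counts below are hypothetical; this is not OMB's methodology, only arithmetic on a ratio.

```python
# Minimal illustration (not OMB's methodology) of how an incomplete
# system inventory skews a performance measure: the same number of
# compliant systems looks better against an undercounted denominator.
# All counts are hypothetical.

def compliance_pct(systems_meeting_requirement, total_systems):
    return round(100 * systems_meeting_requirement / total_systems, 1)

tested = 88
reported_inventory = 100  # total systems the agency reports
actual_inventory = 120    # hypothetical true total

print(compliance_pct(tested, reported_inventory))  # 88.0
print(compliance_pct(tested, actual_inventory))    # 73.3
```

Because every OMB measure discussed in this section is a percentage of total systems, an undercounted inventory inflates all of them at once, which is why IG verification of inventory completeness matters.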
Beginning with 2005 reporting, OMB no longer required agencies to report the status of their inventories, but required them to report the number of major systems and asked IGs to report on the status and accuracy of their agencies’ inventories. IGs reported that 18 agencies had completed approximately 96-100 percent of their inventories, an increase from 13 agencies in 2005. However, the total number of systems in some agencies’ inventories varied widely from 2005 to 2006. In one case, an agency had approximately a 300 percent increase in the number of systems, while another had approximately a 50 percent reduction in the number of its systems. IGs identified problems with agencies’ inventories. For example, IGs at two large agencies reported that their agencies still did not have complete inventories, while another questioned the reliability of its agency’s inventory since that agency relied on its components to report the number of systems and did not validate the numbers. Without complete, accurate inventories, agencies cannot effectively maintain and secure their systems. In addition, the performance measures used to assess agencies’ progress may not accurately reflect the extent to which these security practices have been implemented. As a key element of agencies’ implementation of FISMA requirements, OMB has continued to emphasize its long-standing policy of requiring a management official to formally authorize (or accredit) an information system to process information and accept the risk associated with its operation based on a formal evaluation (or certification) of the system’s security controls. For annual reporting, OMB requires agencies to report the number of systems, including impact levels, authorized for processing after completing certification and accreditation. OMB’s FISMA reporting instructions also requested IGs to assess and report on their agencies’ certification and accreditation process. 
Federal agencies continued to report increases in certification and accreditation compared with fiscal year 2005 reporting. For fiscal year 2006, 88 percent of agencies' systems governmentwide were reported as certified and accredited, as compared with 85 percent in 2005. In addition, 23 agencies reported certifying and accrediting more than 75 percent of their systems, an increase from 21 agencies in 2005. However, the certification and accreditation percentage for uncategorized systems exceeded the percentages for all other impact categories, indicating that agencies may not be focusing their efforts properly. Although agencies reported increases in the overall percentage of systems certified and accredited, results of work by their IGs showed that agencies continue to experience weaknesses in the quality of this metric. As figure 7 depicts, 10 IGs rated their agencies' certification and accreditation process as poor or failing, while in 2005, 7 IGs rated their agencies' process as poor, and none rated it as failing. In at least three instances of agencies reporting certification and accreditation percentages over 90 percent, their IG reported that the process was poor. Moreover, IGs continue to identify specific weaknesses with key documents in the certification and accreditation process, such as risk assessments and security plans not being completed consistent with NIST guidance, or finding those items missing from certification and accreditation packages. In other cases, systems were certified and accredited, but controls or contingency plans were not properly tested. For example, IG reports highlighted weaknesses in security plans such as agencies not using NIST guidance, not identifying controls that were in place, not including minimum controls, and not updating plans to reflect current conditions.
Because of these discrepancies and weaknesses, reported certification and accreditation progress may not be providing an accurate reflection of the actual status of agencies' implementation of this requirement. Furthermore, agencies may not have assurance that accredited systems have controls in place that properly protect those systems. Agencies' information security programs are to include risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; a key aspect of these policies and procedures is minimally acceptable configuration standards. Configuration standards minimize the security risks associated with specific software applications widely used in an agency or across agencies. Because IT products are often intended for a wide variety of audiences, restrictive security controls are usually not enabled by default, leaving many products vulnerable in their default configurations. FISMA requires each agency to have policies and procedures that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. In fiscal year 2004, for the first time, agencies reported on the degree to which they had implemented security configurations for specific operating systems and software applications. For annual FISMA reporting, OMB requires agencies to report whether they have an agencywide security configuration policy; what products, running on agency systems, are covered by that policy; and to what extent the agency has implemented policies for those products. OMB also requested IGs to report this performance for their agencies. Agencies had not always implemented security configuration policies. Twenty-three of the major federal agencies reported that they currently had an agencywide security configuration policy.
Although 21 IGs agreed that their agency had such a policy, they did not agree that implementation was always as high as agencies reported. To illustrate, one agency reported implementing configuration policy for a particular platform 96 to 100 percent of the time, while its IG reported that the agency implemented that policy only 0 to 50 percent of the time. One IG noted that three of the agency's components did not have overall configuration policies and that other components that did have the policies did not take into account applicable platforms. If policies for minimally acceptable configuration requirements are not properly implemented and applied to systems, agencies will not have assurance that products are configured adequately to protect those systems, which could increase their vulnerability and make them easier to compromise. Although strong controls may not block all intrusions and misuse, organizations can reduce the risks associated with such events if they take steps to detect and respond to them before significant damage occurs. Accounting for and analyzing security problems and incidents are also effective ways for an organization to improve its understanding of threats and the potential cost of security incidents, as well as to pinpoint vulnerabilities that need to be addressed so that they are not exploited again. When incidents occur, agencies are to notify the federal information security incident center, the U.S. Computer Emergency Readiness Team (US-CERT). US-CERT uses NIST's definition of an incident (a "violation or imminent threat of violation of computer security policies, acceptable use policies, or standard computer security practices"). The categories defined by NIST and US-CERT are:

Unauthorized access: An individual gains logical or physical access without permission to a federal agency's network, system, application, data, or other resource.

Denial of service: An attack that successfully prevents or impairs the normal authorized functionality of networks, systems, or applications by exhausting resources. This activity includes being the victim of or participating in a denial of service attack.

Malicious code: Successful installation of malicious software (e.g., virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software.

Improper usage: A person violates acceptable computing use policies.

Scans/probes/attempted access: Any activity that seeks to access or identify a federal agency computer, open ports, protocols, services, or any combination of these for later exploit. This activity does not directly result in a compromise or denial of service.

Investigation: Unconfirmed incidents that are potentially malicious, or anomalous activity deemed by the reporting entity to warrant further review.

FISMA requires that agencies' security programs include procedures for detecting, reporting, and responding to security incidents. NIST states that agencies are responsible for determining specific ways to meet these requirements. For FISMA reporting, OMB requires agencies to report the number of incidents for the past fiscal year in addition to the number of incidents the agency reported to US-CERT and the number reported to law enforcement. According to the US-CERT annual report for fiscal year 2006, federal agencies reported a record number of incidents, with a notable increase in incidents reported in the second half of the year. As figure 8 shows, since 2005, the number of incidents reported to US-CERT increased in every category except malicious code. Although agencies reported a record number of incidents, shortcomings exist in agencies' security incident reporting procedures.
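The US-CERT/NIST categories listed above can be represented as a simple enumeration. The keyword-based triage below is purely illustrative; real categorization follows US-CERT reporting guidance, not string matching, and the keyword list is an assumption invented for this sketch.

```python
# The US-CERT/NIST incident categories from the text, as a Python
# enumeration. The keyword triage is a hypothetical illustration of
# routing an event description to a category; it is not US-CERT's rule.

from enum import Enum

class IncidentCategory(Enum):
    UNAUTHORIZED_ACCESS = "unauthorized access"
    DENIAL_OF_SERVICE = "denial of service"
    MALICIOUS_CODE = "malicious code"
    IMPROPER_USAGE = "improper usage"
    SCANS_PROBES_ATTEMPTED_ACCESS = "scans/probes/attempted access"
    INVESTIGATION = "investigation"

# Hypothetical keyword-to-category hints (assumptions, not US-CERT rules).
_KEYWORDS = {
    "worm": IncidentCategory.MALICIOUS_CODE,
    "trojan": IncidentCategory.MALICIOUS_CODE,
    "port scan": IncidentCategory.SCANS_PROBES_ATTEMPTED_ACCESS,
    "resource exhaustion": IncidentCategory.DENIAL_OF_SERVICE,
    "use policy violation": IncidentCategory.IMPROPER_USAGE,
}

def triage(description):
    d = description.lower()
    for keyword, category in _KEYWORDS.items():
        if keyword in d:
            return category
    # Unconfirmed but potentially malicious activity warrants review.
    return IncidentCategory.INVESTIGATION

print(triage("Worm infected a workstation").value)  # malicious code
print(triage("Anomalous outbound traffic").value)   # investigation
```

Note that the fall-through to Investigation mirrors the taxonomy's own design: anything not yet confirmed as one of the other five categories is held for further review.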
The number of incidents reported is likely to be inaccurate because of inconsistencies in reporting at various levels. For example, one agency reported no incidents to US-CERT, although it reported more than 800 unsuccessful incidents internally and to law enforcement authorities. In addition, analysis of reports from three agencies indicated that procedures for reporting incidents locally were not followed—two where procedures for reporting incidents to law enforcement authorities were not followed, and one where procedures for reporting incidents to US-CERT were not followed. Several IGs also noted specific weaknesses in incident procedures such as components not reporting incidents reliably, information being omitted from incident reports, and reporting time requirements not being met. Without properly accounting for and analyzing security problems and incidents, agencies risk losing valuable information needed to prevent future exploits and understand the nature and cost of threats directed at the agency. Developing remedial action plans is key to ensuring that remedial actions are taken to address significant deficiencies and reduce or eliminate known vulnerabilities. These plans should list the weaknesses and show the estimated resource needs and the status of corrective actions. The plans are intended to assist agencies in identifying, assessing, prioritizing, and monitoring the progress of corrective efforts for security weaknesses found in programs and systems. FISMA requires that agency information security programs include a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in information security policies, procedures, and practices. For annual FISMA reporting, OMB requires agencies to report quarterly performance regarding their remediation efforts for all programs and systems where a security weakness has been identified. 
It also requested that IGs assess and report on whether their agency has developed, implemented, and managed an agencywide process for these plans. IGs reported weaknesses in their agency's remediation process. According to IG assessments, 16 of the 24 major agencies did not "almost always" incorporate information security weaknesses for all systems into their remediation plans. They found that vulnerabilities from reviews were not always being included in remedial actions. They also highlighted other weaknesses that included one agency having an unreliable process for prioritizing weaknesses and another using inconsistent criteria for defining weaknesses to include in those plans. Without a sound remediation process, agencies cannot be assured that information security weaknesses are efficiently and effectively corrected. NIST plays a key role under FISMA in providing important standards and guidance. It is required, among other things, to develop and issue minimum information security standards. NIST has issued guidance through its FISMA Implementation Project and has also expanded its work through other security activities. After FISMA was enacted, NIST developed the FISMA Implementation Project to enable it to fulfill its statutory requirements in a timely manner. This project is divided into three phases. Phase I focuses on the development of a suite of required security standards and guidelines as well as other FISMA-related publications necessary to create a robust information security program and effectively manage risk to agency operations and assets. Standards and guidance issued during Phase I included standards for security categorization of federal information and information systems, minimum security requirements for federal information and information systems, and guidance for the recommended security controls for federal information systems.
Phase I is nearly complete, with only one publication—a guide to assessing information security controls—remaining to be finalized. NIST has also developed many other documents to assist information security professionals. For example, NIST issued Special Publication 800-80 to assist agencies in developing and implementing information security metrics. The processes and methodologies described link information security performance to agency performance by leveraging agency-level strategic planning processes. Additionally, in October 2006, NIST published Special Publication 800-100, which provides a broad overview of information security program elements to assist managers in understanding how to establish and implement an information security program. Phase II focuses on the development of a program for accrediting public and private sector organizations to conduct security certification services for federal agencies as part of agencies' certification and accreditation requirements. Organizations that participate in the organizational accreditation program can demonstrate competency in the application of NIST security standards and guidelines. NIST conducted a workshop on Phase II implementation in April 2006. Phase II is scheduled to be completed in 2008. Phase III is the development of a program for validating security tools. The program is to rely on private sector, accredited testing laboratories to conduct evaluations of the security tools. NIST is to provide validation services and laboratory oversight. Implementation of this phase is planned for 2007 and 2008.
In addition to its specific responsibilities to develop standards and guidance, NIST has undertaken other information security activities, including conducting workshops on the credentialing program for security certification services; conducting a presentation on automated security tools; providing a tutorial on security certification and accreditation of federal information systems; developing and maintaining a checklist repository of security configurations for specific IT products; developing, along with other federal agencies, the National Vulnerability Database, which includes a repository of standards-based vulnerability management data as well as the security controls, control enhancements, and supplemental guidance from NIST Special Publication 800-53; and issuing the Computer Security Division's 2006 Annual Report as mandated by FISMA. Through NIST's efforts in standards and guidance development and other activities, agencies have access to additional tools that can be applied to improve their information security programs. Additionally, NIST's activities will provide federal agencies with opportunities to utilize private-sector resources in improving information security. FISMA requires agency IGs to perform an independent evaluation of the information security programs and practices of the agency to determine the effectiveness of such programs and practices. Each evaluation is to include (1) testing of the effectiveness of information security policies, procedures, and practices of a representative subset of the agency's information systems and (2) assessing compliance (based on the results of the testing) with FISMA requirements and related information security policies, procedures, standards, and guidelines. These required evaluations are then submitted by each agency to OMB in the form of a template.
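The National Vulnerability Database described above publishes vulnerability data in machine-readable form, which lets agencies filter entries programmatically when prioritizing remediation. The sketch below uses made-up records and an illustrative severity threshold rather than real NVD data or its actual record format.

```python
# Simplified, hypothetical records in the spirit of the National
# Vulnerability Database: each entry pairs a CVE identifier with a
# numeric severity score. Identifiers, scores, and the threshold
# are illustrative only.
records = [
    {"cve": "CVE-2006-0001", "score": 9.3},
    {"cve": "CVE-2006-0002", "score": 4.0},
    {"cve": "CVE-2006-0003", "score": 7.5},
]

def high_severity(entries, threshold=7.0):
    """Return the CVE ids of entries whose score meets the threshold."""
    return [e["cve"] for e in entries if e["score"] >= threshold]

print(high_severity(records))  # ['CVE-2006-0001', 'CVE-2006-0003']
```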
In addition to the template submission, OMB encourages the IGs to provide any additional narrative in an appendix to the report to the extent it provides meaningful insight into the status of the agency's security or privacy program. Although the IGs conducted annual evaluations, the scope and methodology of the evaluations varied across agencies. For example, according to their FISMA reports, certain IGs reported interviewing officials and reviewing agency documentation, while others indicated conducting tests of implementation plans (e.g., security plans). Multiple IGs indicated in the scope and methodology sections of their reports that their reviews were focused on selected components, whereas others did not make any reference to the breadth of their review. Several reports consisted solely of a summary of relevant information security audits conducted during the fiscal year, while others included additional evaluation that addressed specific FISMA-required elements, such as risk assessments and remedial actions. The percentage of systems reviewed varied: 22 of 24 IGs tested information security program effectiveness on a subset of systems, while two IGs did not review any systems. One IG noted missing Web applications and concluded that the agency's inventory of major systems was only 0 to 50 percent complete, although it noted that, due to time constraints, it was unable to determine whether other items were missing. One IG office noted that although it had evaluated the agency's configuration policy and certain aspects of the policy's implementation, it did not corroborate the use of systems under configuration management. The IG did not independently corroborate whether agency systems ran the software, but instead reflected the agency's response. Some reviews were limited by difficulties in verifying information provided by the agencies.
Specifically, certain IGs stated that they were unable to conduct evaluations of their respective agency's inventory because the information provided to them by the agency at that time was insufficient (i.e., incomplete or unavailable). The lack of a common methodology, or framework, has resulted in disparities in audit scope, methodology, and content. The President's Council on Integrity and Efficiency (PCIE) has recognized the importance of having a framework and in September 2006 developed a tool to assist the IG community with conducting its FISMA evaluations. The framework consists of program and system control areas that map directly to the control areas identified in NIST Special Publication 800-100 and NIST Special Publication 800-53, respectively. According to PCIE members, the framework includes broad recommendations rather than a specific methodology due to the varying levels of resources available to each agency IG. This framework could provide a common approach to completing the required evaluations, and PCIE has encouraged IGs to use it. Although OMB has continued to expand the guidance it provides to agencies to help improve information security, shortcomings exist in its reporting instructions. FISMA specifies that, among other responsibilities, OMB is to develop policies, principles, standards, and guidelines on information security. Each year, OMB provides instructions to federal agencies and their IGs for FISMA annual reporting. OMB's reporting instructions focus on performance measures such as certification and accreditation, testing of security controls, and security training. In its March 2007 report to Congress on fiscal year 2006 FISMA implementation, OMB noted the federal government's modest progress in meeting key performance measures for IT security. In its report, OMB stressed that there are still areas requiring strategic and continued management attention.
OMB identified progress in the following areas: system certification and accreditation, testing of security controls and contingency plans, assigning risk levels to systems, training employees in security, and reporting incidents. OMB indicated that the following areas require continued management attention: the quality of certifications and accreditations, oversight of contractor systems, and the agencywide plan of action and milestones process. The OMB report also discusses a plan of action to improve performance, assist agencies in their information security activities, and promote compliance with statutory and policy requirements. To help agencies protect sensitive data from security incidents, OMB has issued several policy memorandums over the past 13 months. For example, OMB has sent memorandums to agencies to reemphasize their responsibilities under law and policy to (1) appropriately safeguard sensitive and personally identifiable information, (2) train employees on their responsibilities to protect sensitive information, and (3) report security incidents. In May 2007, OMB issued additional detailed guidelines to agencies on safeguarding against and responding to the breach of personally identifiable information, including developing and implementing a risk-based breach notification policy, reviewing and reducing current holdings of personal information, protecting federal information accessed remotely, and developing and implementing a policy outlining the rules of behavior, as well as identifying consequences and potential corrective actions for failure to follow these rules. OMB also issued a memorandum to agencies concerning adherence to specific configuration standards for Windows Vista and XP operating systems.
This memorandum requires agencies that run these operating systems, or plan to upgrade to them, to adopt the standard security configurations (developed through consensus among DHS, NIST, and the Department of Defense) by February 1, 2008. Agencies were also required to provide OMB with their implementation plans for these platforms by May 1, 2007. Periodic reporting of performance measures for FISMA requirements and related analysis provides valuable information on the status and progress of agency efforts to implement effective security management programs; however, opportunities exist to enhance reporting under FISMA and the independent evaluations completed by IGs. In previous reports, we have recommended that OMB improve FISMA reporting by clarifying reporting instructions and requesting that IGs report on the quality of additional performance metrics. In response, OMB has taken steps to enhance its reporting instructions. For example, OMB added questions regarding incident detection and assessments of system inventory. OMB has also recognized the need for assurance of the quality of agency processes. For example, OMB specifically requested that the IGs evaluate the certification and accreditation process. These qualitative assessments allow each IG to rate its agency's certification and accreditation process using the terms "excellent," "good," "satisfactory," "poor," or "failing." Despite these enhancements, the current metrics do not measure how effectively agencies are performing various activities. Current performance measures offer limited assurance of the quality of agency processes that implement key security policies, controls, and practices. For example, agencies are required to test and evaluate the effectiveness of the controls over their systems at least once a year and to report on the number of systems undergoing such tests. However, there is no measure of the quality of agencies' test and evaluation processes.
Similarly, OMB’s reporting instructions do not address the quality of other activities such as risk categorization, security awareness training, or incident reporting. Providing information on the quality of the processes used to implement key control activities would further enhance the usefulness of the annually reported data for management and oversight purposes. Further, OMB reporting guidance and performance measures do not include complete reporting on a key FISMA-related activity. FISMA requires each agency to include policies and procedures in its security program that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. As we previously reported, maintaining up-to-date patches is key to complying with this requirement. As such, we recommended that OMB address patch management in its FISMA reporting instructions. Although OMB addressed patch management in its 2004 FISMA reporting instructions, it no longer requests this information. Our recent reports have identified weaknesses in agencies’ patch management processes, leaving federal information systems exposed to vulnerabilities associated with flaws in software code that could be exploited to cause significant damage—including the loss of control of entire systems—thereby enabling malicious individuals to read, modify, or delete sensitive information or disrupt operations. Without information on agencies’ patch management processes, OMB and the Congress lack information that could demonstrate whether or not agencies are taking appropriate steps for protecting their systems. Persistent governmentwide weaknesses in information security controls threaten the confidentiality, integrity, and availability of the sensitive data maintained by federal agencies. Weaknesses exist predominantly in access controls, including authentication and identification, authorization, cryptography, audit and monitoring, boundary protection, and physical security. 
Weaknesses also exist in configuration management, segregation of duties, and continuity of operations. Until agencies ensure that their information security programs are fully and effectively implemented, there is limited assurance that sensitive data will be adequately protected against unauthorized disclosure or modification or that services will not be interrupted. These weaknesses leave federal agencies vulnerable to external as well as internal threats. Until agencies fully and effectively implement their information security programs, including addressing the hundreds of recommendations that we and IGs have made, federal systems will remain at increased risk of attack or compromise. Despite federal agencies' reported progress and increased activities, weaknesses remain in the processes agencies use to implement FISMA performance measures, such as those related to agency risk management. In addition, NIST, the IGs, and OMB have all made progress toward fulfilling their requirements. However, the metrics specified in current reporting guidance do not measure how effectively agencies are performing various activities, and the guidance does not address a key activity. The absence of this information could result in reporting that does not adequately reflect the status of agency implementation of required information security policies and procedures. Consequently, oversight entities may not be receiving information critical for monitoring agency compliance with FISMA's statutory requirements for an information security program. Because annual reporting is critical to monitoring agencies' implementation of information security requirements, we recommend that the Director of OMB take the following three actions in revising future FISMA reporting guidance: Develop additional performance metrics that measure the effectiveness of FISMA activities.
Request inspectors general to report on the quality of additional agency information security processes, such as system test and evaluation, risk categorization, security awareness training, and incident reporting. Require agencies to report on a key activity—patch management. We received written comments on a draft of this report from the Administrator, Office of E-Government and Information Technology, OMB (see app. II). The Administrator agreed to take our recommendations under advisement when the Office modifies its FISMA reporting instructions. In addition, the Administrator pointed out that the certification and accreditation process provides a systemic approach for determining whether appropriate security controls are in place, functioning properly, and producing the desired outcome. She further noted that OMB’s current instructions for IGs to evaluate the quality of agencies’ certification and accreditation process provide the flexibility for IGs to tailor their evaluations based on documented weaknesses and plans for improvement. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform and to the Office of Management and Budget. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
In accordance with the Federal Information Security Management Act of 2002 (FISMA) requirement that the Comptroller General report periodically to Congress, our objectives were to evaluate (1) the adequacy and effectiveness of agencies' information security policies and practices and (2) federal agency implementation of FISMA requirements. To assess the adequacy and effectiveness of agency information security policies and practices, we analyzed our related reports issued from May 2005 through May 2007. We also reviewed and analyzed the information security work and products of the agency inspectors general. Both our reports and the inspectors general's products generally used the methodology contained in the Federal Information System Controls Audit Manual. Further, we reviewed and analyzed data on information security in federal agencies' performance and accountability reports. To assess implementation of FISMA requirements, we reviewed and analyzed the act (Title III, Pub. L. No. 107-347) and the 24 major federal agencies' chief information officer and IG FISMA reports for fiscal years 2004 to 2006, as well as the performance and accountability reports for those agencies; the Office of Management and Budget's FISMA reporting instructions, mandated annual reports to Congress, and other guidance; and the National Institute of Standards and Technology's standards, guidance, and annual reports. We also held discussions with agency officials and the agency inspectors general to further assess the implementation of FISMA requirements. We did not include systems categorized as national security systems in our review, nor did we review the adequacy or effectiveness of the security policies and practices for those systems. Our work was conducted in Washington, D.C., from February 2007 through June 2007 in accordance with generally accepted government auditing standards.
In addition to the individual named above, Jeffrey Knott (Assistant Director); Eric Costello; Larry Crosland; Nancy Glover; Min Hyun; and Jayne Wilson made key contributions to this report.

Information Security: FBI Needs to Address Weaknesses in Critical Network. GAO-07-368. Washington, D.C.: April 30, 2007.
Information Security: Persistent Weaknesses Highlight Need for Further Improvement. GAO-07-751T. Washington, D.C.: April 19, 2007.
Information Security: Further Efforts Needed to Address Significant Weaknesses at the Internal Revenue Service. GAO-07-364. Washington, D.C.: March 30, 2007.
Information Security: Sustained Progress Needed to Strengthen Controls at the Securities and Exchange Commission. GAO-07-256. Washington, D.C.: March 27, 2007.
Information Security: Veterans Affairs Needs to Address Long-Standing Weaknesses. GAO-07-532T. Washington, D.C.: February 28, 2007.
Information Security: Agencies Need to Develop and Implement Adequate Policies for Periodic Testing. GAO-07-65. Washington, D.C.: October 20, 2006.
Information Security: Coordination of Federal Cyber Security Research and Development. GAO-06-811. Washington, D.C.: September 29, 2006.
Information Security: Federal Deposit Insurance Corporation Needs to Improve Its Program. GAO-06-620. Washington, D.C.: August 31, 2006.
Information Security: Federal Reserve Needs to Address Treasury Auction Systems. GAO-06-659. Washington, D.C.: August 30, 2006.
Information Security: The Centers for Medicare & Medicaid Services Needs to Improve Controls over Key Communication Network. GAO-06-750. Washington, D.C.: August 30, 2006.
Information Security: Leadership Needed to Address Weaknesses and Privacy Issues at Veterans Affairs. GAO-06-897T. Washington, D.C.: June 20, 2006.
Veterans Affairs: Leadership Needed to Address Information Security Weaknesses and Privacy Issues. GAO-06-866T. Washington, D.C.: June 14, 2006.
Information Security: Securities and Exchange Commission Needs to Continue to Improve Its Program. GAO-06-408. Washington, D.C.: March 31, 2006.
Information Assurance: National Partnership Offers Benefits, but Faces Considerable Challenges. GAO-06-392. Washington, D.C.: March 24, 2006.
Information Security: Continued Progress Needed to Strengthen Controls at the Internal Revenue Service. GAO-06-328. Washington, D.C.: March 23, 2006.
Bureau of the Public Debt: Areas for Improvement in Information Security Controls. GAO-06-522R. Washington, D.C.: March 16, 2006.
Information Security: Federal Agencies Show Mixed Progress in Implementing Statutory Requirements. GAO-06-527T. Washington, D.C.: March 16, 2006.
Information Security: Department of Health and Human Services Needs to Fully Implement Its Program. GAO-06-267. Washington, D.C.: February 24, 2006.
Information Security: The Defense Logistics Agency Needs to Fully Implement Its Security Program. GAO-06-31. Washington, D.C.: October 7, 2005.
Information Security: Progress Made, but Federal Aviation Administration Needs to Improve Controls over Air Traffic Control Systems. GAO-05-712. Washington, D.C.: August 26, 2005.
Information Security: Weaknesses Persist at Federal Agencies Despite Progress Made in Implementing Related Statutory Requirements. GAO-05-552. Washington, D.C.: July 15, 2005.
Information Security: Key Considerations Related to Federal Implementation of Radio Frequency Identification Technology. GAO-05-849T. Washington, D.C.: June 22, 2005.
Information Security: Department of Homeland Security Needs to Fully Implement Its Security Program. GAO-05-700. Washington, D.C.: June 17, 2005.
Information Security: Radio Frequency Identification Technology in the Federal Government. GAO-05-551. Washington, D.C.: May 27, 2005.
IRS Modernization: Continued Progress Requires Addressing Resource Management Challenges. GAO-05-707T. Washington, D.C.: May 19, 2005.
For many years, GAO has reported that weaknesses in information security are a widespread problem with potentially devastating consequences--such as intrusions by malicious users, compromised networks, and the theft of personally identifiable information--and has identified information security as a governmentwide high-risk issue. Concerned by reports of significant vulnerabilities in federal computer systems, Congress passed the Federal Information Security Management Act of 2002 (FISMA), which permanently authorized and strengthened the information security program, evaluation, and reporting requirements for federal agencies. As required by FISMA to report periodically to Congress, in this report GAO discusses the adequacy and effectiveness of agencies' information security policies and practices and agencies' implementation of FISMA requirements. To address these objectives, GAO analyzed agency, inspectors general (IG), Office of Management and Budget (OMB), congressional, and GAO reports on information security. Significant weaknesses in information security policies and practices threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of most federal agencies. Recently reported incidents at federal agencies have placed sensitive data at risk, including the theft, loss, or improper disclosure of personally identifiable information on millions of Americans, thereby exposing them to loss of privacy and identity theft. Almost all of the major federal agencies had weaknesses in one or more areas of information security controls. Most agencies did not implement controls to sufficiently prevent, limit, or detect access to computer resources.
In addition, agencies did not always manage the configuration of network devices to prevent unauthorized access and ensure system integrity, such as patching key servers and workstations in a timely manner; assign incompatible duties to different individuals or groups so that one individual does not control all aspects of a process or transaction; or maintain or test continuity of operations plans for key information systems. An underlying cause for these weaknesses is that agencies have not fully implemented their information security programs. As a result, agencies may not have assurance that controls are in place and operating as intended to protect their information resources, thereby leaving them vulnerable to attack or compromise. Nevertheless, federal agencies have continued to report steady progress in implementing certain information security requirements. For fiscal year 2006, agencies generally reported performing various control activities for an increasing percentage of their systems and personnel. However, IGs at several agencies disagreed with the information the agency reported and identified weaknesses in the processes used to implement these activities. Further, although OMB enhanced its reporting instructions to agencies for preparing fiscal year 2006 FISMA reports, the metrics specified in the instructions do not measure how effectively agencies are performing various activities, and there are no requirements to report on a key activity. As a result, reporting may not adequately reflect the status of agency implementation of required information security policies and procedures.